Obligate anaerobes are microorganisms killed by normal atmospheric concentrations of oxygen (20.95% O₂). [ 1 ] [ 2 ] Oxygen tolerance varies between species, with some species capable of surviving in up to 8% oxygen, while others lose viability in environments with an oxygen concentration greater than 0.5%. [ 3 ] The oxygen sensitivity of obligate anaerobes has been attributed to a combination of factors, including oxidative stress and enzyme production. Oxygen can also damage obligate anaerobes in ways not involving oxidative stress. [ citation needed ] Because molecular oxygen contains two unpaired electrons in its highest occupied molecular orbital , it is readily reduced to superoxide (O₂⁻) and hydrogen peroxide (H₂O₂) within cells. [ 1 ] A reaction between these two products results in the formation of a free hydroxyl radical (•OH). [ 4 ] Superoxide, hydrogen peroxide, and hydroxyl radicals belong to a class of compounds known as reactive oxygen species (ROS), highly reactive products that are damaging to microbes, including obligate anaerobes. [ 4 ] Aerobic organisms produce superoxide dismutase and catalase to detoxify these products, but obligate anaerobes produce these enzymes in very small quantities, or not at all. [ 1 ] [ 2 ] [ 3 ] [ 5 ] The variability in oxygen tolerance of obligate anaerobes (<0.5% to 8% O₂) is thought to reflect the quantity of superoxide dismutase and catalase being produced. [ 2 ] [ 3 ] In 1986, Carlioz and Touati performed experiments which support the idea that reactive oxygen species may be toxic to anaerobes. E. coli , a facultative anaerobe, was mutated by a deletion of superoxide dismutase genes. In the presence of oxygen, this mutation resulted in the inability to properly synthesize certain amino acids or use common carbon sources as substrates during metabolism. [ 6 ] In the absence of oxygen, the mutated samples grew normally. [ 6 ] In 2018, Lu et al. found that in Bacteroides thetaiotaomicron , an obligate anaerobe found in the mammalian digestive tract, exposure to oxygen results in increased levels of superoxide, which inactivate important metabolic enzymes. [ 6 ] Dissolved oxygen increases the redox potential of a solution, and a high redox potential inhibits the growth of some obligate anaerobes. [ 3 ] [ 5 ] [ 7 ] For example, methanogens grow at a redox potential lower than −0.3 V. [ 7 ] Sulfide is an essential component of some enzymes, and molecular oxygen oxidizes it to form disulfide , thus inactivating certain enzymes (e.g. nitrogenase ). Organisms may not be able to grow with these essential enzymes deactivated. [ 1 ] [ 5 ] [ 7 ] Growth may also be inhibited due to a lack of reducing equivalents for biosynthesis, because electrons are exhausted in reducing oxygen. [ 7 ] Obligate anaerobes convert nutrients into energy through anaerobic respiration or fermentation . In aerobic respiration, the pyruvate generated from glycolysis is converted to acetyl-CoA , which is then broken down via the TCA cycle and electron transport chain . Anaerobic respiration differs from aerobic respiration in that it uses an electron acceptor other than oxygen in the electron transport chain. Examples of alternative electron acceptors include sulfate , nitrate , iron , manganese , mercury , and carbon monoxide . [ 8 ] Fermentation differs from anaerobic respiration in that the pyruvate generated from glycolysis is broken down without the involvement of an electron transport chain (i.e. there is no oxidative phosphorylation ).
Numerous fermentation pathways exist, such as lactic acid fermentation , mixed acid fermentation , and 2,3-butanediol fermentation , in which organic compounds are reduced to organic acids and alcohols. [ 8 ] [ 4 ] The energy yield of anaerobic respiration and fermentation (i.e. the number of ATP molecules generated) is lower than in aerobic respiration. [ 8 ] This is why facultative anaerobes , which can metabolise energy both aerobically and anaerobically, preferentially metabolise energy aerobically. This is observable when facultative anaerobes are cultured in thioglycolate broth . [ 1 ] Obligate anaerobes are found in oxygen-free environments such as the intestinal tracts of animals, the deep ocean, still waters, landfills, and deep sediments of soil. [ 9 ] Examples of obligately anaerobic bacterial genera include Actinomyces , Bacteroides , Clostridium , Fusobacterium , Peptostreptococcus , Porphyromonas , Prevotella , Propionibacterium , and Veillonella . Clostridium species are endospore -forming bacteria and can survive in atmospheric concentrations of oxygen in this dormant form. The remaining bacteria listed do not form endospores. [ 5 ] Several species of the Mycobacterium , Streptomyces , and Rhodococcus genera are examples of obligate anaerobes found in soil. [ 10 ] Obligate anaerobes are also found in the digestive tracts of humans and other animals, as well as in the first stomach of ruminants . [ 11 ] Examples of obligately anaerobic fungal genera include the rumen fungi Neocallimastix , Piromonas , and Sphaeromonas . [ 12 ]
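As an illustration of the difference in energy yield noted above, textbook figures for the net ATP obtained per molecule of glucose can be compared. These are approximate, commonly cited values and are not drawn from this article's references:

```latex
% Lactic acid fermentation: glycolysis only; NADH is reoxidized by reducing pyruvate to lactate
\mathrm{C_6H_{12}O_6 \;\longrightarrow\; 2\,CH_3CH(OH)COOH}
  \qquad \text{net yield} \approx 2\ \mathrm{ATP}

% Aerobic respiration: glycolysis + TCA cycle + oxidative phosphorylation
\mathrm{C_6H_{12}O_6 + 6\,O_2 \;\longrightarrow\; 6\,CO_2 + 6\,H_2O}
  \qquad \text{net yield} \approx 30\text{--}32\ \mathrm{ATP}
```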
https://en.wikipedia.org/wiki/Obligate_anaerobe
Obligate mutualism is a special case of mutualism in which an ecological interaction between species benefits each participant, and one or all of the species involved are unable to survive without the other. [ 1 ] In some obligate relationships, only one species is dependent on the relationship. For example, a parasite may require a host in order to reproduce and survive, while the host does not depend at all on the parasite. [ 2 ] Figs and fig wasps are an example of a co-obligate relationship, where both species are totally dependent on the relationship. The fig plant is entirely dependent on the fig wasp for pollination, and the fig wasp requires the fig plant for reproductive purposes. [ 3 ] Many insect-fungi relationships are also co-obligate: the insect disperses, and in some cases protects, the fungi, while the fungi provide nutrients for the insects. This interaction allows insects and fungi to, as a group, inhabit previously inhospitable or unreachable environments. [ 4 ] Although obligate relationships need not be limited to two species, they are often discussed as such, with the relationship being made up of a host and a symbiont, though the terms are often assigned arbitrarily. [ 2 ] Obligate mutualistic relationships, where species are entirely dependent on each other for survival, can evolve through different pathways. In some cases, a free-living symbiont may be engulfed by a host organism and subsequently passed down through vertical transmission, resulting in an obligate dependency. [ 2 ] However, it is more common for facultative mutualisms, where the mutualist can exist independently or in association with a host, to act as an intermediary step toward the evolution of obligate or co-obligate mutualism. [ 2 ] In this second case, the evolution of obligate mutualism can be divided into three steps: formation, maintenance, and transformation. [ 4 ] The formation of the facultative mutualism requires that the species involved all benefit from their mutual cooperation. This mutualism, though it is to the benefit of the species involved, is best understood as co-exploitation: facultative mutualism occurs when species' interests align, so that each may reciprocally exploit the other to the benefit of both. [ 5 ] In order for facultative relationships to turn into obligate relationships, the facultative mutualism must be maintained and continued across generations. There are two methods for the relationship to be carried through generations: vertical and horizontal transmission . Vertical transmission involves the passage of symbionts from parent to offspring hosts. Horizontal transmission involves the passage of symbionts between unrelated hosts. [ 5 ] It is proposed that vertical transmission makes for a more stable relationship, because in vertical transmission a host is paired with the same symbiont in every generation, so the host and symbiont have a greater chance of co-adaptation. In vertical transmission, the hosts and symbionts also share a reproductive fate and therefore both suffer from cheating. [ 6 ] A cheater is a mutualist that takes more from the relationship than it gives. An extreme example would be an organism that gains from a relationship without giving anything, such as an insect that feeds on nectar without contributing to pollination. [ 7 ] Cheaters are thought to destabilize mutualistic relationships, both when they arrive as a third exploitative party and when they arise as mutants within pre-existing mutualistic relationships.
[ 8 ] Horizontal transmission, where there can be multiple symbionts, can result in competition between symbionts and exploitation of the host. [ 6 ] There are nevertheless many obligate relationships involving horizontal transmission, and it has also been found that mutualist/exploiter co-existence is not uncommon. [ 7 ] Cheaters often exist alongside mutualistic relationships, and in obligate mutualism the presence of third-party exploiters early in the formation of the relationship may protect the host-symbiont relationship from further exploitation later on. [ 8 ] Once a mutualistic group has reached a point of stability, where both species are benefiting and there is not a destabilizing problem with cheaters, the third stage, transformation, can occur. In this stage, the mutualists lose the ability to survive independently of one another and thus form a new superorganism . In this case, each symbiont has become so specialized within the mutualistic group that it is now fully dependent on the relationship. [ 4 ] Physiological and behavioral changes can evolve as consequences of obligate dependency. In insect-fungi mutualistic groups, for example, fungal spore-carrying organs in insects and the production of increasingly nutrient-rich, asexually reproductive spores in fungi appear as part of the co-obligate relationship. [ 4 ] In the fig and fig wasp co-obligate relationship, female wasps have developed morphological traits, such as elongated heads and easily detachable antennae and wings, that allow them to enter the fig ostiole to lay eggs and collect pollen; likewise, as the fig matures it produces nourishment for the wasp larvae. [ 9 ] Obligate dependency links the evolutionary fate of the organisms involved, and this coupling has the potential to result in both negative and positive consequences. [ 1 ] The coupling can enhance the ability of the organisms to evolve, because natural selection can influence two genomes at once, meaning there are more opportunities for a mutation to positively impact both species. [ 1 ] It also has the potential to negatively affect species evolution by limiting the ability of one species to react to environmental selective pressures , tying the organism with the higher fitness to an organism with now lower fitness; this is called the weakest link hypothesis. [ 1 ] Understanding how obligate dependency affects the evolution of the species involved, as well as being able to properly identify and understand obligate relationships, is important in predicting, and perhaps guarding against, the impacts of climate change on ecological communities. It is not easy to study or identify obligate species or the number of species involved in obligate relationships, as hosts and symbionts lose and gain traits in their relationship, making it hard to determine their taxonomic relationships with other species. [ 5 ] Studying obligate relationships is also difficult, as they do not respond well to experimental interference. [ 1 ]
https://en.wikipedia.org/wiki/Obligate_mutualism
Obligationes , or disputations de obligationibus , were a medieval disputation format common in the 13th and 14th centuries. [ 1 ] Despite the name, they had nothing to do with ethics or morals but rather dealt with logical formalisms; [ 2 ] the name comes from the fact that the participants were "obliged" to follow the rules. [ 3 ] Typically, there were two disputants, one Opponens and one Respondens . At the start of a debate, both disputants would agree on a ‘positum’, usually a false statement. The task of the Respondens was to answer rationally to the questions of the Opponens , assuming the truth of the positum and without contradicting himself. Conversely, the task of the Opponens was to try to force the Respondens into contradictions. [ 1 ] Several styles of Obligationes were distinguished in the medieval literature, with the most widely studied being called " positio " (positing). "Obligational" disputations resemble recent theories of counterfactual reasoning and are believed to precede the modern practice of the academic "thesis defense." Obligationes also resemble a stylized, highly formalized version of Socratic dialogues . They can also be seen as a form of Aristotelian dialectical situation, with an Answerer and a Questioner. [ 4 ] [ 5 ] The format precedes other, more modern dialogical accounts of logic such as Lorenzen games, Hintikka games and game semantics . William of Ockham said of Obligationes : ...consists of this that in the beginning some proposition has to be posited, and then propositions have to be proposed as pleases the opponent, and to these the respondent has to answer by granting or denying or doubting or distinguishing. When these answers are given, the opponent, when it pleases him, has to say: “time is finished”. This is, the time of the obligation is finished. And then it is seen whether the respondent has answered well or not. [ 6 ]
https://en.wikipedia.org/wiki/Obligationes
Oblimersen ( INN , trade name Genasense ; also known as Augmerosen and bcl-2 antisense oligodeoxynucleotide G3139 ) is an antisense oligodeoxyribonucleotide studied as a possible treatment for several types of cancer , including chronic lymphocytic leukemia , B-cell lymphoma , and breast cancer . It may kill cancer cells by blocking the production of Bcl-2 —a protein that makes cancer cells live longer—and by making them more sensitive to chemotherapy. The antisense oligonucleotide drug oblimersen was developed by Genta Incorporated to target Bcl-2. An antisense DNA or RNA strand is non-coding and complementary to the coding strand (which is the template for producing RNA or protein, respectively). An antisense drug is a short sequence of RNA which hybridises with and inactivates mRNA, preventing the protein from being formed. [ citation needed ] It was shown that the proliferation of human lymphoma cells (with the t(14;18) translocation) could be inhibited by antisense RNA targeted at the start codon region of Bcl-2 mRNA . In vitro studies led to the identification of oblimersen, which is complementary to the first 6 codons of Bcl-2 mRNA. [ 1 ] Oblimersen showed successful results in Phase I/II trials for lymphoma, and a large Phase III trial was launched in 2004. [ 2 ] By the first quarter of 2010, the drug had not received FDA approval due to disappointing results in a melanoma trial. Although its safety and efficacy have not been established for any use, Genta Incorporated still [ when? ] claims on its website that studies are currently under way to examine the potential role of oblimersen in a variety of clinical indications. Studies reported in 2023 continue to explore the potential of oblimersen in combination with other therapies. Preclinical models have demonstrated that oblimersen, when combined with vinorelbine, significantly inhibits tumor growth and prolongs survival in non-small-cell lung cancer (NSCLC). These findings suggest that combining oblimersen with other chemotherapeutic agents could enhance its efficacy and potentially overcome previous clinical challenges. [1]
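As a toy illustration of the strand complementarity described above, the snippet below pairs a made-up mRNA fragment with its antisense strand. The sequence is invented for illustration only and is not the oblimersen/G3139 sequence:

```python
# Watson-Crick pairing for RNA: A-U and G-C (an antisense DNA strand would pair A with T instead).
PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def antisense(mrna: str) -> str:
    """Return the antisense RNA strand, written 5'->3' (the reverse complement)."""
    return "".join(PAIR[base] for base in reversed(mrna))

mrna_fragment = "AUGGCGCACGCU"           # hypothetical 5'->3' mRNA fragment
print(antisense(mrna_fragment))          # AGCGUGCGCCAU, which hybridizes with the fragment
```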
https://en.wikipedia.org/wiki/Oblimersen
An oblique shock wave is a shock wave that, unlike a normal shock , is inclined with respect to the direction of incoming air. It occurs when a supersonic flow encounters a corner that effectively turns the flow into itself and compresses it. [ 1 ] The upstream streamlines are uniformly deflected after the shock wave. The most common way to produce an oblique shock wave is to place a wedge into supersonic , compressible flow . Similar to a normal shock wave, the oblique shock wave consists of a very thin region across which nearly discontinuous changes in the thermodynamic properties of a gas occur. While the upstream and downstream flow directions are unchanged across a normal shock, they are different for flow across an oblique shock wave. It is always possible to convert an oblique shock into a normal shock by a Galilean transformation . For a given Mach number , M₁, and corner angle, θ, the oblique shock angle, β, and the downstream Mach number, M₂, can be calculated. Unlike after a normal shock, where M₂ must always be less than 1, after an oblique shock M₂ can be supersonic (weak shock wave) or subsonic (strong shock wave). Weak solutions are often observed in flow geometries open to the atmosphere (such as on the outside of a flight vehicle). Strong solutions may be observed in confined geometries (such as inside a nozzle intake). Strong solutions are required when the flow needs to match a downstream high-pressure condition. Discontinuous changes also occur in the pressure, density and temperature, which all rise downstream of the oblique shock wave. Using the continuity equation and the fact that the tangential velocity component does not change across the shock, trigonometric relations eventually lead to the θ-β-M equation, which gives θ as a function of M₁, β and γ, where γ is the heat capacity ratio . [ 2 ] It is more intuitive to solve for β as a function of M₁ and θ, but this approach is more complicated; the results are often tabulated or obtained through a numerical method . Within the θ-β-M equation, a maximum corner angle, θmax, exists for any upstream Mach number. When θ > θmax, the oblique shock wave is no longer attached to the corner and is replaced by a detached bow shock . A θ-β-M diagram, common in most compressible flow textbooks, shows a series of curves indicating θmax for each Mach number. The θ-β-M relationship produces two β angles for a given θ and M₁, with the larger angle called a strong shock and the smaller called a weak shock. The weak shock is almost always seen experimentally. The rise in pressure, density, and temperature across the oblique shock, and the downstream Mach number M₂ (where θ is the post-shock flow deflection angle), can then be calculated from M₁ and β; standard forms of these relations are sketched at the end of this entry. Oblique shocks are often preferable in engineering applications when compared to normal shocks. This can be attributed to the fact that using one or a combination of oblique shock waves results in more favourable post-shock conditions (a smaller increase in entropy, less stagnation pressure loss, etc.) when compared to utilizing a single normal shock. An example of this technique can be seen in the design of supersonic aircraft engine intakes or supersonic inlets . One type of these inlets is wedge-shaped, compressing the air flow into the combustion chamber while minimizing thermodynamic losses.
Early supersonic aircraft jet engine intakes were designed using compression from a single normal shock, but this approach caps the maximum achievable Mach number at roughly 1.6. Concorde (which first flew in 1969) used variable-geometry wedge-shaped intakes to achieve a maximum speed of Mach 2.2. A similar design was used on the F-14 Tomcat (the F-14D was first delivered in 1994) and achieved a maximum speed of Mach 2.34. Many supersonic aircraft wings are designed around a thin diamond shape. Placing a diamond-shaped object at an angle of attack relative to the supersonic flow streamlines will result in two oblique shocks propagating from the front tip over the top and bottom of the wing, with Prandtl–Meyer expansion fans created at the two corners of the diamond closest to the front tip. When correctly designed, this generates lift. As the Mach number of the upstream flow becomes increasingly hypersonic, the equations for the pressure, density, and temperature after the oblique shock wave reach a mathematical limit . For a perfect atmospheric gas approximation using γ = 1.4, the hypersonic limit for the density ratio is 6. However, hypersonic post-shock dissociation of O₂ and N₂ into O and N lowers γ, allowing for higher density ratios in nature. The limiting forms of the pressure and density ratios, as well as the hypersonic temperature ratio, are included in the sketch below.
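The jump conditions, downstream Mach number, and hypersonic limits referred to in the two passages above were not carried over into this extract. The following is a sketch of their standard textbook forms for a calorically perfect gas with heat capacity ratio γ; these are the commonly quoted relations, not reproduced from this article's source, and should be checked against a compressible-flow reference before use:

```latex
% theta-beta-M relation (theta: deflection angle, beta: shock angle, M1: upstream Mach number)
\tan\theta = 2\cot\beta\;\frac{M_1^{2}\sin^{2}\beta - 1}{M_1^{2}(\gamma + \cos 2\beta) + 2}

% Jump conditions across the oblique shock
\frac{p_2}{p_1} = 1 + \frac{2\gamma}{\gamma+1}\bigl(M_1^{2}\sin^{2}\beta - 1\bigr),\qquad
\frac{\rho_2}{\rho_1} = \frac{(\gamma+1)M_1^{2}\sin^{2}\beta}{(\gamma-1)M_1^{2}\sin^{2}\beta + 2},\qquad
\frac{T_2}{T_1} = \frac{p_2}{p_1}\,\frac{\rho_1}{\rho_2}

% Downstream Mach number (normal-shock relation applied to the normal velocity component)
M_2\sin(\beta-\theta) = \sqrt{\frac{1 + \tfrac{\gamma-1}{2}M_1^{2}\sin^{2}\beta}
                                   {\gamma M_1^{2}\sin^{2}\beta - \tfrac{\gamma-1}{2}}}

% Hypersonic limits as M1 -> infinity at fixed beta
\frac{p_2}{p_1} \to \frac{2\gamma}{\gamma+1}M_1^{2}\sin^{2}\beta,\qquad
\frac{\rho_2}{\rho_1} \to \frac{\gamma+1}{\gamma-1},\qquad
\frac{T_2}{T_1} \to \frac{2\gamma(\gamma-1)}{(\gamma+1)^{2}}M_1^{2}\sin^{2}\beta
```

For γ = 1.4 the density-ratio limit (γ+1)/(γ−1) evaluates to 6, matching the figure quoted above.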
https://en.wikipedia.org/wiki/Oblique_shock
In software engineering , more specifically in distributed computing , observability is the ability to collect data about programs' execution, modules' internal states, and the communication among components. [ 1 ] [ 2 ] To improve observability, software engineers use a wide range of logging and tracing techniques to gather telemetry information, and tools to analyze and use it. Observability is foundational to site reliability engineering , as it is the first step in triaging a service outage. One of the goals of observability is to minimize the amount of prior knowledge needed to debug an issue. The term is borrowed from control theory, where the " observability " of a system measures how well its state can be determined from its outputs. Similarly, software observability measures how well a system's state can be understood from the obtained telemetry (metrics, logs, traces, profiling). The definition of observability varies by vendor; definitions offered include: "a measure of how well you can understand and explain any state your system can get into, no matter how novel or bizarre [...] without needing to ship new code"; "software tools and practices for aggregating, correlating and analyzing a steady stream of performance data from a distributed application along with the hardware and network it runs on"; "observability starts by shipping all your raw data to central service before you begin analysis"; "the ability to measure a system’s current state based on the data it generates, such as logs, metrics, and traces"; "Observability is tooling or a technical solution that allows teams to actively debug their system. Observability is based on exploring properties and patterns not defined in advance."; and "proactively collecting, visualizing, and applying intelligence to all of your metrics, events, logs, and traces—so you can understand the behavior of your complex digital system". The term is frequently referred to by its numeronym o11y (where 11 stands for the number of letters between the first letter and the last letter of the word). This is similar to other computer science abbreviations such as i18n , l10n , and k8s . [ 9 ] Observability and monitoring are sometimes used interchangeably. [ 10 ] As tooling, commercial offerings and practices evolved in complexity, "monitoring" was re-branded as observability in order to differentiate new tools from the old. The terms are commonly contrasted in that systems are monitored using predefined sets of telemetry , [ 7 ] and monitored systems may be observable . [ 11 ] Majors et al. suggest that engineering teams that only have monitoring tools end up relying on expert foreknowledge (seniority), whereas teams that have observability tools rely on exploratory analysis (curiosity). [ 3 ] Observability relies on three main types of telemetry data: metrics, logs and traces. [ 6 ] [ 7 ] [ 12 ] Those are often referred to as "pillars of observability". [ 13 ] A metric is a point-in-time measurement (a scalar ) that represents some system state. Examples of common metrics include CPU load, memory usage, and request latency. Monitoring tools are typically configured to emit alerts when certain metric values exceed set thresholds. Thresholds are set based on knowledge about normal operating conditions and experience. Metrics are typically tagged to facilitate grouping and searchability. Application developers choose what kind of metrics to instrument their software with, before it is released. As a result, when a previously unknown issue is encountered, it is impossible to add new metrics without shipping new code.
Furthermore, their cardinality can quickly make the storage size of telemetry data prohibitively expensive. Since metrics are cardinality-limited, they are often used to represent aggregate values (for example: average page load time, or 5-second average of the request rate). Without external context, it is impossible to correlate between events (such as user requests) and distinct metric values. Logs, or log lines, are generally free-form, unstructured text blobs [ clarification needed ] that are intended to be human readable. Modern logging is structured to enable machine parsability. [ 3 ] As with metrics, an application developer must instrument the application upfront and ship new code if different logging information is required. Logs typically include a timestamp and severity level. An event (such as a user request) may be fragmented across multiple log lines and interweave with logs from concurrent events. A cloud native application is typically made up of distributed services which together fulfill a single request. A distributed trace is an interrelated series of discrete events (also called spans) that track the progression of a single user request. [ 3 ] A trace shows the causal and temporal relationships between the services that interoperate to fulfill a request. Instrumenting an application with traces means sending span information to a tracing backend. The tracing backend correlates the received spans to generate presentable traces. To be able to follow a request as it traverses multiple services, spans are labeled with unique identifiers that enable constructing a parent-child relationship between spans. Span information is typically shared in the HTTP headers of outbound requests. [ 3 ] [ 14 ] [ 15 ] Continuous profiling is another telemetry type used to precisely determine how an application consumes resources. [ 16 ] To be able to observe an application, telemetry about the application's behavior needs to be collected or exported. Instrumentation means generating telemetry alongside the normal operation of the application. [ 3 ] Telemetry is then collected by an independent backend for later analysis. In fast-changing systems, instrumentation itself is often the best possible documentation, since it combines intention (what are the dimensions that an engineer named and decided to collect?) with the real-time, up-to-date information of live status in production. [ 3 ] Instrumentation can be automatic, or custom. Automatic instrumentation offers blanket coverage and immediate value; custom instrumentation brings higher value but requires more intimate involvement with the instrumented application. Instrumentation can be native - done in-code (modifying the code of the instrumented application) - or out-of-code (e.g. sidecar, eBPF ). Verifying new features in production by shipping them together with custom instrumentation is a practice called "observability-driven development". [ 3 ] Metrics, logs and traces are most commonly listed as the pillars of observability. [ 13 ] Majors et al. suggest that the pillars of observability are high cardinality, high-dimensionality, and explorability, arguing that runbooks and dashboards have little value because "modern systems rarely fail in precisely the same way twice." [ 3 ] Self monitoring is a practice where observability stacks monitor each other, in order to reduce the risk of inconspicuous outages. Self monitoring may be put in place in addition to high availability and redundancy to further avoid correlated failures. 
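As an illustration of the trace model described above, the sketch below shows how span context might be generated and propagated between two services via HTTP headers. It is a minimal, hypothetical example: the header name, field layout, and helper functions are assumptions for illustration and do not reflect the API of any particular tracing backend or standard.

```python
import json
import time
import uuid


def new_span(name: str, parent_id: str | None = None, trace_id: str | None = None) -> dict:
    """Create a span record; spans sharing a trace_id form one distributed trace."""
    return {
        "trace_id": trace_id or uuid.uuid4().hex,   # identifies the whole user request
        "span_id": uuid.uuid4().hex[:16],           # identifies this unit of work
        "parent_id": parent_id,                     # links a child span to its caller
        "name": name,
        "start_ns": time.time_ns(),
    }


def outbound_headers(span: dict) -> dict:
    """Attach trace context to an outgoing HTTP request (hypothetical header name)."""
    return {"x-trace-context": f"{span['trace_id']}-{span['span_id']}"}


def span_from_headers(name: str, headers: dict) -> dict:
    """In the downstream service, continue the same trace as a child span."""
    trace_id, parent_id = headers["x-trace-context"].split("-")
    return new_span(name, parent_id=parent_id, trace_id=trace_id)


# Service A handles a user request and calls service B.
root = new_span("GET /checkout")
headers = outbound_headers(root)

# Service B receives the call and records its own work as a child span.
child = span_from_headers("charge-card", headers)

# Both services export their spans; a tracing backend joins them by trace_id
# and reconstructs the parent-child (causal and temporal) relationships.
print(json.dumps([root, child], indent=2))
```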
https://en.wikipedia.org/wiki/Observability_(software)
In physics , an observable is a physical property or physical quantity that can be measured . In classical mechanics , an observable is a real -valued "function" on the set of all possible system states, e.g., position and momentum . In quantum mechanics , an observable is an operator , or gauge , where the property of the quantum state can be determined by some sequence of operations . For example, these operations might involve submitting the system to various electromagnetic fields and eventually reading a value. Physically meaningful observables must also satisfy transformation laws that relate observations performed by different observers in different frames of reference . These transformation laws are automorphisms of the state space , that is, bijective transformations that preserve certain mathematical properties of the space in question. In quantum mechanics , observables manifest as self-adjoint operators on a separable complex Hilbert space representing the quantum state space . [ 1 ] Observables assign values to outcomes of particular measurements , corresponding to the eigenvalue of the operator. If these outcomes represent physically allowable states (i.e. those that belong to the Hilbert space) the eigenvalues are real ; however, the converse is not necessarily true. [ 2 ] [ 3 ] [ 4 ] As a consequence, only certain measurements can determine the value of an observable for some state of a quantum system. In classical mechanics, any measurement can be made to determine the value of an observable. The relation between the state of a quantum system and the value of an observable requires some linear algebra for its description. In the mathematical formulation of quantum mechanics , up to a phase constant , pure states are given by non-zero vectors in a Hilbert space V . Two vectors v and w are considered to specify the same state if and only if w = cv for some non-zero c ∈ ℂ. Observables are given by self-adjoint operators on V . Not every self-adjoint operator corresponds to a physically meaningful observable. [ 5 ] [ 6 ] [ 7 ] [ 8 ] Also, not all physical observables are associated with non-trivial self-adjoint operators. For example, in quantum theory, mass appears as a parameter in the Hamiltonian, not as a non-trivial operator. [ 9 ] In the case of transformation laws in quantum mechanics, the requisite automorphisms are unitary (or antiunitary ) linear transformations of the Hilbert space V . Under Galilean relativity or special relativity , the mathematics of frames of reference is particularly simple, considerably restricting the set of physically meaningful observables. In quantum mechanics, measurement of observables exhibits some seemingly unintuitive properties. Specifically, if a system is in a state described by a vector in a Hilbert space , the measurement process affects the state in a non-deterministic but statistically predictable way. In particular, after a measurement is applied, the state description by a single vector may be destroyed, being replaced by a statistical ensemble . The irreversible nature of measurement operations in quantum physics is sometimes referred to as the measurement problem and is described mathematically by quantum operations .
By the structure of quantum operations, this description is mathematically equivalent to that offered by the relative state interpretation where the original system is regarded as a subsystem of a larger system and the state of the original system is given by the partial trace of the state of the larger system. In quantum mechanics, dynamical variables A such as position, translational (linear) momentum , orbital angular momentum , spin , and total angular momentum are each associated with a self-adjoint operator Â that acts on the state of the quantum system. The eigenvalues of the operator Â correspond to the possible values that the dynamical variable can be observed as having. For example, suppose |ψₐ⟩ is an eigenket ( eigenvector ) of the observable Â, with eigenvalue a, and exists in a Hilbert space . Then Â|ψₐ⟩ = a|ψₐ⟩. This eigenket equation says that if a measurement of the observable Â is made while the system of interest is in the state |ψₐ⟩, then the observed value of that particular measurement must return the eigenvalue a with certainty. However, if the system of interest is in the general state |ϕ⟩ ∈ H (and |ϕ⟩ and |ψₐ⟩ are unit vectors , and the eigenspace of a is one-dimensional), then the eigenvalue a is returned with probability |⟨ψₐ|ϕ⟩|², by the Born rule . A crucial difference between classical quantities and quantum mechanical observables is that some pairs of quantum observables may not be simultaneously measurable, a property referred to as complementarity . This is mathematically expressed by non- commutativity of their corresponding operators, to the effect that the commutator [Â, B̂] := ÂB̂ − B̂Â ≠ 0̂. This inequality expresses a dependence of measurement results on the order in which measurements of observables Â and B̂ are performed. A measurement of Â alters the quantum state in a way that is incompatible with the subsequent measurement of B̂ and vice versa. Observables corresponding to commuting operators are called compatible observables . For example, momentum along say the x and y axes are compatible. Observables corresponding to non-commuting operators are called incompatible observables or complementary variables . For example, the position and momentum along the same axis are incompatible. [ 10 ] : 155 Incompatible observables cannot have a complete set of common eigenfunctions . Note that there can be some simultaneous eigenvectors of Â and B̂, but not enough in number to constitute a complete basis . [ 11 ] [ 12 ]
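As a concrete, minimal illustration of these ideas (not drawn from the article), the spin observables represented by the Pauli matrices show both the eigenvalue/Born-rule behaviour and the non-commutativity discussed above:

```python
import numpy as np

# Pauli matrices: self-adjoint operators representing spin-1/2 observables (in units of hbar/2).
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

# Eigenvalues of sigma_z are the possible measurement outcomes (+1 and -1).
eigvals, eigvecs = np.linalg.eigh(sigma_z)
print("sigma_z outcomes:", eigvals)

# Born rule: for a state |phi>, the probability of outcome a is |<psi_a|phi>|^2.
phi = np.array([1, 1], dtype=complex) / np.sqrt(2)    # an eigenstate of sigma_x
for a, psi_a in zip(eigvals, eigvecs.T):
    prob = abs(np.vdot(psi_a, phi)) ** 2
    print(f"P(sigma_z = {a:+.0f}) = {prob:.2f}")      # 0.50 each

# Incompatible observables: the commutator [sigma_x, sigma_z] is non-zero,
# so sigma_x and sigma_z have no complete common set of eigenvectors.
commutator = sigma_x @ sigma_z - sigma_z @ sigma_x
print("[sigma_x, sigma_z] =\n", commutator)           # equals -2i * sigma_y
```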
https://en.wikipedia.org/wiki/Observable
The observable universe is a spherical region of the universe consisting of all matter that can be observed from Earth ; the electromagnetic radiation from these objects has had time to reach the Solar System and Earth since the beginning of the cosmological expansion . Assuming the universe is isotropic , the distance to the edge of the observable universe is the same in every direction. That is, the observable universe is a spherical region centered on the observer. Every location in the universe has its own observable universe, which may or may not overlap with the one centered on Earth. The word observable in this sense does not refer to the capability of modern technology to detect light or other information from an object, or whether there is anything to be detected. It refers to the physical limit created by the speed of light itself. No signal can travel faster than light, hence there is a maximum distance, called the particle horizon , beyond which nothing can be detected, as the signals could not have reached the observer yet. Sometimes astrophysicists distinguish between the observable universe and the visible universe. The former includes signals since the end of the inflationary epoch , while the latter includes only signals emitted since recombination . [ note 2 ] According to calculations, the current comoving distance to particles from which the cosmic microwave background radiation (CMBR) was emitted, which represents the radius of the visible universe, is about 14.0 billion parsecs (about 45.7 billion light-years). The comoving distance to the edge of the observable universe is about 14.3 billion parsecs (about 46.6 billion light-years), [ 7 ] about 2% larger. The radius of the observable universe is therefore estimated to be about 46.5 billion light-years. [ 8 ] [ 9 ] Using the critical density and the diameter of the observable universe, the total mass of ordinary matter in the universe can be calculated to be about 1.5 × 10⁵³ kg. [ 10 ] In November 2018, astronomers reported that extragalactic background light (EBL) amounted to 4 × 10⁸⁴ photons. [ 11 ] [ 12 ] As the universe's expansion is accelerating, all currently observable objects, outside the local supercluster , will eventually appear to freeze in time, while emitting progressively redder and fainter light. For instance, objects with the current redshift z from 5 to 10 will only be observable up to an age of 4–6 billion years. In addition, light emitted by objects currently situated beyond a certain comoving distance (currently about 19 gigaparsecs (62 Gly)) will never reach Earth. [ 13 ] The universe's size is unknown, and it may be infinite in extent. [ 14 ] Some parts of the universe are too far away for the light emitted since the Big Bang to have had enough time to reach Earth or space-based instruments, and therefore lie outside the observable universe. In the future, light from distant galaxies will have had more time to travel, so one might expect that additional regions will become observable. Regions distant from observers (such as us) are expanding away faster than the speed of light, at rates estimated by Hubble's law . [ note 3 ] The expansion rate appears to be accelerating , which dark energy was proposed to explain.
Assuming dark energy remains constant (an unchanging cosmological constant ) so that the expansion rate of the universe continues to accelerate, there is a "future visibility limit" beyond which objects will never enter the observable universe at any time in the future because light emitted by objects outside that limit could never reach the Earth. Note that, because the Hubble parameter is decreasing with time, there can be cases where a galaxy that is receding from Earth only slightly faster than light emits a signal that eventually reaches Earth. [ 9 ] [ 15 ] This future visibility limit is calculated at a comoving distance of 19 billion parsecs (62 billion light-years), assuming the universe will keep expanding forever, which implies the number of galaxies that can ever be theoretically observed in the infinite future is only larger than the number currently observable by a factor of 2.36 (ignoring redshift effects). [ note 4 ] In principle, more galaxies will become observable in the future; in practice, an increasing number of galaxies will become extremely redshifted due to ongoing expansion, so much so that they will seem to disappear from view and become invisible. [ 16 ] [ 17 ] [ 18 ] A galaxy at a given comoving distance is defined to lie within the "observable universe" if we can receive signals emitted by the galaxy at any age in its history, say, a signal sent from the galaxy only 500 million years after the Big Bang. Because of the universe's expansion, there may be some later age at which a signal sent from the same galaxy can never reach the Earth at any point in the infinite future, so, for example, we might never see what the galaxy looked like 10 billion years after the Big Bang, [ 13 ] even though it remains at the same comoving distance less than that of the observable universe. This can be used to define a type of cosmic event horizon whose distance from the Earth changes over time. For example, the current distance to this horizon is about 16 billion light-years, meaning that a signal from an event happening at present can eventually reach the Earth if the event is less than 16 billion light-years away, but the signal will never reach the Earth if the event is further away. [ 9 ] The space before this cosmic event horizon can be called "reachable universe", that is all galaxies closer than that could be reached if we left for them today, at the speed of light; all galaxies beyond that are unreachable. [ 19 ] [ 20 ] Simple observation will show the future visibility limit (62 billion light-years) is exactly equal to the reachable limit (16 billion light-years) added to the current visibility limit (46 billion light-years). [ 21 ] [ 7 ] Both popular and professional research articles in cosmology often use the term "universe" to mean "observable universe". [ citation needed ] This can be justified on the grounds that we can never know anything by direct observation about any part of the universe that is causally disconnected from the Earth, although many credible theories require a total universe much larger than the observable universe. [ citation needed ] No evidence exists to suggest that the boundary of the observable universe constitutes a boundary on the universe as a whole, nor do any of the mainstream cosmological models propose that the universe has any physical boundary in the first place. 
However, some models propose it could be finite but unbounded, [ note 5 ] like a higher-dimensional analogue of the 2D surface of a sphere that is finite in area but has no edge. It is plausible that the galaxies within the observable universe represent only a minuscule fraction of the galaxies in the universe. According to the theory of cosmic inflation initially introduced by Alan Guth and D. Kazanas , [ 22 ] if it is assumed that inflation began about 10⁻³⁷ seconds after the Big Bang and that the pre-inflation size of the universe was approximately equal to the speed of light times its age, that would suggest that at present the entire universe's size is at least 1.5 × 10³⁴ light-years — this is at least 3 × 10²³ times the radius of the observable universe. [ 23 ] If the universe is finite but unbounded, it is also possible that the universe is smaller than the observable universe. In this case, what we take to be very distant galaxies may actually be duplicate images of nearby galaxies, formed by light that has circumnavigated the universe. It is difficult to test this hypothesis experimentally because different images of a galaxy would show different eras in its history, and consequently might appear quite different. Bielewicz et al. [ 24 ] claim to establish a lower bound of 27.9 gigaparsecs (91 billion light-years) on the diameter of the last scattering surface. This value is based on matching-circle analysis of the WMAP 7-year data. This approach has been disputed. [ 25 ] The comoving distance from Earth to the edge of the observable universe is about 14.26 gigaparsecs (46.5 billion light-years or 4.40 × 10²⁶ m) in any direction. The observable universe is thus a sphere with a diameter of about 28.5 gigaparsecs [ 27 ] (93 billion light-years or 8.8 × 10²⁶ m). [ 28 ] Assuming that space is roughly flat (in the sense of being a Euclidean space ), this size corresponds to a comoving volume of about 1.22 × 10⁴ Gpc³ (4.22 × 10⁵ Gly³ or 3.57 × 10⁸⁰ m³). [ 29 ] These are distances now (in cosmological time ), not distances at the time the light was emitted. For example, the cosmic microwave background radiation that we see right now was emitted at the time of photon decoupling , estimated to have occurred about 380,000 years after the Big Bang, [ 30 ] [ 31 ] which occurred around 13.8 billion years ago. This radiation was emitted by matter that has, in the intervening time, mostly condensed into galaxies, and those galaxies are now calculated to be about 46 billion light-years from Earth. [ 7 ] [ 9 ] To estimate the distance to that matter at the time the light was emitted, we may first note that according to the Friedmann–Lemaître–Robertson–Walker metric , which is used to model the expanding universe, if we receive light with a redshift of z , then the scale factor at the time the light was originally emitted is given by a(t) = 1/(1 + z). [ 32 ] [ 33 ] WMAP nine-year results combined with other measurements give the redshift of photon decoupling as z = 1091.64 ± 0.47, [ 34 ] which implies that the scale factor at the time of photon decoupling would be 1/1092.64. So if the matter that originally emitted the oldest CMBR photons has a present distance of 46 billion light-years, then the distance would have been only about 42 million light-years at the time of decoupling. The light-travel distance to the edge of the observable universe is the age of the universe times the speed of light , 13.8 billion light-years.
This is the distance that a photon emitted shortly after the Big Bang, such as one from the cosmic microwave background , has traveled to reach observers on Earth. Because spacetime is curved, corresponding to the expansion of space , this distance does not correspond to the true distance at any moment in time. [ 35 ] The observable universe contains as many as an estimated 2 trillion galaxies [ 36 ] [ 37 ] [ 38 ] and, overall, as many as an estimated 10²⁴ stars [ 39 ] [ 40 ] – more stars (and, potentially, Earth-like planets) than all the grains of beach sand on planet Earth . [ 41 ] [ 42 ] [ 43 ] Other estimates are in the hundreds of billions rather than trillions. [ 44 ] [ 45 ] [ 46 ] The estimated total number of stars in an inflationary universe (observed and unobserved) is 10¹⁰⁰. [ 47 ] Assuming the mass of ordinary matter is about 1.45 × 10⁵³ kg as discussed above, and assuming all atoms are hydrogen atoms (which are about 74% of all atoms in the Milky Way by mass), the estimated total number of atoms in the observable universe is obtained by dividing the mass of ordinary matter by the mass of a hydrogen atom. The result is approximately 10⁸⁰ hydrogen atoms, also known as the Eddington number . The mass of the observable universe is often quoted as 10⁵³ kg. [ 48 ] In this context, mass refers to ordinary (baryonic) matter and includes the interstellar medium (ISM) and the intergalactic medium (IGM). However, it excludes dark matter and dark energy . This quoted value for the mass of ordinary matter in the universe can be estimated based on critical density. The calculations are for the observable universe only, as the volume of the whole is unknown and may be infinite. Critical density is the energy density for which the universe is flat. [ 49 ] If there is no dark energy, it is also the density for which the expansion of the universe is poised between continued expansion and collapse. [ 50 ] From the Friedmann equations , the critical density ρc is given by ρc = 3H²/(8πG), [ 51 ] where G is the gravitational constant and H = H₀ is the present value of the Hubble constant . The value for H₀, as given by the European Space Agency's Planck Telescope, is H₀ = 67.15 kilometres per second per megaparsec. This gives a critical density of 0.85 × 10⁻²⁶ kg/m³, or about 5 hydrogen atoms per cubic metre. This density includes four significant types of energy/mass: ordinary matter (4.8%), neutrinos (0.1%), cold dark matter (26.8%), and dark energy (68.3%). [ 52 ] Although neutrinos are Standard Model particles, they are listed separately because they are ultra-relativistic and hence behave like radiation rather than like matter. The density of ordinary matter, as measured by Planck, is 4.8% of the total critical density, or 4.08 × 10⁻²⁸ kg/m³. To convert this density to mass we must multiply by volume, a value based on the radius of the "observable universe". Since the universe has been expanding for 13.8 billion years, the comoving distance (radius) is now about 46.6 billion light-years. Thus, volume (4/3 πr³) equals 3.58 × 10⁸⁰ m³, and the mass of ordinary matter equals density (4.08 × 10⁻²⁸ kg/m³) times volume (3.58 × 10⁸⁰ m³), or 1.46 × 10⁵³ kg. Sky surveys and mappings of the various wavelength bands of electromagnetic radiation (in particular 21-cm emission ) have yielded much information on the content and character of the universe 's structure.
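The arithmetic in the two preceding passages can be checked directly. The short sketch below reproduces the quoted mass of ordinary matter from the critical density and the comoving volume, and the roughly 42 million light-year distance at decoupling from the scale factor, using the rounded values given in the text:

```python
import math

# Critical density rho_c = 3 H0^2 / (8 pi G), with H0 = 67.15 km/s/Mpc.
G = 6.674e-11                                  # gravitational constant, m^3 kg^-1 s^-2
H0 = 67.15 * 1000 / 3.0857e22                  # km/s/Mpc converted to 1/s
rho_c = 3 * H0**2 / (8 * math.pi * G)
print(f"critical density  ~ {rho_c:.2e} kg/m^3")                 # ~0.85e-26 kg/m^3

# Ordinary matter is ~4.8% of the critical density; multiply by the comoving volume.
rho_baryon = 0.048 * rho_c                                       # ~4.1e-28 kg/m^3
r = 46.6e9 * 9.461e15                                            # comoving radius in metres (46.6 Gly)
volume = 4 / 3 * math.pi * r**3                                  # ~3.6e80 m^3
print(f"ordinary matter   ~ {rho_baryon * volume:.2e} kg")       # ~1.5e53 kg

# Scale factor at photon decoupling: a = 1/(1 + z) with z ~ 1091.64.
print(f"distance then     ~ {46e9 / 1092.64:.2e} light-years")   # ~4.2e7 ly, i.e. ~42 million
```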
The organization of structure appears to follow a hierarchical model with organization up to the scale of superclusters and filaments . Larger than this (at scales between 30 and 200 megaparsecs), [ 55 ] there seems to be no continued structure, a phenomenon that has been referred to as the End of Greatness . [ 56 ] The shape of the large scale structure can be summarized by the matter power spectrum . The organization of structure arguably begins at the stellar level, though most cosmologists rarely address astrophysics on that scale. Stars are organized into galaxies , which in turn form galaxy groups , galaxy clusters , superclusters , sheets, walls and filaments , which are separated by immense voids , creating a vast foam-like structure [ 58 ] sometimes called the "cosmic web". Prior to 1989, it was commonly assumed that virialized galaxy clusters were the largest structures in existence, and that they were distributed more or less uniformly throughout the universe in every direction. However, since the early 1980s, more and more structures have been discovered. In 1983, Adrian Webster identified the Webster LQG , a large quasar group consisting of 5 quasars. The discovery was the first identification of a large-scale structure, and has expanded the information about the known grouping of matter in the universe. In 1987, Robert Brent Tully identified the Pisces–Cetus Supercluster Complex , the galaxy filament in which the Milky Way resides. It is about 1 billion light-years across. That same year, an unusually large region with a much lower than average distribution of galaxies was discovered, the Giant Void , which measures 1.3 billion light-years across. Based on redshift survey data, in 1989 Margaret Geller and John Huchra discovered the " Great Wall ", [ 59 ] a sheet of galaxies more than 500 million light-years long and 200 million light-years wide, but only 15 million light-years thick. The existence of this structure escaped notice for so long because it requires locating the position of galaxies in three dimensions, which involves combining location information about the galaxies with distance information from redshifts . Two years later, astronomers Roger G. Clowes and Luis E. Campusano discovered the Clowes–Campusano LQG , a large quasar group measuring two billion light-years at its widest point, which was the largest known structure in the universe at the time of its announcement. In April 2003, another large-scale structure was discovered, the Sloan Great Wall . In August 2007, a possible supervoid was detected in the constellation Eridanus . [ 60 ] It coincides with the ' CMB cold spot ', a cold region in the microwave sky that is highly improbable under the currently favored cosmological model. This supervoid could cause the cold spot, but to do so it would have to be improbably big, possibly a billion light-years across, almost as big as the Giant Void mentioned above. Another large-scale structure is the SSA22 Protocluster , a collection of galaxies and enormous gas bubbles that measures about 200 million light-years across. In 2011, a large quasar group was discovered, U1.11 , measuring about 2.5 billion light-years across. On January 11, 2013, another large quasar group, the Huge-LQG , was discovered, which was measured to be four billion light-years across, the largest known structure in the universe at that time. 
[ 61 ] In November 2013, astronomers discovered the Hercules–Corona Borealis Great Wall , [ 62 ] [ 63 ] an even bigger structure twice as large as the former. It was defined by the mapping of gamma-ray bursts . [ 62 ] [ 64 ] In 2021, the American Astronomical Society announced the detection of the Giant Arc , a crescent-shaped string of galaxies spanning 3.3 billion light-years, located 9.2 billion light-years from Earth in the constellation Boötes and identified in observations captured by the Sloan Digital Sky Survey . [ 65 ] The End of Greatness is an observational scale, at roughly 100 Mpc (about 300 million light-years), where the lumpiness seen in the large-scale structure of the universe is homogenized and isotropized in accordance with the cosmological principle . [ 56 ] At this scale, no pseudo-random fractalness is apparent. [ 66 ] The superclusters and filaments seen in smaller surveys are randomized to the extent that the smooth distribution of the universe is visually apparent. It was not until the redshift surveys of the 1990s were completed that this scale could accurately be observed. [ 56 ] Another indicator of large-scale structure is the ' Lyman-alpha forest '. This is a collection of absorption lines that appear in the spectra of light from quasars , which are interpreted as indicating the existence of huge thin sheets of intergalactic (mostly hydrogen ) gas. These sheets appear to collapse into filaments, which can feed galaxies as they grow where filaments either cross or are dense. Early direct evidence for this cosmic web of gas was the 2019 detection, by astronomers from the RIKEN Cluster for Pioneering Research in Japan and Durham University in the U.K., of light from the brightest part of this web, surrounding and illuminated by a cluster of forming galaxies, acting as cosmic flashlights for intercluster medium hydrogen fluorescence via Lyman-alpha emissions. [ 68 ] [ 69 ] In 2021, an international team, headed by Roland Bacon from the Centre de Recherche Astrophysique de Lyon (France), reported the first observation of diffuse extended Lyman-alpha emission from redshift 3.1 to 4.5 that traced several cosmic web filaments on scales of 2.5–4 cMpc (comoving megaparsecs), in filamentary environments outside massive structures typical of web nodes. [ 70 ] Some caution is required in describing structures on a cosmic scale because they are often different from how they appear. Gravitational lensing can make an image appear to originate in a different direction from its real source, when foreground objects curve surrounding spacetime (as predicted by general relativity ) and deflect passing light rays. Rather usefully, strong gravitational lensing can sometimes magnify distant galaxies, making them easier to detect. Weak lensing by the intervening universe in general also subtly changes the observed large-scale structure. The large-scale structure of the universe also looks different if only redshift is used to measure distances to galaxies. For example, galaxies behind a galaxy cluster are attracted to it and fall towards it, and so are blueshifted (compared to how they would be if there were no cluster). On the near side, objects are redshifted. Thus, the environment of the cluster looks somewhat pinched if using redshifts to measure distance.
The opposite effect is observed on galaxies already within a cluster: the galaxies have some random motion around the cluster center, and when these random motions are converted to redshifts, the cluster appears elongated. This creates a " finger of God "—the illusion of a long chain of galaxies pointed at Earth. At the centre of the Hydra–Centaurus Supercluster , a gravitational anomaly called the Great Attractor affects the motion of galaxies over a region hundreds of millions of light-years across. These galaxies are all redshifted , in accordance with Hubble's law . This indicates that they are receding from us and from each other, but the variations in their redshift are sufficient to reveal the existence of a concentration of mass equivalent to tens of thousands of galaxies. The Great Attractor, discovered in 1986, lies at a distance of between 150 million and 250 million light-years in the direction of the Hydra and Centaurus constellations . In its vicinity there is a preponderance of large old galaxies, many of which are colliding with their neighbours, or radiating large amounts of radio waves. In 1987, astronomer R. Brent Tully of the University of Hawaii 's Institute of Astronomy identified what he called the Pisces–Cetus Supercluster Complex , a structure one billion light-years long and 150 million light-years across in which, he claimed, the Local Supercluster is embedded. [ 71 ] The most distant astronomical object identified (as of August of 2024) is a galaxy classified as JADES-GS-z14-0 . [ 72 ] In 2009, a gamma ray burst , GRB 090423 , was found to have a redshift of 8.2, which indicates that the collapsing star that caused it exploded when the universe was only 630 million years old. [ 73 ] The burst happened approximately 13 billion years ago, [ 74 ] so a distance of about 13 billion light-years was widely quoted in the media, or sometimes a more precise figure of 13.035 billion light-years. [ 73 ] This would be the "light travel distance" (see Distance measures (cosmology) ) rather than the " proper distance " used in both Hubble's law and in defining the size of the observable universe. Cosmologist Ned Wright argues against using this measure. [ 75 ] The proper distance for a redshift of 8.2 would be about 9.2 Gpc , [ 76 ] or about 30 billion light-years. The limit of observability in the universe is set by cosmological horizons which limit—based on various physical constraints—the extent to which information can be obtained about various events in the universe. The most famous horizon is the particle horizon which sets a limit on the precise distance that can be seen due to the finite age of the universe . Additional horizons are associated with the possible future extent of observations, larger than the particle horizon owing to the expansion of space , an "optical horizon" at the surface of last scattering , and associated horizons with the surface of last scattering for neutrinos and gravitational waves .
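Two figures quoted earlier in this entry can be sanity-checked with short arithmetic: the factor of 2.36 between the number of galaxies ever observable and the number currently observable is consistent with the cube of the ratio of the corresponding comoving radii (the volume-ratio interpretation is an assumption here), and the proper distance of 9.2 Gpc for the z = 8.2 burst converts to roughly 30 billion light-years:

```python
# Ratio of comoving volumes ~ (radius ratio)^3, using the rounded radii quoted above.
future_limit_gly, current_limit_gly = 62.0, 46.5
print(f"volume ratio  ~ {(future_limit_gly / current_limit_gly) ** 3:.2f}")   # ~2.37

# Gigaparsecs to billions of light-years (1 parsec ~ 3.2616 light-years).
print(f"9.2 Gpc       ~ {9.2 * 3.2616:.1f} billion light-years")              # ~30.0
```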
https://en.wikipedia.org/wiki/Observable_universe
Observational astronomy is a division of astronomy that is concerned with recording data about the observable universe , in contrast with theoretical astronomy , which is mainly concerned with calculating the measurable implications of physical models . It is the practice and study of observing celestial objects with the use of telescopes and other astronomical instruments. As a science , the study of astronomy is somewhat hindered in that direct experiments with the properties of the distant universe are not possible. However, this is partly compensated by the fact that astronomers have a vast number of visible examples of stellar phenomena that can be examined. This allows for observational data to be plotted on graphs, and general trends recorded. Nearby examples of specific phenomena, such as variable stars , can then be used to infer the behavior of more distant representatives. Those distant yardsticks can then be employed to measure other phenomena in that neighborhood, including the distance to a galaxy . Galileo Galilei turned a telescope to the heavens and recorded what he saw. Since that time, observational astronomy has made steady advances with each improvement in telescope technology. A traditional division of observational astronomy is based on the region of the electromagnetic spectrum observed: In addition to using electromagnetic radiation, modern astrophysicists can also make observations using neutrinos , cosmic rays or gravitational waves . Observing a source using multiple methods is known as multi-messenger astronomy . Optical and radio astronomy can be performed with ground-based observatories, because the atmosphere is relatively transparent at the wavelengths being detected. Observatories are usually located at high altitudes so as to minimise the absorption and distortion caused by the Earth's atmosphere. Some wavelengths of infrared light are heavily absorbed by water vapor , so many infrared observatories are located in dry places at high altitude, or in space. The atmosphere is opaque at the wavelengths used by X-ray astronomy, gamma-ray astronomy, UV astronomy and (except for a few wavelength "windows") far infrared astronomy , so observations must be carried out mostly from balloons or space observatories. Powerful gamma rays can, however be detected by the large air showers they produce, and the study of cosmic rays is a rapidly expanding branch of astronomy. For much of the history of observational astronomy, almost all observation was performed in the visual spectrum with optical telescopes . While the Earth's atmosphere is relatively transparent in this portion of the electromagnetic spectrum , most telescope work is still dependent on seeing conditions and air transparency, and is generally restricted to the night time. The seeing conditions depend on the turbulence and thermal variations in the air. Locations that are frequently cloudy or suffer from atmospheric turbulence limit the resolution of observations. Likewise the presence of the full Moon can brighten up the sky with scattered light, hindering observation of faint objects. For observation purposes, the optimal location for an optical telescope is undoubtedly in outer space . There the telescope can make observations without being affected by the atmosphere . However, at present it remains costly to lift telescopes into orbit . 
Thus the next best locations are certain mountain peaks that have a high number of cloudless days and generally possess good atmospheric conditions (with good seeing conditions). The peaks of the islands of Mauna Kea, Hawaii and La Palma possess these properties, as to a lesser extent do inland sites such as Llano de Chajnantor , Paranal , Cerro Tololo and La Silla in Chile . These observatory locations have attracted an assemblage of powerful telescopes, totalling many billion US dollars of investment. The darkness of the night sky is an important factor in optical astronomy. With the size of cities and human populated areas ever expanding, the amount of artificial light at night has also increased. These artificial lights produce a diffuse background illumination that makes observation of faint astronomical features very difficult without special filters. In a few locations such as the state of Arizona and in the United Kingdom , this has led to campaigns for the reduction of light pollution . The use of hoods around street lights not only improves the amount of light directed toward the ground, but also helps reduce the light directed toward the sky. Atmospheric effects ( astronomical seeing ) can severely hinder the resolution of a telescope. Without some means of correcting for the blurring effect of the shifting atmosphere, telescopes larger than about 15–20 cm in aperture can not achieve their theoretical resolution at visible wavelengths. As a result, the primary benefit of using very large telescopes has been the improved light-gathering capability, allowing very faint magnitudes to be observed. However the resolution handicap has begun to be overcome by adaptive optics , speckle imaging and interferometric imaging , as well as the use of space telescopes . Astronomers have a number of observational tools that they can use to make measurements of the heavens. For objects that are relatively close to the Sun and Earth, direct and very precise position measurements can be made against a more distant (and thereby nearly stationary) background. Early observations of this nature were used to develop very precise orbital models of the various planets, and to determine their respective masses and gravitational perturbations . Such measurements led to the discovery of the planets Uranus , Neptune , and (indirectly) Pluto . They also resulted in an erroneous assumption of a fictional planet Vulcan within the orbit of Mercury (but the explanation of the precession of Mercury's orbit by Einstein is considered one of the triumphs of his general relativity theory). In addition to examination of the universe in the optical spectrum, astronomers have increasingly been able to acquire information in other portions of the electromagnetic spectrum. The earliest such non-optical measurements were made of the thermal properties of the Sun . Instruments employed during a solar eclipse could be used to measure the radiation from the corona . With the discovery of radio waves, radio astronomy began to emerge as a new discipline in astronomy. The long wavelengths of radio waves required much larger collecting dishes in order to make images with good resolution, and later led to the development of the multi-dish interferometer for making high-resolution aperture synthesis radio images (or "radio maps"). The development of the microwave horn receiver led to the discovery of the microwave background radiation associated with the Big Bang . 
[ 4 ] Radio astronomy has continued to expand its capabilities, even using radio astronomy satellites to produce interferometers with baselines much larger than the size of the Earth. However, the ever-expanding use of the radio spectrum for other uses is gradually drowning out the faint radio signals from the stars. For this reason, in the future radio astronomy might be performed from shielded locations, such as the far side of the Moon . The last part of the twentieth century saw rapid technological advances in astronomical instrumentation. Optical telescopes were growing ever larger, and employing adaptive optics to partly negate atmospheric blurring. New telescopes were launched into space, and began observing the universe in the infrared , ultraviolet , x-ray , and gamma ray parts of the electromagnetic spectrum, as well as observing cosmic rays . Interferometer arrays produced the first extremely high-resolution images using aperture synthesis at radio, infrared and optical wavelengths. Orbiting instruments such as the Hubble Space Telescope produced rapid advances in astronomical knowledge, acting as the workhorse for visible-light observations of faint objects. New space instruments under development are expected to directly observe planets around other stars, perhaps even some Earth-like worlds. In addition to telescopes, astronomers have begun using other instruments to make observations. Neutrino astronomy is the branch of astronomy that observes astronomical objects with neutrino detectors in special observatories, usually huge underground tanks. Nuclear reactions in stars and supernova explosions produce very large numbers of neutrinos , very few of which may be detected by a neutrino telescope . Neutrino astronomy is motivated by the possibility of observing processes that are inaccessible to optical telescopes , such as the Sun's core . Gravitational wave detectors are being designed that may capture events such as collisions of massive objects such as neutron stars or black holes . [ 5 ] Robotic spacecraft are also being increasingly used to make highly detailed observations of planets within the Solar System , so that the field of planetary science now has significant cross-over with the disciplines of geology and meteorology . The key instrument of nearly all modern observational astronomy is the telescope . This serves the dual purposes of gathering more light so that very faint objects can be observed, and magnifying the image so that small and distant objects can be observed. Optical astronomy requires telescopes that use optical components of great precision. Typical requirements for grinding and polishing a curved mirror, for example, require the surface to be within a fraction of a wavelength of light of a particular conic shape. Many modern "telescopes" actually consist of arrays of telescopes working together to provide higher resolution through aperture synthesis . Large telescopes are housed in domes, both to protect them from the weather and to stabilize the environmental conditions. For example, if the temperature is different from one side of the telescope to the other, the shape of the structure changes, due to thermal expansion pushing optical elements out of position. This can affect the image. For this reason, the domes are usually bright white ( titanium dioxide ) or unpainted metal. Domes are often opened around sunset, long before observing can begin, so that air can circulate and bring the entire telescope to the same temperature as the surroundings. 
To prevent wind buffeting or other vibrations affecting observations, it is standard practice to mount the telescope on a concrete pier whose foundations are entirely separate from those of the surrounding dome and building. To do almost any scientific work requires that telescopes track objects as they wheel across the visible sky. In other words, they must smoothly compensate for the rotation of the Earth. Until the advent of computer-controlled drive mechanisms, the standard solution was some form of equatorial mount, and for small telescopes this is still the norm. However, this is a structurally poor design and becomes more and more cumbersome as the diameter and weight of the telescope increase. The world's largest equatorially mounted telescope is the 200 inch (5.1 m) Hale Telescope, whereas recent 8–10 m telescopes use the structurally better altazimuth mount, and are actually physically smaller than the Hale, despite the larger mirrors. As of 2006, there were design projects underway for gigantic alt-az telescopes: the Thirty Metre Telescope, and the 100 m diameter Overwhelmingly Large Telescope. [ 7 ] Amateur astronomers use such instruments as the Newtonian reflector, the refractor and the increasingly popular Maksutov telescope. Photography has served a critical role in observational astronomy for over a century, but in the last 30 years it has been largely replaced for imaging applications by digital sensors such as CCDs and CMOS chips. Specialist areas of astronomy such as photometry and interferometry have utilised electronic detectors for a much longer period of time. Astrophotography uses specialised photographic film (or usually a glass plate coated with photographic emulsion), but there are a number of drawbacks, particularly a low quantum efficiency, of the order of 3%, whereas CCDs can be tuned for a QE >90% in a narrow band. Almost all modern telescope instruments are electronic arrays, and older telescopes have either been retrofitted with these instruments or closed down. Glass plates are still used in some applications, such as surveying, [ citation needed ] because the resolution possible with a chemical film is much higher than any electronic detector yet constructed. Prior to the invention of photography, all astronomy was done with the naked eye. However, as soon as films became sensitive enough, scientific astronomy moved almost entirely to film, because of its overwhelming advantages over the eye, chief among them the ability to accumulate light in long exposures and to provide a permanent, measurable record. The blink comparator is an instrument that is used to compare two nearly identical photographs made of the same section of sky at different points in time. The comparator alternates illumination of the two plates, and any changes are revealed by blinking points or streaks. This instrument has been used to find asteroids, comets, and variable stars. The position or cross-wire micrometer is an implement that has been used to measure double stars. This consists of a pair of fine, movable lines that can be moved together or apart. The telescope lens is lined up on the pair and oriented using position wires that lie at right angles to the star separation. The movable wires are then adjusted to match the two star positions. The separation of the stars is then read off the instrument, and their true separation determined based on the magnification of the instrument. A vital instrument of observational astronomy is the spectrograph. The absorption of specific wavelengths of light by elements allows specific properties of distant bodies to be observed.
This capability has resulted in the discovery of the element helium in the Sun's emission spectrum, and has allowed astronomers to determine a great deal of information concerning distant stars, galaxies, and other celestial bodies. Doppler shift (particularly "redshift") of spectra can also be used to determine the radial motion or distance with respect to the Earth. Early spectrographs employed banks of prisms that split light into a broad spectrum. Later the grating spectrograph was developed, which reduced the amount of light loss compared to prisms and provided higher spectral resolution. The spectrum can be photographed in a long exposure, allowing the spectrum of faint objects (such as distant galaxies) to be measured. Stellar photometry came into use in 1861 as a means of measuring stellar colors. This technique measured the magnitude of a star at specific frequency ranges, allowing a determination of the overall color, and therefore the temperature, of a star. By 1951, an internationally standardized system of UBV magnitudes (Ultraviolet-Blue-Visual) was adopted. Photoelectric photometry using CCDs is now frequently used to make observations through a telescope. These sensitive instruments can record the image nearly down to the level of individual photons, and can be designed to view in parts of the spectrum that are invisible to the eye. The ability to record the arrival of small numbers of photons over a period of time can allow a degree of computer correction for atmospheric effects, sharpening up the image. Multiple digital images can also be combined to further enhance the image, often known as "stacking". When combined with adaptive optics technology, image quality can approach the theoretical resolution capability of the telescope. Filters are used to view an object at particular frequencies or frequency ranges. Multilayer film filters can provide very precise control of the frequencies transmitted and blocked, so that, for example, objects can be viewed at a particular frequency emitted only by excited hydrogen atoms. Filters can also be used to partially compensate for the effects of light pollution by blocking out unwanted light. Polarization filters can also be used to determine if a source is emitting polarized light, and the orientation of the polarization. Astronomers observe a wide range of astronomical sources, including high-redshift galaxies, AGNs, the afterglow from the Big Bang, and many different types of stars and protostars. A variety of data can be observed for each object. The position coordinates locate the object in the sky using the techniques of spherical astronomy, and the magnitude determines its brightness as seen from the Earth. The relative brightness in different parts of the spectrum yields information about the temperature and physics of the object. Photographs of the spectra allow the chemistry of the object to be examined. Parallax shifts of a star against the background can be used to determine the distance, up to a limit imposed by the resolution of the instrument. The radial velocity of the star and changes in its position over time (proper motion) can be used to measure its velocity relative to the Sun. Variations in the brightness of the star give evidence of instabilities in the star's atmosphere, or else the presence of an occulting companion. The orbits of binary stars can be used to measure the relative masses of each companion, or the total mass of the system.
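As a concrete illustration of the last point, the total mass of a visual binary follows from Kepler's third law once the orbital period and the semi-major axis of the relative orbit are known: in solar units, M1 + M2 = a³/P², with a in astronomical units and P in years. A minimal sketch with hypothetical orbital parameters (not data from any particular system):

```python
def binary_total_mass_solar(a_au: float, period_yr: float) -> float:
    """Kepler's third law in solar units: M1 + M2 [M_sun] = a^3 / P^2 (a in AU, P in years)."""
    return a_au ** 3 / period_yr ** 2

# Hypothetical example values, for illustration only:
a = 23.5        # semi-major axis of the relative orbit, in AU
period = 80.0   # orbital period, in years
print(f"Total system mass: {binary_total_mass_solar(a, period):.2f} solar masses")
```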
Spectroscopic binaries can be found by observing Doppler shifts in the spectrum of the star and its close companion. Stars of identical masses that formed at the same time and under similar conditions typically have nearly identical observed properties. Observing a group of closely associated stars, such as in a globular cluster, allows data to be assembled about the distribution of stellar types. These data can then be used to infer the age of the association. For distant galaxies and AGNs, observations are made of the overall shape and properties of the galaxy, as well as the groupings where they are found. Observations of certain types of variable stars and supernovae of known luminosity, called standard candles, in other galaxies allow the inference of the distance to the host galaxy. The expansion of space causes the spectra of these galaxies to be shifted, depending on the distance, and modified by the Doppler effect of the galaxy's radial velocity. Both the size of the galaxy and its redshift can be used to infer something about the distance of the galaxy. Observations of large numbers of galaxies are referred to as redshift surveys, and are used to model the evolution of galaxy forms.
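One way to make the standard-candle reasoning above concrete: comparing an object's apparent magnitude m with its known absolute magnitude M gives the distance through the distance modulus, m − M = 5 log10(d / 10 pc), and at low redshift the recession velocity then follows from Hubble's law, v ≈ H0·d. The sketch below uses hypothetical numbers; the Hubble constant and the supernova magnitudes are illustrative assumptions, not measured values.

```python
import math

H0 = 70.0  # assumed Hubble constant, km/s/Mpc (illustrative value)

def distance_pc(apparent_mag: float, absolute_mag: float) -> float:
    """Distance from the distance modulus: m - M = 5 * log10(d / 10 pc)."""
    return 10.0 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)

# Hypothetical Type Ia supernova observed in another galaxy:
m, M = 16.3, -19.3              # apparent and (assumed) absolute magnitude
d_mpc = distance_pc(m, M) / 1.0e6
print(f"distance  ~ {d_mpc:.1f} Mpc")
print(f"recession ~ {H0 * d_mpc:.0f} km/s (low-redshift Hubble's law)")
```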
https://en.wikipedia.org/wiki/Observational_astronomy
Observations of the planet Venus began in antiquity, continued with the telescope, and have since been extended by visiting spacecraft. Spacecraft have performed multiple flybys, orbits, and landings on the planet, including balloon probes that floated in its atmosphere. Study of the planet is aided by its proximity to the Earth, but the surface of Venus is obscured by an atmosphere opaque to visible light. Transits of Venus directly between the Earth and the Sun's visible disc are rare astronomical events. The first such transit to be predicted and observed was the 1639 transit of Venus, seen and recorded by English astronomers Jeremiah Horrocks and William Crabtree. [ 1 ] The observation by Mikhail Lomonosov of the transit of 1761 provided the first evidence that Venus had an atmosphere, and the 19th-century observations of parallax during Venus transits allowed the distance between the Earth and Sun to be accurately calculated for the first time. [ 2 ] Transits can only occur either in early June or early December, these being the points at which Venus crosses the ecliptic (the orbital plane of the Earth), and occur in pairs at eight-year intervals, with each such pair more than a century apart. The most recent pair of transits of Venus occurred in 2004 and 2012, while the prior pair occurred in 1874 and 1882. [ 3 ] In the 19th century, many observers stated that Venus had a period of rotation of roughly 24 hours. Italian astronomer Giovanni Schiaparelli was the first to predict a significantly slower rotation, proposing that Venus was tidally locked with the Sun (as he had also proposed for Mercury). [ 4 ] While not actually true for either body, this was still a reasonably accurate estimate. The near-resonance between its rotation and its closest approach to Earth helped to create this impression, as Venus always seemed to be facing the same direction when it was in the best location for observations to be made. The rotation rate of Venus was first measured during the 1961 conjunction, observed by radar from a 26 m antenna at Goldstone, California, the Jodrell Bank Radio Observatory in the UK, and the Soviet deep space facility in Yevpatoria, Crimea. Accuracy was refined at each subsequent conjunction, primarily from measurements made from Goldstone and Yevpatoria. The fact that the rotation was retrograde was not confirmed until 1964. Before radio observations in the 1960s, many believed that Venus contained a lush, Earth-like environment. This was due to the planet's size and orbital radius, which suggested a fairly Earth-like situation, as well as to the thick layer of clouds which prevented the surface from being seen. Among the speculations on Venus were that it had a jungle-like environment or that it had oceans of either petroleum or carbonated water. [ 5 ] However, microwave observations by C. Mayer et al. [ 6 ] indicated a high-temperature source (600 K). Strangely, millimetre-band observations made by A. D. Kuzmin indicated much lower temperatures. [ 7 ] Two competing theories explained the unusual radio spectrum, one suggesting the high temperatures originated in the ionosphere, and another suggesting a hot planetary surface.
In September 2020, a team at Cardiff University announced that observations of Venus using the James Clerk Maxwell Telescope and the Atacama Large Millimeter Array in 2017 and 2019 indicated that the Venusian atmosphere contained phosphine (PH 3 ) in concentrations 10,000 times higher than those that could be ascribed to any known non-biological source on Venus. The phosphine was detected at heights of at least 30 miles (48 kilometres) above the surface of Venus, and was detected primarily at mid-latitudes, with none detected at the poles of Venus. This could have indicated the potential presence of biological organisms on Venus; [ 8 ] [ 9 ] however, this measurement was later shown to be in error. [ 10 ] [ 11 ] After the Moon, Venus was the second object in the Solar System to be explored by radar from the Earth. The first studies were carried out in 1961 at NASA's Goldstone Observatory, part of the Deep Space Network. [ 12 ] At successive inferior conjunctions, Venus was observed both by Goldstone and the National Astronomy and Ionosphere Center in Arecibo. The studies carried out were similar to the earlier measurement of transits of the meridian, which had revealed in 1963 that the rotation of Venus was retrograde (it rotates in the opposite direction to that in which it orbits the Sun). [ 13 ] The radar observations also allowed astronomers to determine that the rotation period of Venus was 243.1 days, [ 14 ] and that its axis of rotation was almost perpendicular to its orbital plane. It was also established that the radius of the planet was 6,052 kilometres (3,761 mi), some 70 kilometres (43 mi) less than the best previous figure obtained with terrestrial telescopes. Interest in the geological characteristics of Venus was stimulated by the refinement of imaging techniques between 1970 and 1985. Early radar observations suggested merely that the surface of Venus was more compacted than the dusty surface of the Moon. The first radar images taken from the Earth showed very bright (radar-reflective) highlands christened Alpha Regio, Beta Regio, and Maxwell Montes; improvements in radar techniques later achieved an image resolution of 1–2 kilometres. [ citation needed ] There have been numerous uncrewed missions to Venus. Ten Soviet Venera probes achieved a soft landing on the surface, with up to 110 minutes of communication from the surface; none of them returned to Earth. [ 15 ] Launch windows occur every 19 months. [ 16 ] On February 12, 1961, the Soviet spacecraft Venera 1 was the first flyby probe launched to another planet. An overheated orientation sensor caused it to malfunction, losing contact with Earth before its closest approach to Venus of 100,000 km. [ 17 ] However, the probe was the first to combine all the necessary features of an interplanetary spacecraft: solar panels, parabolic telemetry antenna, 3-axis stabilization, course-correction engine, and the first launch from parking orbit. [ citation needed ] The first successful flyby Venus probe was the American Mariner 2 spacecraft, which flew past Venus in 1962, coming within 35,000 km. A modified Ranger Moon probe, it established that Venus has practically no intrinsic magnetic field and measured the temperature of the planet's atmosphere to be approximately 500 °C (773 K; 932 °F). [ 18 ] The Soviet Union launched the Venera 2 probe to Venus in 1966, but it malfunctioned sometime after its May 16 telemetry session. The probe completed a flyby of Venus, but failed to transmit any data. [ 17 ]
During another American flyby in 1967, Mariner 5 measured the strength of Venus's magnetic field. In 1974, Mariner 10 swung by Venus on its way to Mercury and took ultraviolet photographs of the clouds, revealing the extraordinarily high wind speeds in the Venusian atmosphere. Mariner 10 provided the best images of Venus taken up to that time; the series of images clearly demonstrated the high speeds of the planet's atmosphere, first seen in the Doppler-effect velocity measurements of Venera 4 through Venera 8. [ 19 ] On March 1, 1966, the Venera 3 Soviet space probe crash-landed on Venus, becoming the first spacecraft to reach the surface of another planet. [ 17 ] The descent capsule of Venera 4 entered the atmosphere of Venus on October 18, 1967, making it the first probe to return direct measurements from another planet's atmosphere. [ 17 ] The capsule measured temperature, pressure, and density, and performed 11 automatic chemical experiments to analyze the atmosphere. It discovered that the atmosphere of Venus was 95% carbon dioxide ( CO 2 ), and in combination with radio occultation data from the Mariner 5 probe, showed that surface pressures were far greater than expected (75 to 100 atmospheres). [ citation needed ] These results were verified and refined by Venera 5 and Venera 6 in May 1969. [ 17 ] But thus far, none of these missions had reached the surface while still transmitting. Venera 4's battery ran out while still slowly floating through the massive atmosphere, and Venera 5 and 6 were crushed by high pressure 18 km (60,000 ft) above the surface. [ citation needed ] The first successful landing on Venus was by Venera 7 on December 15, 1970, the first successful soft landing on another planet, as well as the first successful transmission of data from another planet's surface to Earth. [ 20 ] [ 21 ] Venera 7 remained in contact with Earth for 23 minutes, relaying surface temperatures of 455 to 475 °C (851 to 887 °F), and an atmospheric pressure of 92 bar. [ 17 ] [ 22 ] Venera 8 landed on July 22, 1972. In addition to pressure and temperature profiles, a photometer showed that the clouds of Venus formed a layer ending over 35 kilometres (22 mi) above the surface. A gamma ray spectrometer analyzed the chemical composition of the crust. Venera 8 measured the light level as being suitable for surface photography, finding it to be similar to the amount of light on Earth on an overcast day with roughly 1 km visibility. [ 23 ] The Soviet probe Venera 9 entered orbit on October 22, 1975, becoming the first artificial satellite of Venus. A battery of cameras and spectrometers returned information about the planet's clouds, ionosphere and magnetosphere, as well as performing bi-static radar measurements of the surface. The 660 kg (1,460 lb) descent vehicle [ 25 ] separated from Venera 9 and landed, taking the first pictures of the surface and analyzing the crust with a gamma ray spectrometer and a densitometer. During descent, pressure, temperature and photometric measurements were made, as well as backscattering and multi-angle scattering (nephelometer) measurements of cloud density. It was discovered that the clouds of Venus are formed in three distinct layers. [ citation needed ] On October 25, Venera 10 arrived and carried out a similar program of study. In 1978, NASA sent two Pioneer spacecraft to Venus. The Pioneer mission consisted of two components, launched separately: an orbiter and a multiprobe.
The Pioneer Venus Multiprobe carried one large and three small atmospheric probes. The large probe was released on November 16, 1978, and the three small probes on November 20. All four probes entered the Venusian atmosphere on December 9, followed by the delivery vehicle. Although not expected to survive the descent through the atmosphere, one probe continued to operate for 45 minutes after reaching the surface. The Pioneer Venus Orbiter was inserted into an elliptical orbit around Venus on December 4, 1978. It carried 17 experiments and operated until the fuel used to maintain its orbit was exhausted and atmospheric entry destroyed the spacecraft in August 1992. Also in 1978, Venera 11 and Venera 12 flew past Venus, dropping descent vehicles on December 21 and December 25 respectively. The landers carried colour cameras and a soil drill and analyzer, which unfortunately malfunctioned. Each lander made measurements with a nephelometer , mass spectrometer , gas chromatograph , and a cloud-droplet chemical analyzer using X-ray fluorescence that unexpectedly discovered a large proportion of chlorine in the clouds, in addition to sulfur. Strong lightning activity was also detected. [ 26 ] [ 27 ] [ 28 ] In 1982, the Soviet Venera 13 sent the first colour image of Venus's surface, revealing an orange-brown flat bedrock surface covered with loose regolith and small flat thin angular rocks, [ 29 ] and analysed the X-ray fluorescence of an excavated soil sample. The probe operated for a record 127 minutes on the planet's hostile surface. Also in 1982, the Venera 14 lander detected possible seismic activity in the planet's crust . In December 1984, during the apparition of Halley's Comet , the Soviet Union launched the two Vega probes to Venus. Vega 1 and Vega 2 encountered Venus in June 1985, each deploying a lander and an instrumented helium balloon. The balloon-borne aerostat probes floated at about 53 km altitude for 46 and 60 hours respectively, traveling about 1/3 of the way around the planet and allowing scientists to study the dynamics of the most active part of Venus's atmosphere. These measured wind speed, temperature, pressure and cloud density. More turbulence and convection activity than expected was discovered, including occasional plunges of 1 to 3 km in downdrafts. The landing vehicles carried experiments focusing on cloud aerosol composition and structure. Each carried an ultraviolet absorption spectrometer, aerosol particle-size analyzers, and devices for collecting aerosol material and analyzing it with a mass spectrometer, a gas chromatograph, and an X-ray fluorescence spectrometer. The upper two layers of the clouds were found to be sulfuric acid droplets, but the lower layer is probably composed of phosphoric acid solution. The crust of Venus was analyzed with the soil drill experiment and a gamma ray spectrometer. As the landers carried no cameras on board, no images were returned from the surface. They would be the last probes to land on Venus for decades. The Vega spacecraft continued to rendezvous with Halley's Comet nine months later, bringing an additional 14 instruments and cameras for that mission. The multiaimed Soviet Vesta mission , developed in cooperation with European countries for realisation in 1991–1994 but canceled due to the Soviet Union disbanding, included the delivery of balloons and a small lander to Venus, according to the first plan. [ citation needed ] In October 1983, Venera 15 and Venera 16 entered polar orbits around Venus. 
The images had a 1–2 kilometres (0.62–1.24 mi) resolution, comparable to those obtained by the best Earth radars. Venera 15 analyzed and mapped the upper atmosphere with an infrared Fourier spectrometer . From November 11, 1983, to July 10, 1984, both satellites mapped the northern third of the planet with synthetic aperture radar . These results provided the first detailed understanding of the surface geology of Venus, including the discovery of unusual massive shield volcanoes such as coronae and arachnoids . Venus had no evidence of plate tectonics, unless the northern third of the planet happened to be a single plate. The altimetry data obtained by the Venera missions had a resolution four times better than Pioneer' s. [ citation needed ] On August 10, 1990, the American Magellan probe, named after the explorer Ferdinand Magellan , arrived at its orbit around the planet and started a mission of detailed radar mapping at a frequency of 2.38 GHz. [ 30 ] Whereas previous probes had created low-resolution radar maps of continent-sized formations, Magellan mapped 98% of the surface with a resolution of approximately 100 m. [ citation needed ] The resulting maps were comparable to visible-light photographs of other planets, and are still the most detailed in existence. Magellan greatly improved scientific understanding of the geology of Venus : the probe found no signs of plate tectonics , but the scarcity of impact craters suggested the surface was relatively young, and there were lava channels thousands of kilometers long. After a four-year mission, Magellan , as planned, plunged into the atmosphere on October 11, 1994, and partly vaporized; some sections are thought to have hit the planet's surface. Venus Express was a mission by the European Space Agency to study the atmosphere and surface characteristics of Venus from orbit. The design was based on ESA's Mars Express and Rosetta missions. The probe's main objective was the long-term observation of the Venusian atmosphere, which it is hoped will also contribute to an understanding of Earth's atmosphere and climate. It also made global maps of Venerean surface temperatures, and attempted to observe signs of life on Earth from a distance. Venus Express successfully assumed a polar orbit on April 11, 2006. The mission was originally planned to last for two Venusian years (about 500 Earth days), but was extended to the end of 2014 until its propellant was exhausted. Some of the first results emerging from Venus Express include evidence of past oceans, the discovery of a huge double atmospheric vortex at the south pole, [ 32 ] [ 33 ] and the detection of hydroxyl in the atmosphere. Akatsuki was launched on May 20, 2010, by JAXA , and was planned to enter Venusian orbit in December 2010. However, the orbital insertion maneuver failed and the spacecraft was left in heliocentric orbit. It was placed on an alternative elliptical Venerian orbit on 7 December 2015 by firing its attitude control thrusters for 1,233 seconds. [ 34 ] The probe imaged the surface in ultraviolet, infrared, microwaves, and radio, and looked for evidence of lightning and volcanism on the planet. Astronomers working on the mission reported detecting a possible gravity wave that occurred on the planet Venus in December 2015. [ 35 ] Akatsuki' s mission ended in 2024. Several space probes en route to other destinations have used flybys of Venus to increase their speed via the gravitational slingshot method. 
These include the Galileo mission to Jupiter, and the Cassini–Huygens mission to Saturn, which made two flybys. During Cassini's examination of the radio frequency emissions of Venus with its radio and plasma wave science instrument during both the 1998 and 1999 flybys, it reported no high-frequency radio waves (0.125 to 16 MHz), which are commonly associated with lightning. This was in direct opposition to the findings of the Soviet Venera missions 20 years earlier. It was postulated that perhaps if Venus did have lightning, it might be some type of low-frequency electrical activity, because radio signals cannot penetrate the ionosphere at frequencies below about 1 megahertz. An examination of Venus's radio emissions by the Galileo spacecraft during its flyby in 1990 was interpreted at the time to be indicative of lightning. However, the Galileo probe was over 60 times further from Venus than Cassini was during its flyby, making its observations substantially less significant. In 2007, the Venus Express mission confirmed the presence of lightning on Venus, finding that it is more common on Venus than it is on Earth. [ 36 ] [ 37 ] MESSENGER passed by Venus twice on its way to Mercury. The first time, it flew by on October 24, 2006, passing 3,000 km from Venus. As Earth was on the other side of the Sun, no data was recorded. [ 38 ] The second flyby was on July 6, 2007, where the spacecraft passed only 325 km from the cloudtops. [ 39 ] BepiColombo also flew by Venus twice on its way to Mercury, the first time on October 15, 2020. During its second flyby of Venus, on August 10, 2021, BepiColombo came within 552 km of Venus's surface. [ 40 ] [ 41 ] [ 42 ] [ 43 ] As BepiColombo approached Venus for its second flyby of the planet, two monitoring cameras and seven science instruments were switched on. [ 44 ] Johannes Benkhoff, project scientist, believes BepiColombo's MERTIS (Mercury Radiometer and Thermal Infrared Spectrometer) could possibly detect phosphine, but "we do not know if our instrument is sensitive enough". [ 45 ] Parker Solar Probe has performed seven Venus flybys, which occurred on October 3, 2018, December 26, 2019, July 11, 2020, February 20, 2021, October 16, 2021, August 21, 2023, and November 6, 2024. Parker Solar Probe makes observations of the Sun and solar wind, and these Venus encounters enable the spacecraft to perform gravity assists and travel closer to the Sun. [ 46 ] [ 47 ] The Venera-D spacecraft was proposed to Roscosmos in 2003 and is currently planned for launch in 2031. Its prime purpose is to map Venus's surface using a powerful radar. [ 48 ] The mission would also include a lander capable of functioning for a long duration on the surface. India's ISRO is developing the Venus Orbiter Mission, an orbiter and an atmospheric probe with a balloon aerobot, which is planned to launch in 2028. [ 49 ] In June 2021, NASA announced the selection of two new Venus spacecraft, both part of its Discovery Program: VERITAS and DAVINCI. [ 50 ] These spacecraft are the first NASA missions to focus on Venus since Magellan in 1990. [ 51 ] VERITAS, an orbiter, will map the surface of Venus in high resolution, [ 52 ] while DAVINCI will include an orbiter, which will map Venus in multiple wavelengths, and a descent probe that will study the chemistry of the Venusian atmosphere while taking photographs of the descent. [ 53 ]
DAVINCI and VERITAS were initially slated to launch in 2029 and 2028 respectively, but funding issues have pushed VERITAS's launch date back to at least 2029–2031. [ 54 ] [ 55 ] In June 2021, soon after NASA announced VERITAS and DAVINCI, ESA announced the Venus orbiter EnVision as part of their Cosmic Vision program. [ 56 ] EnVision is planned to perform high-resolution radar mapping and atmospheric studies of Venus, and is planned to launch in 2031. [ 57 ] [ 58 ] [ 59 ] On October 6, 2021, the United Arab Emirates announced its intention to send a probe to Venus as early as 2028. MBR Explorer would make observations of the planet while using it for a gravity assist to propel it to the asteroid belt. [ 60 ] Rocket Lab, a private aerospace manufacturer, hopes to launch the first private Venus mission in collaboration with MIT as soon as 2024. [ 61 ] The spacecraft, Venus Life Finder, will send a lightweight atmospheric probe into the Venusian atmosphere to search for signs of life. [ 62 ] To overcome the high pressure and temperature at the surface, a team led by Geoffrey Landis of NASA's Glenn Research Center produced a concept in 2007 of a solar-powered aircraft that would control a heat-resistant surface rover on the ground. The aircraft would carry the mission's sensitive electronics in the relatively mild temperatures of Venus's upper atmosphere. [ 63 ] Another concept from 2007 suggested equipping a rover with a Stirling cooler powered by a nuclear power source to keep an electronics package at an operational temperature of about 200 °C (392 °F). [ 64 ] In 2020, NASA's JPL launched an open competition, titled "Exploring Hell: Avoiding Obstacles on a Clockwork Rover", to design a sensor that could work on Venus's surface. [ 65 ] Research on the atmosphere of Venus has produced significant insights not only about its own state but also about the atmospheres of other planetary objects, especially that of Earth. It helped in identifying and understanding the depletion of Earth's ozone layer in the 1970s and 1980s. [ 75 ] The voyage of James Cook and his crew aboard HMS Endeavour to observe the Venus transit of 1769 led to the claiming of Australia, at Possession Island, for colonisation by Europeans.
https://en.wikipedia.org/wiki/Observations_and_explorations_of_Venus
In statistics, the observed information, or observed Fisher information, is the negative of the second derivative (the Hessian matrix) of the "log-likelihood" (the logarithm of the likelihood function). It is a sample-based version of the Fisher information. Suppose we observe random variables X_1, …, X_n, independent and identically distributed with density f(X; θ), where θ is a (possibly unknown) vector. Then the log-likelihood of the parameters θ given the data X_1, …, X_n is

\ell(\theta \mid X_1,\ldots,X_n) = \sum_{i=1}^{n} \log f(X_i ; \theta).

We define the observed information matrix at θ* as

J(\theta^{*}) = - \left. \nabla \nabla^{\mathsf T} \ell(\theta) \right|_{\theta=\theta^{*}},

that is, the negative of the Hessian of the log-likelihood evaluated at θ*. Since the inverse of the information matrix is the asymptotic covariance matrix of the corresponding maximum-likelihood estimator, the observed information is often evaluated at the maximum-likelihood estimate for the purpose of significance testing or confidence-interval construction. [ 1 ] The invariance property of maximum-likelihood estimators allows the observed information matrix to be evaluated before being inverted. Andrew Gelman, David Dunson and Donald Rubin [ 2 ] define observed information instead in terms of the parameters' posterior probability p(θ | y):

I(\theta) = - \frac{d^{2}}{d\theta^{2}} \log p(\theta \mid y).

The Fisher information \mathcal{I}(θ) is the expected value of the observed information given a single observation X distributed according to the hypothetical model with parameter θ:

\mathcal{I}(\theta) = \mathrm{E}\left[ J(\theta) \right].

The comparison between the observed information and the expected information remains an active and ongoing area of research and debate. Efron and Hinkley [ 3 ] provided a frequentist justification for preferring the observed information to the expected information when employing normal approximations to the distribution of the maximum-likelihood estimator in one-parameter families, in the presence of an ancillary statistic that affects the precision of the MLE. Lindsay and Li showed that the observed information matrix gives the minimum mean squared error as an approximation of the true information if an error term of O(n^{-3/2}) is ignored. [ 4 ] In Lindsay and Li's case, the expected information matrix still requires evaluation at the obtained ML estimates, introducing randomness. However, when the construction of confidence intervals is of primary focus, there are reported findings that the expected information outperforms the observed counterpart. Yuan and Spall showed that the expected information outperforms the observed counterpart for confidence-interval constructions of scalar parameters in the mean squared error sense. [ 5 ] This finding was later generalized to multiparameter cases, although the claim was weakened to the expected information matrix performing at least as well as the observed information matrix. [ 6 ]
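As a concrete sketch of the definitions above (not taken from the article itself), the observed information can be computed directly as the negative second derivative of the log-likelihood at the maximum-likelihood estimate. The example below uses a Poisson model, for which the MLE is the sample mean and the observed information at the MLE equals n divided by that mean, and checks the analytic value against a central finite difference.

```python
# Sketch: observed information for a Poisson(theta) model, computed both
# analytically and by a finite-difference second derivative of the
# log-likelihood. For Poisson data, J(theta_hat) = sum(x)/theta_hat^2 = n/xbar.
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(0)
x = rng.poisson(lam=4.0, size=200)      # simulated data (assumed model)

def loglik(theta: float) -> float:
    return np.sum(x * np.log(theta) - theta - gammaln(x + 1))

theta_hat = x.mean()                    # MLE for the Poisson mean

analytic = x.sum() / theta_hat ** 2     # -d^2/dtheta^2 of the log-likelihood
h = 1e-4
numeric = -(loglik(theta_hat + h) - 2 * loglik(theta_hat) + loglik(theta_hat - h)) / h ** 2

print(f"observed information (analytic): {analytic:.3f}")
print(f"observed information (numeric) : {numeric:.3f}")
print(f"approx. std. error of the MLE  : {analytic ** -0.5:.4f}")
```

The last line uses the fact noted above that the inverse of the information, evaluated at the MLE, approximates the estimator's variance.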
https://en.wikipedia.org/wiki/Observed_information
Some interpretations of quantum mechanics posit a central role for an observer of a quantum phenomenon. [ 1 ] The quantum mechanical observer is tied to the issue of observer effect, where a measurement necessarily requires interacting with the physical object being measured, affecting its properties through the interaction. The term "observable" has gained a technical meaning, denoting a Hermitian operator that represents a measurement. [ 2 ] : 55 The theoretical foundation of the concept of measurement in quantum mechanics is a contentious issue deeply connected to the many interpretations of quantum mechanics. A key focus point is that of wave function collapse, for which several popular interpretations assert that measurement causes a discontinuous change into an eigenstate of the operator associated with the quantity that was measured, a change which is not time-reversible. More explicitly, the superposition principle (ψ = Σ_n a_n ψ_n) of quantum physics dictates that for a wave function ψ, a measurement will leave the quantum system in the state associated with one of the m possible eigenvalues f_n, n = 1, 2, ..., m, of the operator F̂, that is, in one of the eigenfunctions ψ_n, n = 1, 2, ..., m. Once one has measured the system, one knows its current state, and this prevents it from being in one of its other states: it has apparently decohered from them without prospects of future strong quantum interference. [ 3 ] [ 4 ] [ 5 ] This means that the type of measurement one performs on the system affects the end-state of the system. An experimentally studied situation related to this is the quantum Zeno effect, in which a quantum state would decay if left alone, but does not decay because of its continuous observation. The dynamics of a quantum system under continuous observation are described by a quantum stochastic master equation known as the Belavkin equation. [ 6 ] [ 7 ] [ 8 ] Further studies have shown that even observing the results after the photon is produced leads to collapsing the wave function and loading a back-history, as shown by the delayed-choice quantum eraser. [ 9 ] When discussing the wave function ψ which describes the state of a system in quantum mechanics, one should be cautious of a common misconception that assumes that the wave function ψ amounts to the same thing as the physical object it describes. This flawed conception then requires the existence of an external mechanism, such as a measuring instrument, that lies outside the principles governing the time evolution of the wave function ψ, in order to account for the so-called "collapse of the wave function" after a measurement has been performed. But the wave function ψ is not a physical object like, for example, an atom, which has an observable mass, charge and spin, as well as internal degrees of freedom. Instead, ψ is an abstract mathematical function that contains all the statistical information that an observer can obtain from measurements of a given system. In this case, there is no real mystery in the fact that this mathematical form of the wave function ψ must change abruptly after a measurement has been performed. A consequence of Bell's theorem is that measurement on one of two entangled particles can appear to have a nonlocal effect on the other particle. Additional problems related to decoherence arise when the observer is modeled as a quantum system.
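The consequence of the superposition principle described above, that outcome n occurs with probability |a_n|² and that the system is then left in the corresponding eigenfunction, can be illustrated with a small numerical sketch. The amplitudes below are arbitrary illustrative values, not taken from the text.

```python
# Sketch: Born-rule probabilities and a simulated "collapse" for a state
# psi = sum_n a_n psi_n written in the eigenbasis of the measured operator.
import numpy as np

a = np.array([0.6, 0.0 + 0.64j, 0.48])     # amplitudes a_n (illustrative values)
a = a / np.linalg.norm(a)                  # normalise so the probabilities sum to 1

probs = np.abs(a) ** 2                     # Born rule: P(n) = |a_n|^2
print("outcome probabilities:", np.round(probs, 3))

rng = np.random.default_rng(1)
outcome = rng.choice(len(a), p=probs)      # one simulated measurement
post_state = np.zeros_like(a)
post_state[outcome] = 1.0                  # after measurement: the n-th eigenfunction
print("measured eigenvalue index:", outcome)
print("post-measurement state  :", post_state)
```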
The Copenhagen interpretation , which is the most widely accepted interpretation of quantum mechanics among physicists, [ 1 ] [ 10 ] : 248 posits that an "observer" or a "measurement" is merely a physical process. One of the founders of the Copenhagen interpretation, Werner Heisenberg , wrote: Of course the introduction of the observer must not be misunderstood to imply that some kind of subjective features are to be brought into the description of nature. The observer has, rather, only the function of registering decisions, i.e., processes in space and time, and it does not matter whether the observer is an apparatus or a human being; but the registration, i.e., the transition from the "possible" to the "actual," is absolutely necessary here and cannot be omitted from the interpretation of quantum theory. [ 11 ] Niels Bohr , also a founder of the Copenhagen interpretation, wrote: all unambiguous information concerning atomic objects is derived from the permanent marks such as a spot on a photographic plate, caused by the impact of an electron left on the bodies which define the experimental conditions. Far from involving any special intricacy, the irreversible amplification effects on which the recording of the presence of atomic objects rests rather remind us of the essential irreversibility inherent in the very concept of observation. The description of atomic phenomena has in these respects a perfectly objective character, in the sense that no explicit reference is made to any individual observer and that therefore, with proper regard to relativistic exigencies, no ambiguity is involved in the communication of information. [ 12 ] Likewise, Asher Peres stated that "observers" in quantum physics are similar to the ubiquitous "observers" who send and receive light signals in special relativity . Obviously, this terminology does not imply the actual presence of human beings. These fictitious physicists may as well be inanimate automata that can perform all the required tasks, if suitably programmed. [ 13 ] : 12 Critics of the special role of the observer also point out that observers can themselves be observed, leading to paradoxes such as that of Wigner's friend ; and that it is not clear how much consciousness is required. As John Bell inquired, "Was the wave function waiting to jump for thousands of millions of years until a single-celled living creature appeared? Or did it have to wait a little longer for some highly qualified measurer—with a PhD?" [ 14 ] The prominence of seemingly subjective or anthropocentric ideas like "observer" in the early development of the theory has been a continuing source of disquiet and philosophical dispute. [ 15 ] A number of new-age religious or philosophical views give the observer a more special role, or place constraints on who or what can be an observer. As an example of such claims, Fritjof Capra declared, "The crucial feature of atomic physics is that the human observer is not only necessary to observe the properties of an object, but is necessary even to define these properties." [ 16 ] There is no credible peer-reviewed research that backs such claims. The uncertainty principle has been frequently confused with the observer effect, evidently even by its originator, Werner Heisenberg . [ 17 ] The uncertainty principle in its standard form describes how precisely it is possible to measure the position and momentum of a particle at the same time. 
If the precision in measuring one quantity is increased, the precision in measuring the other decreases. [ 18 ] An alternative version of the uncertainty principle, [ 19 ] more in the spirit of an observer effect, [ 20 ] fully accounts for the disturbance the observer has on a system and the error incurred, although this is not how the term "uncertainty principle" is most commonly used in practice.
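For a sense of scale, the standard (Kennard) form of the principle, Δx·Δp ≥ ℏ/2, can be evaluated numerically. The electron confinement length used below is an arbitrary illustrative choice, not a value from the text.

```python
# Sketch: minimum momentum and velocity spread for an electron confined to a
# region of width dx, from the Kennard bound dx * dp >= hbar / 2.
HBAR = 1.054571817e-34      # reduced Planck constant, J*s
M_E = 9.1093837015e-31      # electron mass, kg

dx = 1e-9                   # position uncertainty, m (assumed: ~1 nm)
dp_min = HBAR / (2 * dx)    # minimum momentum uncertainty, kg*m/s
dv_min = dp_min / M_E       # corresponding velocity spread, m/s

print(f"dp_min = {dp_min:.3e} kg m/s")
print(f"dv_min = {dv_min:.3e} m/s")   # roughly 6e4 m/s for a 1 nm confinement
```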
https://en.wikipedia.org/wiki/Observer_(quantum_physics)
In physics, the observer effect is the disturbance of an observed system by the act of observation. [ 1 ] [ 2 ] This is often the result of utilising instruments that, by necessity, alter the state of what they measure in some manner. A common example is checking the pressure in an automobile tire, which causes some of the air to escape, thereby changing the amount of pressure one observes. Similarly, seeing non-luminous objects requires light hitting the object to cause it to reflect that light. While the effects of observation are often negligible, the object still experiences a change (leading to the Schrödinger's cat thought experiment). This effect can be found in many domains of physics, but can usually be reduced to insignificance by using different instruments or observation techniques. A notable example of the observer effect occurs in quantum mechanics, as demonstrated by the double-slit experiment. Physicists have found that observation of quantum phenomena by a detector or an instrument can change the measured results of this experiment. Despite the "observer effect" in the double-slit experiment being caused by the presence of an electronic detector, the experiment's results have been interpreted by some to suggest that a conscious mind can directly affect reality. [ 3 ] However, the need for the "observer" to be conscious is not supported by scientific research, and has been pointed out as a misconception rooted in a poor understanding of the quantum wave function ψ and the quantum measurement process. [ 4 ] [ 5 ] [ 6 ] An electron is detected upon interaction with a photon; this interaction will inevitably alter the velocity and momentum of that electron. It is possible for other, less direct means of measurement to affect the electron. It is also necessary to distinguish clearly between the measured value of a quantity and the value resulting from the measurement process. In particular, a measurement of momentum is non-repeatable in short intervals of time. A formula (one-dimensional for simplicity) relating the quantities involved, due to Niels Bohr (1928), is |v′_x − v_x| Δp_x ≈ ℏ/Δt, where Δp_x is the uncertainty in the x-component of the electron's momentum, Δt is the duration of the measurement, and v_x and v′_x are the x-components of the electron's velocity before and after the measurement, respectively. The measured momentum of the electron is then related to v_x, whereas its momentum after the measurement is related to v′_x. This is a best-case scenario. [ 7 ] In electronics, ammeters and voltmeters are usually wired in series or parallel to the circuit, and so by their very presence affect the current or the voltage they are measuring by way of presenting an additional real or complex load to the circuit, thus changing the transfer function and behavior of the circuit itself. Even a more passive device such as a current clamp, which measures the wire current without coming into physical contact with the wire, affects the current through the circuit being measured because of the mutual inductance between the clamp and the conductor. In thermodynamics, a standard mercury-in-glass thermometer must absorb or give up some thermal energy to record a temperature, and therefore changes the temperature of the body which it is measuring.
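The loading effect of a meter described above is easy to quantify. In the sketch below, an ammeter with a nonzero internal resistance is inserted in series with a simple resistive circuit; all component values are hypothetical illustrative choices.

```python
# Sketch: how an ammeter's internal resistance perturbs the very current it
# measures, for a simple series circuit. All values are illustrative assumptions.
V = 5.0          # source voltage, volts
R_load = 100.0   # circuit resistance, ohms
R_meter = 1.0    # ammeter internal (burden) resistance, ohms

i_true = V / R_load                    # current with no meter in the circuit
i_measured = V / (R_load + R_meter)    # current once the meter is inserted

error_pct = 100.0 * (i_true - i_measured) / i_true
print(f"true current     : {i_true * 1e3:.3f} mA")
print(f"measured current : {i_measured * 1e3:.3f} mA")
print(f"observer effect  : {error_pct:.2f}% reduction caused by the meter itself")
```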
https://en.wikipedia.org/wiki/Observer_effect_(physics)
Obsidian hydration dating ( OHD ) is a geochemical method of determining age, in either absolute or relative terms, of an artifact made of obsidian. Obsidian is a volcanic glass that was used by prehistoric people as a raw material in the manufacture of stone tools such as projectile points, knives, or other cutting tools through knapping, or breaking off pieces in a controlled manner, such as pressure flaking. Obsidian undergoes mineral hydration, absorbing water at a well-defined rate when exposed to air. When an unworked nodule of obsidian is initially fractured, there is typically less than 1% water present. Over time, water slowly diffuses into the artifact, forming a narrow "band", "rim", or "rind" that can be seen and measured with many different techniques, such as a high-power microscope with 40–80 power magnification, depth profiling with SIMS (secondary ion mass spectrometry), and IR-PAS (infrared photoacoustic spectroscopy). [ 1 ] [ 2 ] In order to use obsidian hydration for absolute dating, the conditions that the sample has been exposed to and its origin must be understood or compared to samples of a known age (e.g. as a result of radiocarbon dating of associated materials). [ 3 ] [ 4 ] Obsidian hydration dating was introduced in 1960 by Irving Friedman and Robert Smith of the U.S. Geological Survey. [ 5 ] Their initial work focused on obsidians from archaeological sites in western North America. The use of secondary ion mass spectrometry (SIMS) in the measurement of obsidian hydration was introduced by two independent research teams in 2002. [ 6 ] [ 7 ] Today the technique is applied extensively by archaeologists to date prehistoric sites in California [ 8 ] and the Great Basin of North America. It has also been applied in South America, the Middle East, the Pacific Islands (including New Zealand), and the Mediterranean Basin. To measure the hydration band, a small slice of material is typically cut from an artifact. This sample is ground down to about 30 micrometers thick and mounted on a petrographic slide (this is called a thin section). The hydration rind is then measured under a high-power microscope outfitted with some method for measuring distance, typically in tenths of micrometers. The technician measures the microscopic amount of water absorbed on freshly broken surfaces. The principle behind obsidian hydration dating is simple: the longer the artifact surface has been exposed, the thicker the hydration band will be. When the hydration rim is measured using the depth-profiling capability of the secondary ion mass spectrometry technique, the sample is mounted on a holder without any preparation or cutting. This method of measurement is non-destructive. There are two general SIMS modes, static mode and dynamic mode, depending on the primary ion current density, and three different types of mass spectrometers: magnetic sector, quadrupole and time-of-flight (TOF). Any mass spectrometer can work in static mode (very low ion current, analysis of the top mono-atomic layer) and in dynamic mode (high ion current density, in-depth analysis). Although relatively infrequent, the use of SIMS in obsidian surface investigations has produced great progress in OHD dating. SIMS in general refers to four instrumental categories according to their operation: static, dynamic, quadrupole, and time-of-flight (TOF).
In essence, it is a technique offering high resolution across a wide range of chemical elements and molecular structures, in an essentially non-destructive manner. An approach to OHD with a completely new rationale suggests that refinement of the technique is possible in a manner that improves both its accuracy and precision and potentially expands its utility by generating reliable chronological data. Anovitz et al. [ 9 ] presented a model which relied solely on compositionally-dependent diffusion, following numerical solutions (finite difference (FD), or finite element) elaborating on the H+ profile acquired by SIMS. A test of the model followed, using results from Mount 65, Chalco in Mexico, by Riciputi et al. [ 10 ] This technique used numerical calculation to model the formation of the entire diffusion profile as a function of time and fitted the derived curve to the hydrogen profile. The FD equations are based on a number of assumptions about the behavior of water as it diffuses into the glass, and on characteristic points of the SIMS H+ diffusion profile. In Rhodes, Greece, a dating approach conceived and developed under the direction of Ioannis Liritzis [ 11 ] is based on modeling the S-like hydrogen profile obtained by SIMS, following Fick's diffusion law, together with an understanding of the surface saturation layer. In fact, the saturation layer on the surface forms up to a certain depth depending on factors that include the kinetics of the diffusion mechanism for the water molecules, the specific chemical structure of the obsidian, and the external conditions affecting diffusion (temperature, relative humidity, and pressure). [ 12 ] Together these factors result in the formation of an approximately constant boundary concentration value in the external surface layer. Using the end product of diffusion, a phenomenological model has been developed, based on certain initial and boundary conditions and appropriate physicochemical mechanisms, that expresses the H 2 O concentration versus depth profile as a diffusion/time equation. This latest advance, the novel secondary ion mass spectrometry–surface saturation (SIMS-SS) method, thus involves modelling the hydrogen concentration profile of the surface versus depth, whereas the age determination is reached via equations describing the diffusion process, while topographical effects have been confirmed and monitored through atomic force microscopy . [ 13 ] [ 14 ] [ 15 ] [ 16 ] Several factors complicate simple correlation of obsidian hydration band thickness with absolute age. Temperature is known to speed up the hydration process. Thus, artifacts exposed to higher temperatures, for example by being at lower elevation, seem to hydrate faster. As well, obsidian chemistry, including the intrinsic water content, seems to affect the rate of hydration. Once an archaeologist can control for the geochemical signature of the obsidian (e.g., the "source") and temperature (usually approximated using an "effective hydration temperature" or EHT coefficient), he or she may be able to date the artifact using the obsidian hydration technique. Water vapor pressure may also affect the rate of obsidian hydration.
[ 9 ] The reliability of the method based on Friedman's empirical age equation ( x² = kt , where x is the thickness of the hydration rim, k is the diffusion coefficient, and t is the time) has been questioned on several grounds, concerning its temperature dependence, the assumed square-root-of-time behavior, and the determination of the diffusion rate per sample and per site, notwithstanding a number of successful applications of the procedure. The SIMS-SS age calculation procedure is separated into two major steps. The first step concerns the calculation of a third-order fitting polynomial of the SIMS profile (Eq. 1). The second concerns the determination of the saturation layer, i.e. its depth and concentration. The whole computation is embedded in stand-alone software written in the Matlab (version 7.0.1) package, with a graphical user interface, executable under Windows XP. The fitting polynomial of the SIMS profile is {\displaystyle C=e^{a+bx+cx^{2}+dx^{3}}} (Eq. 1), and the SIMS-SS age equation, in years before present, is {\displaystyle T={\frac {(C_{1}-C_{2})^{2}\left({\frac {1.128}{1-{\frac {0.177kC_{1}}{C_{2}}}}}\right)^{2}}{4D_{s,eff}\left(\left.{\frac {\mathrm {d} C}{\mathrm {d} x}}\right|_{x=0}\right)^{2}}}} (Eq. 2), where the concentrations C 1 and C 2 correspond to the saturation concentration (Cs) and the intrinsic concentration of water (Ci) in the obsidian, dC/dx is the concentration gradient of the profile at depth x = 0, k is derived from a family of Crank's theoretical diffusion curves, and D s,eff is an effective diffusion coefficient which relates the inverse gradient of the fitted polynomial to well-dated samples (Eq. 3): Ds = (1/(dC/dx)) × 10⁻¹¹, assuming a constant flux and taken as unity. Equation (2) and the assumption of unity remain matters for further investigation. [ 17 ] Several commercial companies and university laboratories provide obsidian hydration services.
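As a reading aid, the sketch below (Python) simply transcribes Friedman's empirical relation and Eqs. 1 and 2 as printed above into executable form. It is a hedged illustration, not the published SIMS-SS software: the function and variable names are invented for the example, and the numerical inputs at the end are hypothetical placeholder values rather than measured data.

import numpy as np

def friedman_age(rim_thickness, k):
    """Friedman's empirical relation x^2 = k t, solved for the age t.
    Units must be consistent (e.g. rim in micrometres, k in um^2 per year)."""
    return rim_thickness**2 / k

def fit_eq1(depth, concentration):
    """Eq. 1 above: C = exp(a + b x + c x^2 + d x^3).
    Fits a cubic to ln(C) and returns the coefficients (a, b, c, d)."""
    d3, d2, d1, d0 = np.polyfit(depth, np.log(concentration), 3)
    return d0, d1, d2, d3

def sims_ss_age(c1, c2, k, d_s_eff, dcdx_at_surface):
    """Eq. 2 above, transcribed literally: c1 and c2 are the two water
    concentrations (saturation and intrinsic), k comes from Crank's diffusion
    curves, d_s_eff is the effective diffusion coefficient, and
    dcdx_at_surface is the concentration gradient dC/dx at x = 0."""
    numerator = (c1 - c2)**2 * (1.128 / (1.0 - 0.177 * k * c1 / c2))**2
    denominator = 4.0 * d_s_eff * dcdx_at_surface**2
    return numerator / denominator

# Hypothetical example: a 4 um rim with a site-specific rate of 11 um^2 per
# 1000 years gives an age on the order of 1,450 years by Friedman's relation.
print(friedman_age(rim_thickness=4.0, k=11.0 / 1000.0))

The published SIMS-SS procedure also determines the saturation-layer depth and concentration and applies Eq. 3 for the effective diffusion coefficient; the sketch above leaves those as inputs to be supplied per sample.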
https://en.wikipedia.org/wiki/Obsidian_hydration_dating
In addition to the variety of verified DNA structures , there have been a range of proposed DNA models that have either been disproven or lack sufficient evidence. Some of these structures were proposed during the 1950s, before the structure of the double helix was solved, most famously by Linus Pauling. Non-helical or "side-by-side" models of DNA were proposed in the 1970s to address what appeared at the time to be problems with the topology of circular DNA chromosomes during replication (subsequently resolved via the discovery of enzymes that modify DNA topology). [ 1 ] These were also rejected due to accumulating experimental evidence from X-ray crystallography , solution NMR , and atomic force microscopy (of both DNA alone, and DNA bound to DNA-binding proteins ). Although localised or transient non-duplex helical structures exist, [ 2 ] non-helical models are not currently accepted by the mainstream scientific community. [ 3 ] Finally, there exists a persistent set of contemporary fringe theories proposing a range of unsupported models. The DNA double helix was discovered in 1953 [ 4 ] (with further details in 1954 [ 5 ] ) based on X-ray diffraction images of DNA (most notably photo 51 , taken by Raymond Gosling and Rosalind Franklin [ 6 ] ) as well as base-pairing chemical and biochemical information. [ 7 ] [ 8 ] Prior to this, X-ray data gathered in the 1950s indicated that DNA formed some sort of helix, but the exact structure of that helix had not yet been determined. There were therefore several proposed structures that were later overturned by the data supporting a DNA duplex. The most famous of these early models was that of Linus Pauling and Robert Corey in 1953, in which they proposed a triple helix with the phosphate backbone on the inside and the nucleotide bases pointing outwards. [ 9 ] [ 10 ] A broadly similar, but detailed, structure was also proposed by Bruce Fraser that same year. [ 11 ] However, Watson and Crick soon identified several problems with these models. The double helix model initially discovered, now termed B-form DNA , is by far the most common conformation in cells. [ 12 ] Two additional, rarer helical conformations that also occur naturally were identified in the 1970s: A-form DNA , and Z-form DNA . [ 13 ] Even once the DNA duplex structure was solved, it was initially an open question whether additional DNA structures were needed to explain its overall topology, and there were questions about how that topology might affect DNA replication. In 1963, autoradiographs of the E. coli chromosome demonstrated that it was a single circular molecule replicated at a pair of replication forks, at which both new DNA strands are synthesized. [ 15 ] The two daughter chromosomes after replication would therefore be topologically linked. The separation of the two linked daughter DNA strands during replication required either that the DNA have a net-zero helical twist, or that the strands be cut, crossed, and rejoined. It was this apparent contradiction that early non-helical models attempted to address, until the discovery of topoisomerases in 1970 resolved the problem. [ 16 ] [ 17 ] In the 1960s and 1970s, a number of structures were hypothesised that would give a net-zero helical twist over the length of the DNA, either by being fully straight throughout or by alternating right-handed and left-handed helical twists.
[ 18 ] [ 19 ] For example, in 1969, a linear tetramer structure was hypothesised, [ 14 ] and in 1976, a structure with alternating sections of right-handed and left-handed helix was independently proposed by two different groups. [ 20 ] [ 21 ] The alternating-twists model was initially presented with the helicity changing every half turn, but long stretches of each helical direction were later proposed. [ 22 ] However, these models suffered from a lack of experimental support. [ 23 ] Under torsional stress, a Z-DNA structure can form with twist opposite to that of B-form DNA, but this is rare within the cellular environment. [ 24 ] The discovery of topoisomerases and gyrases , enzymes that can change the linking number of circular nucleic acids and thus "unwind" and "rewind" the replicating bacterial chromosome, solved the topological objections to the B-form DNA helical structure. [ 25 ] Indeed, in the absence of these topology-altering enzymes, small circular viral and plasmid DNAs remain inseparable, consistent with a structure whose strands are topologically locked together. [ 26 ] Non-helical DNA proposals have therefore dropped from mainstream science. [ 3 ] [ 16 ] Initially, there had been questions of whether the solved DNA structures were artefacts of the X-ray crystallography techniques used. However, the structure of DNA was subsequently confirmed in solution via gel electrophoretic methods [ 27 ] and later via solution NMR [ 28 ] and AFM, [ 29 ] indicating that the crystallography process did not distort it. The structure of DNA in complex with nucleosomes , helicases , and numerous other DNA-binding proteins also supported its biological relevance in vivo . [ 30 ]
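The topological problem described above can be made concrete with a rough calculation (an illustrative Python sketch, not taken from the cited sources; the plasmid size below is an arbitrary example value). For a relaxed, covalently closed B-form duplex, the linking number of the two strands is approximately the number of base pairs divided by the helical repeat of about 10.5 base pairs per turn, and without topoisomerases the two daughter circles produced by replication would remain interlinked roughly that many times.

# Back-of-the-envelope illustration of why replicating circular DNA poses a
# topological problem for an intertwined (plectonemic) double helix.

BP_PER_TURN = 10.5  # approximate helical repeat of relaxed B-form DNA

def approx_linking_number(base_pairs, bp_per_turn=BP_PER_TURN):
    """Approximate linking number Lk of a relaxed, covalently closed duplex."""
    return base_pairs / bp_per_turn

plasmid_bp = 5_000  # hypothetical small plasmid
lk = approx_linking_number(plasmid_bp)
print(f"~{lk:.0f} interlinks to undo during replication")
# Without topoisomerases to change Lk, the two daughter circles would remain
# catenated; a net-zero-twist ("side-by-side") structure was one proposed way
# around this before topoisomerases were discovered.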
https://en.wikipedia.org/wiki/Obsolete_models_of_DNA_structure
Obstacles to troop movement are natural obstacles, obstacles originating from the human habitat, constructed obstacles, concealed obstacles, or obstructive impediments that hinder the movement of military troops and their vehicles, or their visibility. By impeding strategic , operational or tactical manoeuvre , an obstacle represents an added barrier between opposing combat forces, and therefore helps prevent achievement of the objectives and goals specified in the operational planning schedule. Constructed obstacles are used as an aid to defending a position or area as part of the commander's general defensive plan. Obstacles that originate from the human habitat can be converted by troops into constructed obstacles, either by performing additional construction or by executing demolitions, in order to obstruct movement over the transport network, to create a choke point , or to deny traversing of an area to the enemy. Natural obstacles can be used defensively to secure a more difficult-to-breach defensive position, for example by anchoring a flank on terrain that is deemed impossible to traverse, thus denying the enemy the ability to close into combat range of direct fire weapons. Obstacles are used in combat operations to create choke points , to deny mobility corridors and avenues of approach to positions, to enhance fields of fire for direct fire weapons, or to deny key tactical terrain features to the enemy. Natural obstacles are terrain features that few troops and vehicles have the capability to traverse. They include water obstacles, or areas of poor drainage, such as lakes, rivers, swamps and marshes. The former two can be crossed by amphibious vehicles capable of swimming, by vehicles capable of deep wading after preparation, or by constructing a water crossing, which creates an easily targeted choke point. Soil and rock can also represent mobility obstacles if the soil is too soft and unable to support the weight of military vehicles, or if the terrain is fractured by cliffs or large boulders that make organised movement impossible. While the effect of soft soil is relative to vehicle ground pressure , there is little that can be done to negotiate very rocky terrain or cliffs except by using specially trained light infantry troops. Vegetation such as jungles or dense forests can also represent obstacles to movement, in some cases even to light infantry troops. Some natural obstacles can be a result of climatic or soil activity, such as deep snow that, by covering all terrain, makes safe traversing difficult and slow, or landslides that may suddenly create an obstruction despite a previously clear route reconnaissance report. While human settlement has, since the early construction of roads, sought to create ways of negotiating terrain faster, human activity on the landscape can create obstacles in its own right. Artificial lakes and ponds, canals, and areas of agricultural cultivation, particularly those that are water-intensive such as rice-paddy fields, create obstacles often more difficult than the natural equivalents. Mining activity creates quarries, and the building of roads, railroads and dams also involves the construction of cuts and fills. Seeded tree-line windbreaks , hedgerows , stone walls and plantation forests also disrupt mobility, particularly of vehicles. Lastly, urban areas in themselves represent obstacles, offering elevated firing positions and canyon-like choke points that force the opponent to advance through the streets.
Constructed obstacles are those prepared by military engineering troops, often combat engineers , either by using materials to construct impediments to foot and vehicle-borne troops, or by using demolition methods or excavation, such as an abatis , to create obstacles from natural materials and terrain in specific locations in accordance with the overall plan of operations. Sometimes such obstacles can be created intentionally or unintentionally through the effects of artillery fire cratering . Buildings demolished by combat or aerial bombing become very effective obstacles, as rubble forms irregular, difficult-to-negotiate piles of building material. Concealed obstacles are used with the intention of not only preventing movement of enemy troops, but also causing casualties during attempted movement. Although one of the oldest forms of obstacle use, this became far deadlier with the invention of mine warfare , and more so with air-delivered scattered submunition minelets that can create an instant minefield. Obstructive obstacles are used primarily to deny terrain visibility to the enemy, thus creating uncertainty in the targeting of friendly troops. Although ancient in use, in the form of tar smoke pots , modern smoke screens are temporary and are used as a tactical measure during manoeuvring, often when a unit is performing a position change . Ground troops prefer to deal with physical obstacles by circumventing them as rapidly as possible, thus avoiding becoming stationary targets for enemy direct and indirect fire weapons and aircraft. [ 1 ] Where this is not possible, in modern warfare the most expedient measures taken against constructed or urban obstacles are either to use armoured vehicles, preferably tanks, to remove the obstacle, or to demolish it by firing high-explosive munitions at it. Where combat engineers are present, they can perform this task using their specialist skills, tools or vehicles. In the case of natural obstacles, specialist engineering equipment is usually required to negotiate the obstacle, commonly bridging or pontoons. Solving the problem of obstacle crossing has, at the strategic level, created new forms of warfare and troop employment in amphibious operations, and later airborne operations. At the operational level, the use of helicopters in airmobile operations offers a vertical option for negotiating obstacles, often of considerable extent, such as mountain passes or extensive areas of impassable vegetation.
https://en.wikipedia.org/wiki/Obstacles_to_troop_movement
The obstetrical dilemma is a hypothesis to explain why humans often require assistance from other humans during childbirth to avoid complications , whereas most non-human primates give birth unassisted with relatively little difficulty. This difficulty arises from the tight fit of the fetal head to the maternal birth canal, which is additionally convoluted, meaning that the head, and therefore the body, of the infant must rotate during childbirth in order to fit, unlike in other, non-upright-walking mammals. Consequently, there is an unusually high incidence of cephalopelvic disproportion and obstructed labor in humans. [ 1 ] The obstetrical dilemma claims that this difference is due to the biological trade-off imposed by two opposing evolutionary pressures in the development of the human pelvis : smaller birth canals in the mothers, and larger brains, and therefore skulls, in the babies. Proponents believe bipedal locomotion (the ability to walk upright) decreased the size of the bony parts of the birth canal. They also believe that, as hominids' and humans' skull and brain sizes increased over the millennia, women needed wider hips to give birth; that these wider hips made women inherently less able to walk or run than men; and that babies had to be born earlier to fit through the birth canal, resulting in the so-called fourth trimester period for newborns (being born when the baby seems less developed than in other animals). [ 2 ] Recent evidence has suggested that bipedal locomotion is only a part of the strong evolutionary pressure constraining the expansion of the maternal birth canal. In addition to bipedal locomotion, the reduced strength of the pelvic floor associated with a wider maternal pelvis also leads to fitness detriments in the mother, pressuring the birth canal to remain relatively narrow. [ 3 ] [ 4 ] This idea was widely accepted when first published in 1960, but has since been criticized by other scientists. [ 5 ] The term obstetrical dilemma was coined in 1960 by Sherwood Larned Washburn , a prominent early American physical anthropologist , in order to describe the evolutionary development of the human pelvis and its relation to childbirth and pregnancy in hominids and non-human primates. [ 6 ] In the intervening decades, the term has been used broadly among anthropologists, biologists, and other scientists to describe aspects of this hypothesis and related topics. The obstetrical dilemma hypothesizes that when hominids began to develop bipedal locomotion, the conflict between these two opposing evolutionary pressures became greatly exacerbated. Because humans are currently the only recognized extant obligately bipedal primates, meaning their body plan requires them to use only two legs for locomotion, major evolutionary developments had to occur in order to alter the shape of the female pelvis. [ 2 ] Human males evolved narrower hips optimized for locomotion, whereas female hips evolved to be wider because of the needs of childbirth. [ 6 ] [ 7 ] [ 8 ] Human pelves have no gross distinguishing skeletal markers for sex before puberty. With puberty, hormones alter the shape of the pelvis in females to cater to obstetrical demands. Overall, through the evolution of the species, a number of structures in the body have changed size, proportion, or location in order to accommodate bipedal locomotion and allow a person to stand upright and face forward. To help support the upper body, a number of structural changes occurred in the pelvis.
The ilial pelvic bone shifted forward and broadened, while the ischial pelvic bone shrank, narrowing the pelvic canal. These changes occurred at the same time as humans were developing larger craniums. Examination of the pelvis is the most useful method for identifying biological sex from the skeleton. Distinguishing features between the human male and female pelvis stem from the selective pressures of childbearing and birth. Females must be able to carry out the process of childbirth but also be able to move bipedally. The human female pelvis has evolved to be as wide as possible while still allowing bipedal locomotion. The compromise between these two necessary functions of the female pelvis can be seen especially clearly through the comparative skeletal anatomy of males and females. [ 9 ] The human pelvis is made up of three sections: the hip bones (ilium, ischium and pubis), the sacrum , and the coccyx . How these three segments articulate, and what their dimensions are, is key for differentiation between males and females. Female pelvic bones are, overall, thinner and denser than the pelvic bones of males. The female pelvis has also evolved to be much wider, allowing greater room in order to safely deliver a child. After sexual maturation, it can be observed that the pubic arch in females generally forms an obtuse angle (between 90 and 100 degrees), while males tend to have a more acute angle (approximately 70 degrees). [ 2 ] This difference in angles can be attributed to the fact that the overall pelvis of a female is wider and more open than a male pelvis. Another key difference can be seen in the sciatic notch. The sciatic notch in females tends to be wider than the sciatic notch of males. The pelvic inlet is also a key difference. The pelvic inlet is oval-shaped in females and more heart-shaped in males. [ 2 ] The difference in inlet shape is related to the distance between the ischium bones of the pelvis. To allow for a wider and more oval-shaped inlet, female ischium bones are further apart from one another than the ischium bones of a male. Differences in the sacrum between males and females can also be attributed to the needs of childbirth. The female sacrum is wider than the male sacrum. The female sacrum can also be observed to be shorter than the sacrum of a male. The difference in width can be explained by the overall wider shape of the female pelvis. The female sacrum is also more curved posteriorly. This could be explained by the need for as much space as possible for the birth canal. The articulating coccyx in females is also generally observed to be straighter and more flexible than the coccyx of a male, for the same reason. [ 2 ] [ 10 ] Because the female pelvic bones are in general further apart from one another than those of the male pelvis, the acetabula in a female are positioned more medially and further apart from one another. It is this orientation that allows for the stereotypical swinging motion of a female's hips while walking. [ 2 ] The acetabula differ not only in distance but in depth as well. It has been found that female acetabula have a greater depth than those of males, but are also paired with a smaller femoral head; this in turn creates a more stable hip joint. [ 10 ] One of the last key differences can be seen in the auricular surface of the pelvic bones.
The auricular surface, where the sacroiliac joint articulates, generally has a rougher texture in females than in males. [ 11 ] This difference in the texture of the articulating surface may be due to the differences in shape of the sacrum between males and females. These key differences can be examined and used to determine biological sex from a set of pelvic bones, all stemming from the need for bipedal locomotion combined, in females, with the needs of childbearing and childbirth. Early human ancestors, hominids, originally gave birth in a similar way to non-human primates, because early obligate quadrupedal individuals would have retained a skeletal structure similar to that of the great apes. Most non-human primates today have neonatal heads that are close in size to the mother's birth canal; even so, female primates do not need assistance in birthing, and often seek seclusion away from others of their species when giving birth. [ 9 ] In modern humans, parturition (childbirth) differs greatly from that of the rest of the primates because of both the pelvic shape of the mother and the neonatal shape of the infant. Further adaptations that evolved to cope with bipedalism and larger craniums were also important, such as neonatal rotation of the infant, shorter gestation length, assistance with birth, and a malleable neonatal head. Neonatal rotation was a solution for humans evolving larger brain sizes. Comparative zoological analysis has shown that the size of the human brain is anomalous, as humans have brains that are significantly larger than those of other animals of similar proportions. Even among the great apes, humans are distinctive in this regard, having brains three to four times larger than those of chimpanzees, humans' nearest relatives. Although the close correspondence between the neonatal cranium and the maternal pelvis in monkeys is also characteristic of humans, the orientation of the pelvic diameters differs. On average, a human fetus is nearly twice as large in relation to its mother's weight as would be expected for another similarly sized primate. [ 2 ] The extremely close correspondence between the fetal head and the maternal pelvic dimensions requires that these dimensions line up at all points (inlet, midplane, and outlet) during the birth process. [ 11 ] During delivery, neonatal rotation occurs when the body is rotated to align the head and shoulders transversely on entering the lesser pelvis, otherwise known as internal rotation. The fetus then rotates longitudinally to exit the birth canal, which is known as external rotation. In humans, the long axes of the inlet and the outlet of the obstetric canal lie perpendicular to each other. [ 2 ] This is an important mechanism because growth in the size of the cranium, as well as in the width of the shoulders, makes it more difficult for the infant to fit through the pelvis. [ 2 ] It enables the largest dimensions of the fetal head to align with the largest dimensions of each plane of the maternal pelvis as labor progresses. [ 2 ] This differs in non-human primates, as there is no need for neonatal rotation when the birth canal is wide enough to accommodate the infant.
[ 11 ] This elaborate mechanism of labor, which requires a constant readjustment of the fetal head in relation to the bony pelvis (and which may vary somewhat depending on the shape of the pelvis in question), is completely different from the obstetrical mechanics of the other higher primates, whose infants generally drop through the pelvis without any rotation or realignment. [ 2 ] In contrast to the narrow shoulders of monkeys and higher primates, which are able to pass through the birth canal without any rotation, modern humans have broad, rigid shoulders, which generally require the same series of rotations that the head undergoes in order to travel through. [ 11 ] Due to the evolution of bipedalism in humans, the pelvis evolved to have a shorter, more forward-curved ilium and a broader sacrum in order to support walking on two legs. This caused the birth canal to shrink and take on a more oval shape, so the infant must undergo specific movements to rotate itself into a position in which it can pass through the pelvis. These movements are referred to as the seven cardinal movements , in which the infant rotates itself within the widest diameter of the pelvis so that the narrowest aspect of the fetal body aligns with the narrowest diameter of the pelvis. [ 12 ] These movements include engagement, descent, flexion, internal rotation, extension, external rotation, and expulsion. While the seven cardinal movements are considered the normal mechanism for labor and delivery of human babies, [ 12 ] pelvic sizes and shapes vary among female humans, which can increase the risk of errors in rotation and delivery, especially since these movements are performed entirely by the baby. One of the biggest issues with the pelvic shape for childbirth is the ischial spine . Since the ischial spines support the pelvic floor, spines that are too far apart can lead to weakened pelvic floor muscles. This can cause issues as pregnancy progresses, such as difficulty carrying the fetus to full term. Another complication that can occur during human childbirth is shoulder dystocia, in which a shoulder becomes stuck in the birth canal. [ 13 ] This can lead to a fractured humerus or clavicle in the fetus and to postpartum hemorrhaging in the mother. [ 13 ] Thus, these neonatal rotations are important in allowing the baby to pass safely through the pelvis and in ensuring the health of the mother as well. Gestation length in humans is believed to be shorter than in most other primates of comparable size. The gestation length for humans is about 266 days from conception, roughly eight days short of nine months (about 280 days, or 40 weeks, when counted from the first day of the woman's last menstrual period). During gestation, mothers must support the metabolic cost of tissue growth, both of the fetus and of the mother, as well as the ever-increasing metabolic rate of the growing fetus. [ 14 ] Comparative data from across mammals and primates suggest that there is a metabolic constraint on how large and energetically expensive a fetus can grow before it must leave the mother's body. [ 14 ] It is thought that this shorter gestation period is an adaptation to ensure the survival of mother and child because it leads to altriciality . Neonatal brain and body size have increased in the hominin lineage, and human maternal investment is greater than expected for a primate of similar body mass.
[ 14 ] The obstetrical dilemma hypothesis suggests that, in order for childbirth to succeed, the infant must be born earlier and earlier, thereby making the child increasingly developmentally premature. [ 14 ] The concept of the infant being born underdeveloped is called altriciality . Humans are born with an underdeveloped brain: only about 25% of adult brain size is reached at birth, as opposed to non-human primates, whose infants are born with 45–50% of adult brain size. [ 15 ] Scientists have believed that the shorter gestation period can be attributed to the narrower pelvis, as the baby must be born before its head reaches a volume that cannot be accommodated by the obstetric canal. Human infants are also almost always born with assistance from other humans because of the way the pelvis is shaped. Since the pelvis and the opening of the birth canal face backwards, humans have difficulty giving birth by themselves because they cannot guide the baby out of the canal. Non-human primates seek seclusion when giving birth because they do not need any help, the pelvis and opening being oriented more forward. [ 11 ] There is no evidence to ascertain at what point in human evolution birth assistance arose, but some researchers have suggested it dates to Homo habilis . [ 16 ] Human infants depend on their parents much more, and for much longer, than other primates. [ 7 ] [ 14 ] Humans spend a great deal of time caring for their children as they develop, whereas the young of many other species can stand on their own from birth. The faster an infant develops, the higher the reproductive output of a female can be. [ 17 ] So in humans, the cost of the slow development of their infants is that humans reproduce relatively slowly; this burden is offset in part by cooperative breeding , in which individuals other than the parents help care for the offspring. Humans are born with a very malleable fetal head, which is not fully developed when the infant exits the womb. [ 2 ] The soft spot on the crown of the infant allows the head to be compressed in order to better fit through the birth canal without obstructing it. [ 7 ] This allows the head to develop further after birth and the cranium to continue growing without affecting the birthing process. The obstetrical dilemma hypothesis has faced several challenges as more data have been collected and analyzed. Several different fields of study have taken an interest in understanding more about the human birth process and that of human ancestor species. Some studies have shown that higher brain growth rates happen earlier in ontogeny than previously thought, [ 18 ] which challenges the idea that the explanation of the obstetrical dilemma is that humans are born with underdeveloped brains. This is because if brain growth rates were largest in early development, that is when brain size would increase the most, and a premature birth would therefore not avoid much of the growth in head size, since most of that growth would already have happened. Also, it has been suggested that maternal pelvic dimensions are sensitive to some ecological factors. There has been substantial evidence linking body mass to brain mass, leading to the identification of maternal metabolism as a key factor in the growth of the fetus. Maternal constraints could be largely due to thermal stress or energy availability. A larger brain mass in the neonate corresponds to more energy needed to sustain it. It takes much more energy from the mother if the brain develops fully in the womb. If maternal energy is the limiting factor, then an infant can only grow as much as the mother can sustain.
Also, because fetal size is positively correlated with maternal energy use, thermal stress is an issue: the larger the fetus, the more the mother can suffer heat stress. [ 6 ] Additional studies suggest that other factors may further complicate the obstetrical dilemma hypothesis. One of these is dietary shifts, possibly due to the emergence of agriculture. This can be a consequence both of the change in diet itself and of the increase in population density since agriculture was developed; more people leads to more disease. [ 6 ] Studies in twins have also suggested that pelvic size may owe more to the environment in which people live than to their genetics. [ 19 ] Another study challenges the idea that narrower hips are optimized for locomotion: a Late Stone Age population in Southern Africa that depended largely on terrestrial mobility was found to have women of uncharacteristically small body size with large pelvic canals. [ 6 ] The energetics of gestation and growth (EGG) hypothesis offers a direct challenge to the obstetrical dilemma hypothesis, attributing the constraints on gestation and parturition to the energy limitations of the mother. Studies of professional athletes and pregnant women have shown that there is an upper limit to the amount of energy a woman can expend before deleterious effects set in: approximately 2.1 times her basal metabolic rate. During pregnancy, the growing brain mass and body length of the fetus correspond to more energy needed to sustain it. This results in a competing balance between the fetus's demand for energy and the maternal ability to meet that demand. At approximately nine months' gestation, the fetus's energy needs surpass the mother's energy limit, correlating with the average time of birth. [ 14 ] The newly born infant can then be sustained on breast milk, which is a more efficient, less energy-demanding mechanism of nutrient transfer between mother and child. [ 20 ] Additionally, this hypothesis holds that, contrary to the obstetrical dilemma, an increased pelvic size would not be deleterious to bipedalism. A study of the running mechanics of males and females showed that an increased pelvic size was associated with neither an increased metabolic nor an increased structural demand on a woman. [ 21 ] The obstetrical dilemma hypothesis has also been challenged conceptually on the basis of new studies. The authors argue that the obstetrical dilemma hypothesis assumes that human, and therefore hominid, childbirth has been a painful and dangerous experience throughout the species' evolution. [ 22 ] This assumption may be fundamentally false, as many early analyses focused on maternal death data drawn primarily from females of European descent in Western Europe and the United States during the 19th and 20th centuries, a limited population. [ 22 ] In a recent study, a covariation between human pelvis shape, stature, and head size was reported. Females with a large head were reported to possess a birth canal that can better accommodate large-headed neonates, and mothers with large heads usually give birth to neonates with large heads. The detected pattern of covariation therefore contributes to easing childbirth and has likely evolved in response to strong correlational selection. [ 23 ] A recent study aimed to evaluate the original ideas underlying the 'obstetrical dilemma' and provide a detailed, more complex explanation for the tight fetopelvic fit observed in humans.
They propose that the original obstetrical dilemma hypothesis remains valuable as a foundation to explain the complex combination of evolutionary, ecological, and biocultural pressures that constrain maternal pelvic form and fetal size. [ 24 ]
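The metabolic ceiling invoked by the EGG hypothesis above can be illustrated with a toy calculation. This is a hedged sketch only: the 2.1 × basal-metabolic-rate ceiling is taken from the text, but the baseline BMR figure and the simple linear model of rising pregnancy energy expenditure are hypothetical illustration values, not data from the cited studies.

# Toy illustration (Python) of the EGG hypothesis' energy-ceiling argument.
# All numbers are hypothetical except the 2.1 x BMR ceiling quoted in the text.

BMR = 1400.0                     # kcal/day, hypothetical maternal basal rate
CEILING = 2.1 * BMR              # sustainable maximum, per the text

# Hypothetical linear model: total expenditure starts at 1.2 x BMR and rises
# by 0.1 x BMR for each month of gestation.
START_MULTIPLIER = 1.2
RISE_PER_MONTH = 0.1

crossing_month = (CEILING / BMR - START_MULTIPLIER) / RISE_PER_MONTH
print(f"Ceiling of {CEILING:.0f} kcal/day reached at ~{crossing_month:.1f} months")
# With these toy numbers the ceiling is reached at about 9 months, mirroring
# the claim that birth coincides with the mother's metabolic limit.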
https://en.wikipedia.org/wiki/Obstetrical_dilemma
The obturator process is an anatomical feature on the pelvis of archosaurs . It is a raised area of the ischium bone of the pelvis. [ 1 ] It is the origin of muscles that attach to the femur and aid in running. These muscles are called M. pubo-ischio-femoralis externus 1 and 2 in crocodylians . In birds the muscles are called the M. obturatorius lateralis and M. obturatorius medialis . They insert on the greater trochanter of the femur . [ 2 ] See proximodorsal process
https://en.wikipedia.org/wiki/Obturator_process
An obturator ring was a type of piston ring used in the early rotary engines of some World War I fighter aircraft for improved sealing in the presence of cylinder distortion. The cylinders of rotary aircraft engines (engines with the crankshaft fixed to the airframe and rotating cylinders) suffered from uneven cooling, as the side facing the direction of rotation received more cooling air, which led to thermal distortion. To keep weight down, the cylinders on rotary engines had very thin walls (1.5 mm) [ 1 ] and some had no cylinder liners. On engine types without cylinder liners, obturator rings, made of bronze in the early Gnome engines, [ 2 ] were fitted, as these were soft enough not to damage the cylinder walls and could flex to the shape of the cylinder. In operation, wear on the rings was considerable, and engines needed to be overhauled about every 20 hours. [ 1 ] The reliability of Gnome engines license-built by The British Gnome and Le Rhone Engine Co. was improved, with an overhaul life of about 80 hours being achieved, mainly as a result of using a special tool to roll the 'L'-section obturator rings. [ 3 ] Clerget rotary aircraft engines also used obturator rings, which were prone to overheating and seizure. [ 4 ] Le Rhône and Bentley BR1 / BR2 rotary engines used cylinder liners and were sealed using conventional piston rings rather than obturator rings. [ 5 ] [ 6 ] An 'L'-section obturator ring is shown in Patent US 1378109A, "Obturator ring".
https://en.wikipedia.org/wiki/Obturator_ring
Ocarina Networks was a technology company selling a hardware / software solution designed to reduce data footprints with file-aware storage optimization; it became a subsidiary of Dell . [ 1 ] Its flagship product, the Ocarina Appliance/Reader, released in April 2008, used patented data compression techniques incorporating such methods as record linkage and context-based lossless data compression . The product included a hardware-appliance-based compressor, the Ocarina Optimizer (Models 2400, [ 2 ] 3400, 4600), and a real-time decompressor, the software-based Ocarina Reader. Its solution worked by identifying redundancy at a global file system level and applying specific algorithms for different data formats, such as algorithms specific to images, text, executables, seismic data, and other " unstructured data ". [ 1 ] Ocarina's Optimizers worked with existing storage systems through standard network protocols such as NFS, or were directly integrated with partner vendors' storage systems. On July 19, 2010, Dell announced plans to acquire Ocarina Networks. [ 5 ] The transaction was completed on July 31, 2010. [ 6 ] In late 2010, the original Ocarina Optimizer product family was removed from the market, enabling the Ocarina team to focus on the integration of deduplication and compression into Dell storage products. The most notable examples were the DR family of deduplication appliances, launched in 2012, [ 7 ] and the integration of deduplication into Dell's Fluid File System. [ 8 ] The company's ECOsystem (Extract, Correlate, Optimize) provided data reduction technology, offering both deduplication and content-aware data compression in a reliable, scalable, policy-based package. ECOsystem consisted of three primary components: an optimizer, a reader, and a management and reporting framework. These components were delivered in software or appliance form depending on the customer, application, and underlying storage solution. The standard ECOsystem workflow was a post-process: files were first stored to disk in native form. Policies were used to specify which files were to be optimized (based on age, location, or file type) and what compression settings to use. Policies were commonly used to avoid optimization of files that were actively being modified. ECOsystem could also be configured to migrate optimized data to a secondary tier of lower-cost storage for disk-based archival applications. ECOsystem was content-aware, with the selection of a compression solution based on the type of data being processed. This went beyond file-extension filtering: ECOsystem recursively decomposed compound files until elemental text, media, or binary components were identified. At the heart of the optimizer software was a context-weighted neural net that applied the most effective compression solution based on the nature of the elemental file component identified, and that efficiently remembered optimal settings based on similar files processed. ECOsystem was in most cases highly effective at achieving results on novel or proprietary file types, as well as on pre-compressed media such as JPEG images and MPEG-4 video. Ocarina had successfully processed data in over 600 file formats. [ citation needed ] Two forms of Ocarina's post-processing workflow were available: ECOmax utilized all available compression methods to shrink data, including on-disk structures that maximized utilization of physical blocks.
The ECOmax workflow required the use of the ECOreader, run-anywhere software that efficiently decoded data for transparent read-back. ECOmax could be applied to any file or data types, including specialized files used by various vertical industries. The NFO workflow was designed specifically for web-based media companies. In NFO, media files (for example JPEGs) were stored in their native state, which eliminated the need for decoding and allowed customers to capture data-reduction benefits throughout the workflow, including web distribution (bandwidth savings and a better end-user experience) and movement into archival systems. NFO provided "visually identical" compression that tailored image parameters to the sensitivities of the Human Visual System Model and the intended use of the image, without creating any perceivable quality degradation. Note that many of the features and capabilities of the Ocarina ECOsystem were not included in later Dell products.
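To illustrate the general kind of post-process, content-aware data reduction described above, here is a small Python sketch. It is not Ocarina's ECOsystem code or algorithms, which were proprietary: the fixed chunk size, the hash-based deduplication, and the per-type codec choice are all generic assumptions made for the example.

# Generic illustration of post-process data reduction: fixed-size chunk
# deduplication followed by a crude content-aware compression choice.
# This is NOT Ocarina's implementation; it only sketches the two ideas the
# article describes (redundancy elimination and content-aware compression).
import hashlib
import zlib

CHUNK_SIZE = 64 * 1024  # arbitrary example chunk size

def dedupe_chunks(data, store):
    """Split data into chunks; store each unique chunk once, keyed by hash."""
    refs = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # keep only the first copy seen
        refs.append(digest)
    return refs                           # the file becomes a list of chunk references

def compress_by_type(filename, chunk):
    """Skip already-compressed media; otherwise apply general-purpose compression."""
    if filename.lower().endswith((".jpg", ".jpeg", ".mp4")):
        return chunk                      # recompressing these rarely helps
    return zlib.compress(chunk, 9)        # lossless fallback for everything else

# Example: two files with identical content are stored only once.
store = {}
a = dedupe_chunks(b"report body " * 10_000, store)
b = dedupe_chunks(b"report body " * 10_000, store)
print(len(store), "unique chunk(s) stored for", len(a) + len(b), "references")

Real systems of this kind typically use variable-size, content-defined chunking and much richer format detection (the article describes recursive decomposition of compound files); the sketch keeps only the skeleton.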
https://en.wikipedia.org/wiki/Ocarina_Networks
In philosophy , Occam's razor (also spelled Ockham's razor or Ocham's razor ; Latin : novacula Occami ) is the problem-solving principle that recommends searching for explanations constructed with the smallest possible set of elements. It is also known as the principle of parsimony or the law of parsimony ( Latin : lex parsimoniae ). Attributed to William of Ockham , a 14th-century English philosopher and theologian , it is frequently cited as Entia non sunt multiplicanda praeter necessitatem , which translates as "Entities must not be multiplied beyond necessity", [ 1 ] [ 2 ] although Occam never used these exact words. Popularly, the principle is sometimes paraphrased as "of two competing theories, the simpler explanation of an entity is to be preferred." [ 3 ] This philosophical razor advocates that when presented with competing hypotheses about the same prediction and both hypotheses have equal explanatory power, one should prefer the hypothesis that requires the fewest assumptions, [ 4 ] and that this is not meant to be a way of choosing between hypotheses that make different predictions. Similarly, in science, Occam's razor is used as an abductive heuristic in the development of theoretical models rather than as a rigorous arbiter between candidate models. [ 5 ] [ 6 ] The phrase Occam's razor did not appear until a few centuries after William of Ockham's death in 1347. Libert Froidmont , in his 1649 Philosophia Christiana de Anima ( On Christian Philosophy of the Soul ), gives him credit for the phrase, speaking of " novacula occami ". [ 7 ] Ockham did not invent this principle, but its fame—and its association with him—may be due to the frequency and effectiveness with which he used it. [ 8 ] Ockham stated the principle in various ways, but the most popular version, "Entities are not to be multiplied without necessity" ( Non sunt multiplicanda entia sine necessitate ) was formulated by the Irish Franciscan philosopher John Punch in his 1639 commentary on the works of Duns Scotus . [ 9 ] The origins of what has come to be known as Occam's razor are traceable to the works of earlier philosophers such as John Duns Scotus (1265–1308), Robert Grosseteste (1175–1253), Maimonides (Moses ben-Maimon, 1138–1204), and even Aristotle (384–322 BC). [ 10 ] [ 11 ] Aristotle writes in his Posterior Analytics , "We may assume the superiority ceteris paribus [other things being equal] of the demonstration which derives from fewer postulates or hypotheses." Ptolemy ( c. AD 90 – c. 168 ) stated, "We consider it a good principle to explain the phenomena by the simplest hypothesis possible." [ 12 ] Phrases such as "It is vain to do with more what can be done with fewer" and "A plurality is not to be posited without necessity" were commonplace in 13th-century scholastic writing. [ 12 ] Robert Grosseteste, in Commentary on [Aristotle's] the Posterior Analytics Books ( Commentarius in Posteriorum Analyticorum Libros ) ( c. 1217–1220 ), declares: "That is better and more valuable which requires fewer, other circumstances being equal... For if one thing were demonstrated from many and another thing from fewer equally known premises, clearly that is better which is from fewer because it makes us know quickly, just as a universal demonstration is better than particular because it produces knowledge from fewer premises. Similarly in natural science, in moral science, and in metaphysics the best is that which needs no premises and the better that which needs the fewer, other circumstances being equal." 
[ 13 ] The Summa Theologica of Thomas Aquinas (1225–1274) states that "it is superfluous to suppose that what can be accounted for by a few principles has been produced by many." Aquinas uses this principle to construct an objection to God's existence , an objection that he in turn answers and refutes generally (cf. quinque viae ), and specifically, through an argument based on causality . [ 14 ] Hence, Aquinas acknowledges the principle that today is known as Occam's razor, but prefers causal explanations to other simple explanations (cf. also Correlation does not imply causation ). William of Ockham ( circa 1287–1347) was an English Franciscan friar and theologian , an influential medieval philosopher and a nominalist . His popular fame as a great logician rests chiefly on the maxim attributed to him and known as Occam's razor. The term razor refers to distinguishing between two hypotheses either by "shaving away" unnecessary assumptions or cutting apart two similar conclusions. While it has been claimed that Occam's razor is not found in any of William's writings, [ 15 ] one can cite statements such as Numquam ponenda est pluralitas sine necessitate ("Plurality must never be posited without necessity"), which occurs in his theological work on the Sentences of Peter Lombard ( Quaestiones et decisiones in quattuor libros Sententiarum Petri Lombardi ; ed. Lugd., 1495, i, dist. 27, qu. 2, K). Nevertheless, the precise words sometimes attributed to William of Ockham, Entia non sunt multiplicanda praeter necessitatem (Entities must not be multiplied beyond necessity), [ 16 ] are absent in his extant works; [ 17 ] this particular phrasing comes from John Punch , [ 18 ] who described the principle as a "common axiom" ( axioma vulgare ) of the Scholastics. [ 9 ] William of Ockham himself seems to restrict the operation of this principle in matters pertaining to miracles and God's power, considering a plurality of miracles possible in the Eucharist [ further explanation needed ] simply because it pleases God. [ 12 ] This principle is sometimes phrased as Pluralitas non est ponenda sine necessitate ("Plurality should not be posited without necessity"). [ 19 ] In his Summa Totius Logicae , i. 12, William of Ockham cites the principle of economy, Frustra fit per plura quod potest fieri per pauciora ("It is futile to do with more things that which can be done with fewer"; Thorburn, 1918, pp. 352–53; Kneale and Kneale, 1962, p. 243.) To quote Isaac Newton , "We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances. Therefore, to the same natural effects we must, as far as possible, assign the same causes." [ 20 ] [ 21 ] In the sentence hypotheses non fingo , Newton affirms the success of this approach. Bertrand Russell offers a particular version of Occam's razor: "Whenever possible, substitute constructions out of known entities for inferences to unknown entities." [ 22 ] Around 1960, Ray Solomonoff founded the theory of universal inductive inference , the theory of prediction based on observations – for example, predicting the next symbol based upon a given series of symbols. The only assumption is that the environment follows some unknown but computable probability distribution. This theory is a mathematical formalization of Occam's razor. [ 23 ] [ 24 ] [ 25 ] Another technical approach to Occam's razor is ontological parsimony . [ 26 ] Parsimony means spareness and is also referred to as the Rule of Simplicity. 
This is considered a strong version of Occam's razor. [ 27 ] [ 28 ] A variation used in medicine is called the " Zebra ": a physician should reject an exotic medical diagnosis when a more commonplace explanation is more likely, derived from Theodore Woodward 's dictum "When you hear hoofbeats, think of horses not zebras". [ 29 ] Ernst Mach formulated the stronger version of Occam's razor into physics , which he called the Principle of Economy stating: "Scientists must use the simplest means of arriving at their results and exclude everything not perceived by the senses." [ 30 ] This principle goes back at least as far as Aristotle, who wrote "Nature operates in the shortest way possible." [ 27 ] The idea of parsimony or simplicity in deciding between theories, though not the intent of the original expression of Occam's razor, has been assimilated into common culture as the widespread layman's formulation that "the simplest explanation is usually the correct one." [ 27 ] Prior to the 20th century, it was a commonly held belief that nature itself was simple and that simpler hypotheses about nature were thus more likely to be true. This notion was deeply rooted in the aesthetic value that simplicity holds for human thought and the justifications presented for it often drew from theology . [ clarification needed ] Thomas Aquinas made this argument in the 13th century, writing, "If a thing can be done adequately by means of one, it is superfluous to do it by means of several; for we observe that nature does not employ two instruments [if] one suffices." [ 31 ] Beginning in the 20th century, epistemological justifications based on induction , logic , pragmatism , and especially probability theory have become more popular among philosophers. [ 7 ] Occam's razor has gained strong empirical support in helping to converge on better theories (see Uses section below for some examples). In the related concept of overfitting , excessively complex models are affected by statistical noise (a problem also known as the bias–variance tradeoff ), whereas simpler models may capture the underlying structure better and may thus have better predictive performance. It is, however, often difficult to deduce which part of the data is noise (cf. model selection , test set , minimum description length , Bayesian inference , etc.). The razor's statement that "other things being equal, simpler explanations are generally better than more complex ones" is amenable to empirical testing. Another interpretation of the razor's statement would be that "simpler hypotheses are generally better than the complex ones". The procedure to test the former interpretation would compare the track records of simple and comparatively complex explanations. If one accepts the first interpretation, the validity of Occam's razor as a tool would then have to be rejected if the more complex explanations were more often correct than the less complex ones (while the converse would lend support to its use). If the latter interpretation is accepted, the validity of Occam's razor as a tool could possibly be accepted if the simpler hypotheses led to correct conclusions more often than not. Even if some increases in complexity are sometimes necessary, there still remains a justified general bias toward the simpler of two competing explanations. To understand why, consider that for each accepted explanation of a phenomenon, there is always an infinite number of possible, more complex, and ultimately incorrect, alternatives. 
This is so because one can always burden a failing explanation with an ad hoc hypothesis . Ad hoc hypotheses are justifications that prevent theories from being falsified. For example, if a man, accused of breaking a vase, makes supernatural claims that leprechauns were responsible for the breakage, a simple explanation might be that the man did it, but ongoing ad hoc justifications (e.g., "... and that's not me breaking it on the film; they tampered with that, too") could successfully prevent complete disproof. This endless supply of elaborate competing explanations, called saving hypotheses, cannot be technically ruled out – except by using Occam's razor. [ 32 ] [ 33 ] [ 34 ] Any more complex theory might still possibly be true. A study of the predictive validity of Occam's razor found 32 published papers that included 97 comparisons of economic forecasts from simple and complex forecasting methods. None of the papers provided a balance of evidence that complexity of method improved forecast accuracy. In the 25 papers with quantitative comparisons, complexity increased forecast errors by an average of 27 percent. [ 35 ] One justification of Occam's razor is a direct result of basic probability theory . By definition, all assumptions introduce possibilities for error; if an assumption does not improve the accuracy of a theory, its only effect is to increase the probability that the overall theory is wrong. There have also been other attempts to derive Occam's razor from probability theory, including notable attempts made by Harold Jeffreys and E. T. Jaynes . The probabilistic (Bayesian) basis for Occam's razor is elaborated by David J. C. MacKay in chapter 28 of his book Information Theory, Inference, and Learning Algorithms , [ 36 ] where he emphasizes that a prior bias in favor of simpler models is not required. William H. Jefferys and James O. Berger (1991) generalize and quantify the original formulation's "assumptions" concept as the degree to which a proposition is unnecessarily accommodating to possible observable data. [ 37 ] They state, "A hypothesis with fewer adjustable parameters will automatically have an enhanced posterior probability, due to the fact that the predictions it makes are sharp." [ 37 ] The use of "sharp" here is not only a tongue-in-cheek reference to the idea of a razor, but also indicates that such predictions are more accurate than competing predictions. The model they propose balances the precision of a theory's predictions against their sharpness, preferring theories that sharply make correct predictions over theories that accommodate a wide range of other possible results. This, again, reflects the mathematical relationship between key concepts in Bayesian inference (namely marginal probability , conditional probability , and posterior probability ). The bias–variance tradeoff is a framework that incorporates the Occam's razor principle in its balance between overfitting (associated with lower bias but higher variance) and underfitting (associated with lower variance but higher bias). [ 38 ] Karl Popper argues that a preference for simple theories need not appeal to practical or aesthetic considerations. Our preference for simplicity may be justified by its falsifiability criterion: we prefer simpler theories to more complex ones "because their empirical content is greater; and because they are better testable". [ 39 ] The idea here is that a simple theory applies to more cases than a more complex one, and is thus more easily falsifiable. 
Popper's argument again compares a simple theory with a more complex theory where both explain the data equally well. The philosopher of science Elliott Sober once argued along the same lines as Popper, tying simplicity to "informativeness": the simplest theory is the more informative, in the sense that it requires less information to answer a question. [ 40 ] He has since rejected this account of simplicity, purportedly because it fails to provide an epistemic justification for simplicity. He now believes that simplicity considerations (and considerations of parsimony in particular) do not count unless they reflect something more fundamental. Philosophers, he suggests, may have made the error of hypostatizing simplicity (i.e., endowing it with a sui generis existence), when it has meaning only when embedded in a specific context (Sober 1992). If we fail to justify simplicity considerations on the basis of the context in which we use them, we may have no non-circular justification: "Just as the question 'why be rational?' may have no non-circular answer, the same may be true of the question 'why should simplicity be considered in evaluating the plausibility of hypotheses? ' " [ 41 ] Richard Swinburne argues for simplicity on logical grounds: ... the simplest hypothesis proposed as an explanation of phenomena is more likely to be the true one than is any other available hypothesis, that its predictions are more likely to be true than those of any other available hypothesis, and that it is an ultimate a priori epistemic principle that simplicity is evidence for truth. According to Swinburne, since our choice of theory cannot be determined by data (see Underdetermination and Duhem–Quine thesis ), we must rely on some criterion to determine which theory to use. Since it is absurd to have no logical method for settling on one hypothesis amongst an infinite number of equally data-compliant hypotheses, we should choose the simplest theory: "Either science is irrational [in the way it judges theories and predictions probable] or the principle of simplicity is a fundamental synthetic a priori truth." [ 42 ] Ludwig Wittgenstein's Tractatus Logico-Philosophicus likewise comments on Occam's razor and on the related concept of "simplicity". In science , Occam's razor is used as a heuristic to guide scientists in developing theoretical models rather than as an arbiter between published models. [ 5 ] [ 6 ] In physics , parsimony was an important heuristic in the development and application of the principle of least action by Pierre Louis Maupertuis and Leonhard Euler , [ 43 ] in Albert Einstein 's formulation of special relativity , [ 44 ] [ 45 ] and in the development of quantum mechanics by Max Planck , Werner Heisenberg and Louis de Broglie . [ 6 ] [ 46 ] In chemistry , Occam's razor is often an important heuristic when developing a model of a reaction mechanism . [ 47 ] [ 48 ] Although it is useful as a heuristic in developing models of reaction mechanisms, it has been shown to fail as a criterion for selecting among some published models. [ 6 ] In this context, Einstein himself expressed caution when he formulated Einstein's Constraint : "It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience." 
[ 49 ] [ 50 ] [ 51 ] An often-quoted version of this constraint (which cannot be verified as posited by Einstein himself) [ 52 ] reduces this to "Everything should be kept as simple as possible, but not simpler." In the scientific method , Occam's razor is not considered an irrefutable principle of logic or a scientific result; the preference for simplicity in the scientific method is based on the falsifiability criterion. For each accepted explanation of a phenomenon, there may be an extremely large, perhaps even incomprehensible, number of possible and more complex alternatives. Since failing explanations can always be burdened with ad hoc hypotheses to prevent them from being falsified, simpler theories are preferable to more complex ones because they tend to be more testable . [ 53 ] [ 54 ] [ 55 ] As a logical principle, Occam's razor would demand that scientists accept the simplest possible theoretical explanation for existing data. However, science has shown repeatedly that future data often support more complex theories than do existing data. Science prefers the simplest explanation that is consistent with the data available at a given time, but the simplest explanation may be ruled out as new data become available. [ 5 ] [ 54 ] That is, science is open to the possibility that future experiments might support more complex theories than demanded by current data and is more interested in designing experiments to discriminate between competing theories than favoring one theory over another based merely on philosophical principles. [ 53 ] [ 54 ] [ 55 ] When scientists use the idea of parsimony, it has meaning only in a very specific context of inquiry. Several background assumptions are required for parsimony to connect with plausibility in a particular research problem. [ clarification needed ] The reasonableness of parsimony in one research context may have nothing to do with its reasonableness in another. It is a mistake to think that there is a single global principle that spans diverse subject matter. [ 55 ] It has been suggested that Occam's razor is a widely accepted example of extraevidential consideration, even though it is entirely a metaphysical assumption. Most of the time, however, Occam's razor is a conservative tool, cutting out "crazy, complicated constructions" and assuring "that hypotheses are grounded in the science of the day", thus yielding "normal" science: models of explanation and prediction. [ 6 ] There are, however, notable exceptions where Occam's razor turns a conservative scientist into a reluctant revolutionary. For example, Max Planck interpolated between the Wien and Jeans radiation laws and used Occam's razor logic to formulate the quantum hypothesis, even resisting that hypothesis as it became more obvious that it was correct. [ 6 ] Appeals to simplicity were used to argue against the phenomena of meteorites, ball lightning , continental drift , and reverse transcriptase . [ 56 ] One can argue for atomic building blocks for matter, because it provides a simpler explanation for the observed reversibility of both mixing [ clarification needed ] and chemical reactions as simple separation and rearrangements of atomic building blocks. At the time, however, the atomic theory was considered more complex because it implied the existence of invisible particles that had not been directly detected. 
Ernst Mach and the logical positivists rejected John Dalton 's atomic theory until the reality of atoms was more evident in Brownian motion , as shown by Albert Einstein . [ 57 ] In the same way, postulating the aether is more complex than transmission of light through a vacuum . At the time, however, all known waves propagated through a physical medium, and it seemed simpler to postulate the existence of a medium than to theorize about wave propagation without a medium. Likewise, Isaac Newton 's idea of light particles seemed simpler than Christiaan Huygens 's idea of waves, so many favored it. In this case, as it turned out, neither the wave—nor the particle—explanation alone suffices, as light behaves like waves and like particles . Three axioms presupposed by the scientific method are realism (the existence of objective reality), the existence of natural laws, and the constancy of natural law. Rather than depend on provability of these axioms, science depends on the fact that they have not been objectively falsified. Occam's razor and parsimony support, but do not prove, these axioms of science. The general principle of science is that theories (or models) of natural law must be consistent with repeatable experimental observations. This ultimate arbiter (selection criterion) rests upon the axioms mentioned above. [ 54 ] If multiple models of natural law make exactly the same testable predictions, they are equivalent and there is no need for parsimony to choose a preferred one. For example, Newtonian , Hamiltonian and Lagrangian classical mechanics are equivalent. Physicists have no interest in using Occam's razor to say the other two are wrong. Likewise, there is no demand for simplicity principles to arbitrate between wave and matrix formulations of quantum mechanics. Science often does not demand arbitration or selection criteria between models that make the same testable predictions. [ 54 ] Biologists or philosophers of biology use Occam's razor in either of two contexts both in evolutionary biology : the units of selection controversy and systematics . George C. Williams in his book Adaptation and Natural Selection (1966) argues that the best way to explain altruism among animals is based on low-level (i.e., individual) selection as opposed to high-level group selection. Altruism is defined by some evolutionary biologists (e.g., R. Alexander, 1987; W. D. Hamilton, 1964) as behavior that is beneficial to others (or to the group) at a cost to the individual, and many posit individual selection as the mechanism that explains altruism solely in terms of the behaviors of individual organisms acting in their own self-interest (or in the interest of their genes, via kin selection). Williams was arguing against the perspective of others who propose selection at the level of the group as an evolutionary mechanism that selects for altruistic traits (e.g., D. S. Wilson & E. O. Wilson, 2007). The basis for Williams's contention is that of the two, individual selection is the more parsimonious theory. In doing so he is invoking a variant of Occam's razor known as Morgan's Canon : "In no case is an animal activity to be interpreted in terms of higher psychological processes, if it can be fairly interpreted in terms of processes which stand lower in the scale of psychological evolution and development." (Morgan 1903). However, more recent biological analyses, such as Richard Dawkins 's The Selfish Gene , have contended that Morgan's Canon is not the simplest and most basic explanation. 
Dawkins argues the way evolution works is that the genes propagated in most copies end up determining the development of that particular species, i.e., natural selection turns out to select specific genes, and this is really the fundamental underlying principle that automatically gives individual and group selection as emergent features of evolution. Zoology provides an example. Muskoxen , when threatened by wolves , form a circle with the males on the outside and the females and young on the inside. This is an example of a behavior by the males that seems to be altruistic. The behavior is disadvantageous to them individually but beneficial to the group as a whole; thus, it was seen by some to support the group selection theory. Another interpretation is kin selection: if the males are protecting their offspring, they are protecting copies of their own alleles. Engaging in this behavior would be favored by individual selection if the cost to the male musk ox is less than half of the benefit received by his calf – which could easily be the case if wolves have an easier time killing calves than adult males. It could also be the case that male musk oxen would be individually less likely to be killed by wolves if they stood in a circle with their horns pointing out, regardless of whether they were protecting the females and offspring. That would be an example of regular natural selection – a phenomenon called "the selfish herd". Systematics is the branch of biology that attempts to establish patterns of relationship among biological taxa, today generally thought to reflect evolutionary history. It is also concerned with their classification. There are three primary camps in systematics: cladists, pheneticists, and evolutionary taxonomists. Cladists hold that classification should be based on synapomorphies (shared, derived character states), pheneticists contend that overall similarity (synapomorphies and complementary symplesiomorphies ) is the determining criterion, while evolutionary taxonomists say that both genealogy and similarity count in classification (in a manner determined by the evolutionary taxonomist). [ 58 ] [ 59 ] It is among the cladists that Occam's razor is applied, through the method of cladistic parsimony . Cladistic parsimony (or maximum parsimony ) is a method of phylogenetic inference that yields phylogenetic trees (more specifically, cladograms). Cladograms are branching, diagrams used to represent hypotheses of relative degree of relationship, based on synapomorphies . Cladistic parsimony is used to select as the preferred hypothesis of relationships the cladogram that requires the fewest implied character state transformations (or smallest weight, if characters are differentially weighted). Critics of the cladistic approach often observe that for some types of data, parsimony could produce the wrong results, regardless of how much data is collected (this is called statistical inconsistency, or long branch attraction ). However, this criticism is also potentially true for any type of phylogenetic inference, unless the model used to estimate the tree reflects the way that evolution actually happened. Because this information is not empirically accessible, the criticism of statistical inconsistency against parsimony holds no force. [ 60 ] For a book-length treatment of cladistic parsimony, see Elliott Sober 's Reconstructing the Past: Parsimony, Evolution, and Inference (1988). 
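To show how the "fewest implied character state transformations" criterion is actually scored, the following is a minimal sketch of Fitch's small-parsimony algorithm for a single binary character on two hypothetical four-taxon cladograms. The taxa, character states, and tree shapes are invented; real analyses score many characters across many candidate trees.

```python
# Minimal sketch of Fitch parsimony scoring for one character on a rooted binary
# tree given as nested tuples; leaves are taxon names. Data are invented.

def fitch_score(tree, states):
    """Return (possible state set at this node, number of implied state changes)."""
    if isinstance(tree, str):                    # leaf: its observed character state
        return {states[tree]}, 0
    left_set, left_cost = fitch_score(tree[0], states)
    right_set, right_cost = fitch_score(tree[1], states)
    common = left_set & right_set
    if common:                                   # subtrees agree: no extra change
        return common, left_cost + right_cost
    return left_set | right_set, left_cost + right_cost + 1   # one more change

states = {"A": "0", "B": "0", "C": "1", "D": "1"}   # character state per taxon
tree_ab_cd = (("A", "B"), ("C", "D"))               # hypothesis grouping A+B and C+D
tree_ac_bd = (("A", "C"), ("B", "D"))               # hypothesis grouping A+C and B+D

print(fitch_score(tree_ab_cd, states)[1])   # 1 implied change
print(fitch_score(tree_ac_bd, states)[1])   # 2 implied changes
```

Under cladistic parsimony the first cladogram, which implies only one character state change, would be preferred over the second.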
For a discussion of both uses of Occam's razor in biology, see Sober's article "Let's Razor Ockham's Razor" (1990). Other methods for inferring evolutionary relationships use parsimony in a more general way. Likelihood methods for phylogeny use parsimony as they do for all likelihood tests, with hypotheses requiring fewer differing parameters (i.e., numbers or different rates of character change or different frequencies of character state transitions) being treated as null hypotheses relative to hypotheses requiring more differing parameters. Thus, complex hypotheses must predict data much better than do simple hypotheses before researchers reject the simple hypotheses. Recent advances employ information theory , a close cousin of likelihood, which uses Occam's razor in the same way. The choice of the "shortest tree" relative to a not-so-short tree under any optimality criterion (smallest distance, fewest steps, or maximum likelihood) is always based on parsimony. [ 61 ] Francis Crick has commented on potential limitations of Occam's razor in biology. He advances the argument that because biological systems are the products of (an ongoing) natural selection, the mechanisms are not necessarily optimal in an obvious sense. He cautions: "While Ockham's razor is a useful tool in the physical sciences, it can be a very dangerous implement in biology. It is thus very rash to use simplicity and elegance as a guide in biological research." [ 62 ] This is an ontological critique of parsimony. In biogeography , parsimony is used to infer ancient vicariant events or migrations of species or populations by observing the geographic distribution and relationships of existing organisms . Given the phylogenetic tree, ancestral population subdivisions are inferred to be those that require the minimum amount of change. [ citation needed ] In the philosophy of religion , Occam's razor is sometimes applied to the existence of God. William of Ockham himself was a Christian . He believed in God, and in the authority of Christian scripture ; he writes that "nothing ought to be posited without a reason given, unless it is self-evident (literally, known through itself) or known by experience or proved by the authority of Sacred Scripture." [ 63 ] Ockham believed that an explanation has no sufficient basis in reality when it does not harmonize with reason, experience, or the Bible . Unlike many theologians of his time, though, Ockham did not believe God could be logically proven with arguments. To Ockham, science was a matter of discovery; theology was a matter of revelation and faith . He states: "Only faith gives us access to theological truths. The ways of God are not open to reason, for God has freely chosen to create a world and establish a way of salvation within it apart from any necessary laws that human logic or rationality can uncover." [ 64 ] Thomas Aquinas , in the Summa Theologica , uses a formulation of Occam's razor to construct an objection to the idea that God exists, which he refutes directly with a counterargument: [ 65 ] Further, it is superfluous to suppose that what can be accounted for by a few principles has been produced by many. But it seems that everything we see in the world can be accounted for by other principles, supposing God did not exist. For all natural things can be reduced to one principle which is nature; and all voluntary things can be reduced to one principle which is human reason, or will. Therefore there is no need to suppose God's existence. 
In turn, Aquinas answers this with the quinque viae , and addresses the particular objection above with the following answer: Since nature works for a determinate end under the direction of a higher agent, whatever is done by nature must needs be traced back to God, as to its first cause. So also whatever is done voluntarily must also be traced back to some higher cause other than human reason or will, since these can change or fail; for all things that are changeable and capable of defect must be traced back to an immovable and self-necessary first principle, as was shown in the body of the Article. Rather than argue for the necessity of a god, some theists base their belief upon grounds independent of, or prior to, reason, making Occam's razor irrelevant. This was the stance of Søren Kierkegaard , who viewed belief in God as a leap of faith that sometimes directly opposed reason. [ 66 ] This is also the doctrine of Gordon Clark 's presuppositional apologetics , with the exception that Clark never thought the leap of faith was contrary to reason (see also Fideism ). Various arguments in favor of God establish God as a useful or even necessary assumption. Contrastingly some anti-theists hold firmly to the belief that assuming the existence of God introduces unnecessary complexity (e.g., the Ultimate Boeing 747 gambit from Dawkins's The God Delusion [ 67 ] ). [ 68 ] Another application of the principle is to be found in the work of George Berkeley (1685–1753). Berkeley was an idealist who believed that all of reality could be explained in terms of the mind alone. He invoked Occam's razor against materialism , stating that matter was not required by his metaphysics and was thus eliminable. One potential problem with this belief [ for whom? ] is that it's possible, given Berkeley's position, to find solipsism itself more in line with the razor than a God-mediated world beyond a single thinker. Occam's razor may also be recognized in the apocryphal story about an exchange between Pierre-Simon Laplace and Napoleon . It is said that in praising Laplace for one of his recent publications, the emperor asked how it was that the name of God, which featured so frequently in the writings of Lagrange , appeared nowhere in Laplace's. At that, he is said to have replied, "It's because I had no need of that hypothesis." [ 69 ] Though some points of this story illustrate Laplace's atheism , more careful consideration suggests that he may instead have intended merely to illustrate the power of methodological naturalism , or even simply that the fewer logical premises one assumes, the stronger is one's conclusion. In his article "Sensations and Brain Processes" (1959), J. J. C. Smart invoked Occam's razor with the aim to justify his preference of the mind-brain identity theory over spirit-body dualism . Dualists state that there are two kinds of substances in the universe: physical (including the body) and spiritual, which is non-physical. In contrast, identity theorists state that everything is physical, including consciousness, and that there is nothing nonphysical. Though it is impossible to appreciate the spiritual when limiting oneself to the physical, [ citation needed ] Smart maintained that identity theory explains all phenomena by assuming only a physical reality. Subsequently, Smart has been severely criticized for his use (or misuse) of Occam's razor and ultimately retracted his advocacy of it in this context. 
Paul Churchland (1984) states that by itself Occam's razor is inconclusive regarding duality. In a similar way, Dale Jacquette (1994) stated that Occam's razor has been used in attempts to justify eliminativism and reductionism in the philosophy of mind. Eliminativism is the thesis that the ontology of folk psychology including such entities as "pain", "joy", "desire", "fear", etc., are eliminable in favor of an ontology of a completed neuroscience. In penal theory and the philosophy of punishment, parsimony refers specifically to taking care in the distribution of punishment in order to avoid excessive punishment. In the utilitarian approach to the philosophy of punishment, Jeremy Bentham 's "parsimony principle" states that any punishment greater than is required to achieve its end is unjust. The concept is related but not identical to the legal concept of proportionality . Parsimony is a key consideration of the modern restorative justice , and is a component of utilitarian approaches to punishment, as well as the prison abolition movement . Bentham believed that true parsimony would require punishment to be individualised to take account of the sensibility of the individual—an individual more sensitive to punishment should be given a proportionately lesser one, since otherwise needless pain would be inflicted. Later utilitarian writers have tended to abandon this idea, in large part due to the impracticality of determining each alleged criminal's relative sensitivity to specific punishments. [ 70 ] Marcus Hutter's universal artificial intelligence builds upon Solomonoff's mathematical formalization of the razor to calculate the expected value of an action. There are various papers in scholarly journals deriving formal versions of Occam's razor from probability theory, applying it in statistical inference , and using it to come up with criteria for penalizing complexity in statistical inference. Papers [ 71 ] [ 72 ] have suggested a connection between Occam's razor and Kolmogorov complexity . [ 73 ] One of the problems with the original formulation of the razor is that it only applies to models with the same explanatory power (i.e., it only tells us to prefer the simplest of equally good models). A more general form of the razor can be derived from Bayesian model comparison, which is based on Bayes factors and can be used to compare models that do not fit the observations equally well. These methods can sometimes optimally balance the complexity and power of a model. Generally, the exact Occam factor is intractable, but approximations such as Akaike information criterion , Bayesian information criterion , Variational Bayesian methods , false discovery rate , and Laplace's method are used. Many artificial intelligence researchers are now employing such techniques, for instance through work on Occam Learning or more generally on the Free energy principle . Statistical versions of Occam's razor have a more rigorous formulation than what philosophical discussions produce. In particular, they must have a specific definition of the term simplicity , and that definition can vary. For example, in the Kolmogorov – Chaitin minimum description length approach, the subject must pick a Turing machine whose operations describe the basic operations believed to represent "simplicity" by the subject. However, one could always choose a Turing machine with a simple operation that happened to construct one's entire theory and would hence score highly under the razor. 
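The penalized-likelihood approximations mentioned above, such as AIC and BIC, can be illustrated with a toy curve-fitting problem. The sketch below is illustrative rather than prescriptive: the data, the Gaussian-error assumption, and the polynomial models are invented, and the parameter count deliberately ignores the noise variance.

```python
# Hedged sketch: comparing polynomial models of increasing complexity with AIC
# and BIC. Data are synthetic; numpy is assumed to be available.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + rng.normal(scale=0.3, size=x.size)       # truth: a line plus noise

def aic_bic(x, y, degree):
    """Least-squares polynomial fit; returns (AIC, BIC) under Gaussian errors."""
    k = degree + 1                                      # free parameters (coefficients)
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    n = y.size
    sigma2 = np.mean(resid ** 2)                        # maximum-likelihood variance
    log_lik = -0.5 * n * (np.log(2.0 * np.pi * sigma2) + 1.0)
    return 2 * k - 2 * log_lik, k * np.log(n) - 2 * log_lik

for degree in (1, 3, 6):
    aic, bic = aic_bic(x, y, degree)
    print(degree, round(aic, 1), round(bic, 1))
# Higher-degree polynomials fit the sample slightly better, but the complexity
# penalty typically leaves the simple degree-1 model with the lowest AIC and BIC.
```

Whichever criterion is used, the complexity penalty still depends on how models and their parameters are described, which is the same choice-of-description-language problem raised by the selection of a reference Turing machine.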
This has led to two opposing camps: one that believes Occam's razor is objective, and one that believes it is subjective. The minimum instruction set of a universal Turing machine requires approximately the same length description across different formulations, and is small compared to the Kolmogorov complexity of most practical theories. Marcus Hutter has used this consistency to define a "natural" Turing machine of small size as the proper basis for excluding arbitrarily complex instruction sets in the formulation of razors. [ 74 ] Describing the program for the universal program as the "hypothesis", and the representation of the evidence as program data, it has been formally proven under Zermelo–Fraenkel set theory that "the sum of the log universal probability of the model plus the log of the probability of the data given the model should be minimized." [ 75 ] Interpreting this as minimising the total length of a two-part message encoding model followed by data given model gives us the minimum message length (MML) principle. [ 71 ] [ 72 ] One possible conclusion from mixing the concepts of Kolmogorov complexity and Occam's razor is that an ideal data compressor would also be a scientific explanation/formulation generator. Some attempts have been made to re-derive known laws from considerations of simplicity or compressibility. [ 24 ] [ 76 ] According to Jürgen Schmidhuber , the appropriate mathematical theory of Occam's razor already exists, namely, Solomonoff's theory of optimal inductive inference [ 77 ] and its extensions. [ 78 ] See discussions in David L. Dowe's "Foreword re C. S. Wallace" [ 79 ] for the subtle distinctions between the algorithmic probability work of Solomonoff and the MML work of Chris Wallace , and see Dowe's "MML, hybrid Bayesian network graphical models, statistical consistency, invariance and uniqueness" [ 80 ] both for such discussions and for (in section 4) discussions of MML and Occam's razor. For a specific example of MML as Occam's razor in the problem of decision tree induction, see Dowe and Needham's "Message Length as an Effective Ockham's Razor in Decision Tree Induction". [ 81 ] The no free lunch (NFL) theorems for inductive inference prove that Occam's razor must rely on ultimately arbitrary assumptions concerning the prior probability distribution found in our world. [ 82 ] Specifically, suppose one is given two inductive inference algorithms, A and B, where A is a Bayesian procedure based on the choice of some prior distribution motivated by Occam's razor (e.g., the prior might favor hypotheses with smaller Kolmogorov complexity ). Suppose that B is the anti-Bayes procedure, which calculates what the Bayesian algorithm A based on Occam's razor will predict – and then predicts the exact opposite. Then there are just as many actual priors (including those different from the Occam's razor prior assumed by A) in which algorithm B outperforms A as priors in which the procedure A based on Occam's razor comes out on top. In particular, the NFL theorems show that the "Occam factors" Bayesian argument for Occam's razor must make ultimately arbitrary modeling assumptions. [ 83 ] In software development, the rule of least power argues the correct programming language to use is the one that is simplest while also solving the targeted software problem. In that form the rule is often credited to Tim Berners-Lee since it appeared in his design guidelines for the original Hypertext Transfer Protocol . 
[ 84 ] Complexity in this context is measured either by placing a language into the Chomsky hierarchy or by listing idiomatic features of the language and comparing them according to some agreed scale of difficulty between idioms. Many languages once thought to be of lower complexity have evolved or later been discovered to be more complex than originally intended; in practice, therefore, this rule is applied to how easily a programmer can obtain the power of the language, rather than to the language's precise theoretical limits. Researchers have found that deep neural networks (DNNs) prefer simpler mathematical functions while learning. This simplicity bias helps DNNs avoid overfitting, a scenario in which a model with too many parameters fits the statistical noise in its training data. [ 85 ] Occam's razor is not an embargo against the positing of any kind of entity, or a recommendation of the simplest theory come what may. [ a ] Occam's razor is used to adjudicate between theories that have already passed "theoretical scrutiny" tests and are equally well-supported by evidence. [ b ] Furthermore, it may be used to prioritize empirical testing between two equally plausible but unequally testable hypotheses; thereby minimizing costs and waste while increasing the chance of falsifying the simpler-to-test hypothesis. [ citation needed ] Another contentious aspect of the razor is that a theory can become more complex in terms of its structure (or syntax ), while its ontology (or semantics ) becomes simpler, or vice versa. [ c ] Quine, in a discussion on definition, referred to these two perspectives as "economy of practical expression" and "economy in grammar and vocabulary", respectively. [ 87 ] Galileo Galilei lampooned the misuse of Occam's razor in his Dialogue . The principle is represented in the dialogue by Simplicio. The telling point that Galileo presented ironically was that if one really wanted to start from a small number of entities, one could always consider the letters of the alphabet as the fundamental entities, since one could construct the whole of human knowledge out of them. Instances of using Occam's razor to justify belief in less complex theories have been criticized as using the razor inappropriately. For instance, Francis Crick stated that "While Occam's razor is a useful tool in the physical sciences, it can be a very dangerous implement in biology. It is thus very rash to use simplicity and elegance as a guide in biological research." [ 88 ] Occam's razor has met some opposition from people who consider it too extreme or rash. Walter Chatton ( c. 1290–1343 ) was a contemporary of William of Ockham who took exception to Occam's razor and Ockham's use of it. In response, he devised his own anti-razor : "If three things are not enough to verify an affirmative proposition about things, a fourth must be added and so on." Although several philosophers have formulated similar anti-razors since Chatton's time, no single anti-razor has persisted as notably as Chatton's, although a possible exception is the Late Renaissance Italian motto of unknown attribution Se non è vero, è ben trovato ("Even if it is not true, it is well conceived"), used of a particularly artful explanation. Anti-razors have also been created by Gottfried Wilhelm Leibniz (1646–1716), Immanuel Kant (1724–1804), and Karl Menger (1902–1985). 
Leibniz's version took the form of a principle of plenitude , as Arthur Lovejoy has called it: the idea being that God created the most varied and populous of possible worlds. Kant felt a need to moderate the effects of Occam's razor and thus created his own counter-razor: "The variety of beings should not rashly be diminished." [ 89 ] Karl Menger found mathematicians to be too parsimonious with regard to variables so he formulated his Law Against Miserliness, which took one of two forms: "Entities must not be reduced to the point of inadequacy" and "It is vain to do with fewer what requires more." A less serious but even more extremist anti-razor is 'Pataphysics , the "science of imaginary solutions" developed by Alfred Jarry (1873–1907). Perhaps the ultimate in anti-reductionism, "'Pataphysics seeks no less than to view each event in the universe as completely unique, subject to no laws but its own." Variations on this theme were subsequently explored by the Argentine writer Jorge Luis Borges in his story/mock-essay " Tlön, Uqbar, Orbis Tertius ". Physicist R. V. Jones contrived Crabtree's Bludgeon, which states that "[n]o set of mutually inconsistent observations can exist for which some human intellect cannot conceive a coherent explanation, however complicated." [ 90 ] Recently, American physicist Igor Mazin argued that because high-profile physics journals prefer publications offering exotic and unusual interpretations, the Occam's razor principle is being replaced by an "Inverse Occam's razor", implying that the simplest possible explanation is usually rejected. [ 91 ] Since 2012 [update] , The Skeptic magazine annually awards the Ockham Awards, or simply the Ockhams, named after Occam's razor, at QED . [ 92 ] The Ockhams were introduced by editor-in-chief Deborah Hyde to "recognise the effort and time that have gone into the community's favourite skeptical blogs, skeptical podcasts, skeptical campaigns and outstanding contributors to the skeptical cause." [ 93 ] The trophies , designed by Neil Davies and Karl Derrick, carry the upper text " Ockham's " and the lower text " The Skeptic. Shaving away unnecessary assumptions since 1285. " Between the texts, there is an image of a double-edged safety razorblade , and both lower corners feature an image of William of Ockham's face. [ 93 ]
https://en.wikipedia.org/wiki/Occam's_razor
An occultation is an event that occurs when one object is hidden from the observer by another object that passes between them. The term is often used in astronomy , but can also refer to any situation in which an object in the foreground blocks from view (occults) an object in the background. In this general sense, occultation applies to the visual scene observed from low-flying aircraft (or computer-generated imagery ) when foreground objects obscure distant objects dynamically, as the scene changes over time. If the closer body does not entirely conceal the farther one, the event is called a transit . Both transit and occultation may be referred to generally as occlusion ; and if a shadow is cast onto the observer, it is called an eclipse . The symbol for an occultation, and especially a solar eclipse , is (U+1F775 🝵). [ not verified in body ] The term occultation is most frequently used to describe lunar occultations , those relatively frequent occasions when the Moon passes in front of a star during the course of its orbital motion around the Earth. Since the Moon, with an angular speed with respect to the stars of 0.55 arcsec /s or 2.7 μrad/s, has a very thin atmosphere and stars have an angular diameter of at most 0.057 arcseconds or 0.28 μrad, a star that is occulted by the Moon will disappear or reappear in 0.1 seconds or less on the Moon's edge, or limb. Events that take place on the Moon's dark limb are of particular interest to observers, because the lack of glare allows easier observation and timing. The Moon's orbit is inclined slightly with respect to the ecliptic (see orbit of the Moon ) meaning any star with an ecliptic latitude between –6.6 and +6.6 degrees may be occulted by it. [ 1 ] Three first magnitude stars appear well within that band – Regulus , Spica , and Antares – meaning they may be occulted by the Moon or by planets. [ 2 ] Occultations of Aldebaran are in this epoch only possible by the Moon, because the planets pass Aldebaran to the north. Neither planetary nor lunar occultations of Pollux are currently possible, however several thousand years ago lunar occultations were possible. Some notably close deep-sky objects , such as the Pleiades , can be occulted by the Moon. Within a few kilometres of the edge of an occultation's predicted path, referred to as its northern or southern limit, an observer may see the star intermittently disappearing and reappearing as the irregular limb of the Moon moves past the star, creating what is known as a grazing lunar occultation . From an observational and scientific standpoint, these "grazes" are the most dynamic and interesting of lunar occultations. The accurate timing of lunar occultations is performed regularly by (primarily amateur) astronomers. Lunar occultations timed to an accuracy of a few tenths of a second have various scientific uses, particularly in refining our knowledge of lunar topography . Photoelectric analysis of lunar occultations have also discovered some stars to be very close visual or spectroscopic binaries . Some angular diameters of stars have been measured by timing of lunar occultations, which is useful for determining effective temperatures of those stars. Early radio astronomers found occultations of radio sources by the Moon valuable for determining their exact positions, because the long wavelength of radio waves limited the resolution available through direct observation. 
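The quoted disappearance time follows directly from the two angular figures above: the time for the Moon's limb to cross a stellar disc is roughly the star's angular diameter divided by the Moon's angular speed, t ≈ 0.057 arcsec ÷ 0.55 arcsec/s ≈ 0.1 s. For the great majority of stars, whose angular diameters are far smaller, the disappearance is effectively instantaneous.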
Occultations of radio sources by the Moon were crucial for the unambiguous identification of the radio source 3C 273 with the optical quasar and its jet, [ 3 ] and a fundamental prerequisite for Maarten Schmidt 's discovery of the cosmological nature of quasars . Several times during the year, the Moon can be seen occulting a planet. [ 4 ] Since planets, unlike stars, have significant angular sizes, lunar occultations of planets create a narrow zone on Earth from which a partial occultation of the planet will occur. An observer located within that narrow zone could observe the planet's disk partly blocked by the slowly moving Moon. The same mechanism can be seen with the Sun, where observers on Earth view it as a solar eclipse ; a total solar eclipse is essentially the Moon occulting the Sun. Stars may also be occulted by planets. Occultations of bright stars are rare. In 1959, Venus occulted Regulus , and the next occultation of a bright star (also Regulus by Venus) will be in 2044. [ 2 ] Uranus 's rings were first discovered when that planet occulted a star in 1977. On 3 July 1989, Saturn passed in front of the 5th magnitude star 28 Sagittarii . Pluto occulted stars in 1988, 2002, and 2006, allowing its tenuous atmosphere to be studied via atmospheric limb sounding . In rare cases, one planet can pass in front of another. [ 5 ] If the nearer planet appears larger than the more distant one, the event is called a mutual planetary occultation. The last mutual occultation or transit occurred on 3 January 1818 and the next will occur on 22 November 2065, in both cases involving the same two planets— Venus and Jupiter . [ 6 ] Jupiter only rarely occults Saturn; this is one of the rarest events known, [ 7 ] with the next occurrence on February 10, 7541. That event will be visible worldwide, since the pair will be positioned almost in opposition to the Sun, on the border between the constellations Orion and Taurus . From some areas the occultation itself cannot be seen, but even through small telescopes both gas giants will appear in the same field of view. The last such occultation occurred in 6857 BCE. [ 8 ] A further class of occultations occurs when a small Solar System body or dwarf planet passes in front of a star, temporarily blocking its light as seen from Earth. [ 9 ] These occultations are useful for measuring the size and position of the occulting body much more precisely than can be done by other means. A cross-sectional profile of the shape of a body can even be determined if a number of observers at different nearby locations observe the occultation. Occultations have been used to calculate the diameters of trans-Neptunian objects such as 2002 TX 300 , Ixion and Varuna . Software for coordinating observations is available for download at http://www.occultwatcher.net/ . In addition, mutual occultation and eclipsing events can occur between a primary and its satellite . A large number of moons have been discovered by analyzing the photometric light curves of small bodies and detecting a second, superimposed brightness variation, from which an orbital period for the satellite (secondary) and a secondary-to-primary diameter ratio (for the binary system ) can often be derived. The Moon or another celestial body can occult multiple celestial bodies at the same time. Because of its relatively large angular diameter, the Moon at any given time occults an indeterminate number of stars and galaxies. However, the Moon occulting (obscuring) two bright objects (e.g. 
two planets or a bright star and a planet) simultaneously is extremely rare and can be seen only from a small part of the world: the last such event was on 23 April 1998, when the Moon occulted Venus and Jupiter for observers on Ascension Island . The Big Occulting Steerable Satellite (BOSS) was a proposed satellite that would work in conjunction with a telescope to detect planets around distant stars. The satellite would consist of a large, very lightweight sheet and a set of maneuvering thrusters and navigation systems. It would maneuver to a position along the line of sight between the telescope and a nearby star, thereby blocking the radiation from the star and permitting any orbiting planets to be observed. [ 22 ] The proposed satellite would have dimensions of 70 by 70 metres (230 ft × 230 ft), a mass of about 600 kg, and would maneuver by means of an ion drive engine combined with using the sheet as a light sail. Positioned at a distance of 100,000 km from the telescope, it would block more than 99.998% of the starlight. There are two possible configurations of this satellite. The first would work with a space telescope , most likely positioned near the Earth 's L 2 Lagrangian point . The second would place the satellite in a highly elliptical orbit about the Earth, and work in conjunction with a ground telescope. At the apogee of the orbit, the satellite would remain relatively stationary with respect to the ground, allowing longer exposure times. An updated version of this design is called the Starshade , which uses a sunflower -shaped coronagraph disc. A comparable proposal was also made for a satellite to occult bright X-ray sources, called an X-ray Occulting Steerable Satellite or XOSS. [ 23 ]
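As a rough consistency check on the geometry described above (ignoring diffraction), a 70 m sheet viewed from 100,000 km subtends roughly 70 m ÷ 1×10^8 m = 7×10^-7 rad, or about 0.14 arcseconds, a few times larger than even the largest stellar angular diameters (about 0.057 arcseconds), so a correctly positioned sheet can cover a target star's disc entirely.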
https://en.wikipedia.org/wiki/Occultation
In macroecology and community ecology , an occupancy frequency distribution ( OFD ) is the distribution of the numbers of species occupying different numbers of areas. [ 1 ] It was first reported in 1918 by the Danish botanist Christen C. Raunkiær in his study on plant communities. The OFD is also known as the species-range size distribution in the literature. [ 2 ] [ 3 ] A typical form of OFD is a bimodal distribution , indicating that species in a community are either rare or common, known as Raunkiaer's law of distribution of frequencies. [ 4 ] That is, with each species assigned to one of five 20%-wide occupancy classes, Raunkiaer's law predicts bimodal distributions within homogeneous plant formations, with modes in the first (0-20%) and last (81-100%) classes. [ 4 ] Although Raunkiaer's law has long been discounted as an index of plant community homogeneity, [ 5 ] the method of using occupancy classes to construct OFDs is still commonly used for both plant and animal assemblages. Henry Gleason commented on this law in a 1929 Ecology article: "In conclusion we may say that Raunkiaer's law is merely an expression of the fact that in any association there are more species with few individuals than with many, that the law is most apparent when quadrats are chosen of the most serviceable size to show frequency, and that it is obscured or lost if the quadrats are either too large or too small." [ 6 ] A variety of OFD shapes has been reported in the literature. Tokeshi reported that approximately 46% of observations have a right-skewed unimodal shape , 27% bimodal, and 27% uniform . [ 7 ] A more recent study found bimodal OFDs in about 24% of 289 real communities. [ 8 ] As pointed out by Gleason, [ 6 ] the variety of OFD shapes can be explained, to a large degree, by the size of the sampling interval. For instance, McGeoch and Gaston (2002) [ 1 ] show that the number of satellite (rare) species declines as the sampling grain increases, while the number of core (common) species increases, showing a tendency from a bimodal OFD towards a right-skewed unimodal distribution. This is because species range , measured as occupancy, is strongly affected by the spatial scale and its aggregation structure, [ 9 ] often known as the scaling pattern of occupancy . Such scale dependence of occupancy has a profound effect on other macroecological patterns, such as the occupancy-abundance relationship . Other factors proposed to affect the shape of the OFD include the degree of habitat heterogeneity, [ 10 ] [ 11 ] species specificity, [ 12 ] landscape productivity, [ 13 ] position in the geographic range, [ 14 ] species dispersal ability [ 15 ] and extinction–colonization dynamics. [ 16 ] Three basic models have been proposed to explain the bimodality found in occupancy frequency distributions. Random sampling of individuals from either lognormal or log-series rank abundance distributions (where the random choice of an individual from a given species is proportional to its frequency) may produce bimodal occupancy distributions. [ 4 ] [ 17 ] This model is not particularly sensitive or informative as to the mechanisms generating bimodality in occupancy frequency distributions, because the mechanisms generating the lognormal species abundance distribution are still under heavy debate. Bimodality may also be generated by colonization-extinction metapopulation dynamics associated with a strong rescue effect . 
[ 16 ] [ 18 ] This model is appropriate for explaining the range structure of a community that is influenced by metapopulation processes, such as dispersal and local extinction . [ 19 ] However, it is not robust, because the shape of the occupancy frequency distribution generated by this model is highly sensitive to species immigration and extinction parameters. [ 7 ] [ 20 ] The metapopulation model also does not explain the scale dependence of the occupancy frequency distribution. The third model that describes bimodality in the occupancy frequency distribution is based on the scaling pattern of occupancy under a self-similarity assumption about species distributions (called the occupancy probability transition [OPT] model). [ 21 ] [ 22 ] The OPT model is based on Harte et al.'s bisection scheme [ 23 ] (although not on their probability rule) and the recursion probability of occupancy at different scales. The OPT model has been shown to support two empirical observations. [ 21 ] It demonstrates that the sample grain of a study, sampling adequacy, and the distribution of species saturation coefficients (a measure of the fractal dimensionality of a species distribution) in a community are together largely able to explain the patterns commonly found in empirical occupancy distributions. Hui and McGeoch (2007) further show that the self-similarity in species distributions breaks down according to a power relationship with spatial scale, and therefore adopt a power-scaling assumption for modeling species occupancy distributions. [ 22 ] The bimodality that is common in the occupancy frequency distributions of species communities is thereby confirmed to be a result of certain mathematical and statistical properties of the probability distribution of occupancy. The results thus demonstrate that the use of the bisection method in combination with a power-scaling assumption is more appropriate for modeling species distributions than the use of a self-similarity assumption, particularly at fine scales. This model also bears on the Harte–Maddux debate: Harte et al. [ 23 ] demonstrated that the power-law form of the species–area relationship may be derived from a bisected, self-similar landscape and a community-level probability rule. [ 24 ] However, Maddux [ 25 ] [ 26 ] showed that this self-similarity model generates biologically unrealistic predictions. Hui and McGeoch (2008) resolve the Harte–Maddux debate by demonstrating that the problems identified by Maddux result from assuming that the probability of occurrence of a species at one scale is independent of its probability of occurrence at the next, and further illustrate the importance of considering patterns of species co-occurrence, and the way in which species occupancy patterns change with scale, when modeling species distributions. [ 27 ]
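The Raunkiaer-style occupancy classes described at the start of this entry can be tabulated directly from presence/absence data. The sketch below is illustrative only; the species, sites, and occupancy values are invented, and real studies would use far larger samples.

```python
# Minimal sketch: building an occupancy frequency distribution from a
# species-by-site presence/absence table, using five 20%-wide occupancy classes.
# The tiny data set is invented for illustration.

presence = {                     # species -> presence (1) / absence (0) at 10 sites
    "sp1": [1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    "sp2": [1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
    "sp3": [1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
    "sp4": [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    "sp5": [1, 0, 0, 0, 1, 0, 0, 0, 0, 0],
}

n_sites = 10
classes = [0, 0, 0, 0, 0]        # occupancy classes 1-20%, 21-40%, ..., 81-100%
for sites in presence.values():
    occupancy = 100.0 * sum(sites) / n_sites
    if occupancy == 0:
        continue                 # species absent everywhere are not counted
    classes[min(int((occupancy - 1) // 20), 4)] += 1

print(classes)                   # [3, 0, 0, 0, 2]: most species are either rare or
                                 # common, the bimodal pattern of Raunkiaer's law
```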
https://en.wikipedia.org/wiki/Occupancy_frequency_distribution
In ecology , the occupancy–abundance ( O–A ) relationship is the relationship between the abundance of species and the size of their ranges within a region. This relationship is perhaps one of the most well-documented relationships in macroecology , and applies both intra- and interspecifically (within and among species). In most cases, the O–A relationship is a positive relationship. [ 1 ] Although an O–A relationship would be expected, given that a species colonizing a region must pass through the origin (zero abundance, zero occupancy) and could reach some theoretical maximum abundance and distribution (that is, occupancy and abundance can be expected to co-vary), the relationship described here is somewhat more substantial, in that observed changes in range are associated with greater-than-proportional changes in abundance. Although this relationship appears to be pervasive (e.g. Gaston 1996 [ 1 ] and references therein), and has important implications for the conservation of endangered species , the mechanism(s) underlying it remain poorly understood. [ 2 ] Range – means the total area occupied by the species of interest in the region under study (see below 'Measures of species geographic range') Abundance – means the average density of the species of interest across all occupied patches (i.e. average abundance does not include the area of unoccupied patches) Intraspecific occupancy–abundance relationship – means the relationship between abundance and range size within a single species generated using time series data Interspecific occupancy–abundance relationship – means the relationship between relative abundance and range size of an assemblage of closely related species at a specific point in time (or averaged across a short time period). The interspecific O-A relationship may arise from the combination of the intraspecific O–A relationships within the region [ 3 ] In the discussion of relationships with range size, it is important to define which range is under investigation. Gaston [ 4 ] (following Udvardy [ 5 ] ) describes the potential range of a species as the theoretical maximum range that a species could occupy should all barriers to dispersal be removed, while the realized range is the portion of the potential range that the species currently occupies. The realized range can be further subdivided, for example, into the breeding and non-reproductive ranges. Explicit consideration of a particular portion of the realized range in analysis of range size can significantly influence the results. For example, many seabirds forage over vast areas of ocean, but breed only on small islands, thus the breeding range is significantly smaller than the non-reproductive range. However, in many terrestrial bird species, the pattern is reversed, with the winter (non-reproductive) range somewhat smaller than the breeding range. [ 4 ] The definition of range is further confounded by how the total realized range size is measured. There are two types of measurements commonly in use, the extent of occurrence ( EOO ) (For definition: see ALA and Fig.1 [ 6 ] ) and the area of occupancy (AOO) (see also the Scaling pattern of occupancy , and for a definition, see Fig. 2 and ALA [ 6 ] ). The EOO can best be thought of as the minimum convex polygon encompassing all known normal occurrences of a particular species and is the measure of range most commonly found in field guides. The AOO is the subset of the EOO where the species actually occurs. 
In essence, the AOO acknowledges that there are holes in the distribution of a species within its EOO, and attempts to correct for these vacancies. A common way to describe the AOO of a species is to divide the study region into a matrix of cells and record if the species is present in or absent from each cell. For example, in describing O–A relationships for common British birds, Quinn et al. [ 7 ] found that the occupancy at the finest resolution (10 x 10 km squares) best explained abundance patterns. In a similar manner, Zuckerberg et al. [ 8 ] used Breeding Bird Atlas data measured on cells 5 × 5 km to describe breeding bird occupancy in New York State. IUCN typically uses a cell size of 2 × 2 km in calculating AOO. [ 6 ] In much of macroecology , the use of EOO as a measure of range size may be appropriate; however, AOO is a more appropriate measure when evaluating O–A relationships. In macroecological investigations that are primarily biogeographical in nature, the variables of interest can be expected to vary most from one extent of occurrence to the opposite, and less so through discontinuities contained within the total EOO. However, when investigating O-A relationships, the area occupied by a species is the variable of interest, and the inclusion of discontinuities within the EOO could significantly influence results. In the extreme case where occupied habitats are distributed at random throughout the EOO, a relationship between abundance and range size (EOO) would not be expected. [ 9 ] Because O–A relationships have strong conservation implications, Gaston and Fuller [ 10 ] have argued that clear distinctions need to be made as to the purpose of the EOO and AOO as measures of range size, and that in association with O-A relationships the AOO is the more useful measure of species abundance. No matter which concept we use in studies, it is essential to realize that occupancy is only a reflection of species distribution under a certain spatial scale. Occupancy, as well as other measures of species distributions (e.g. over-dispersion and spatial autocorrelation), is scale-dependent. [ 11 ] As such, studies on the comparison of O–A relationships should be aware of the issue of scale sensitivity (compare text of Fig 1 & Fig.2). Furthermore, measuring species range, whether it is measured by the convex hull or occupancy (occurrence), is part of the percolation process and can be explained by the percolation theory , [ 12 ] A suite of possible explanations have been proposed to describe why positive intra- and interspecific O–A relationships are observed. Following Gaston et al. 1997 [ 13 ] Gaston and Blackburn 2000 [ 14 ] Gaston et al. 2000, [ 2 ] and Gaston 2003 [ 4 ] these reasons include: One way to deal with observed O–A relationships is, in essence, to deny their existence. An argument against the existence of O–A relationships is that they are merely sampling artefacts. Given that rare species are less likely to be sampled, at a given sampling effort, one can expect to detect rare species occupying fewer sites than common ones, even if the underlying occupancy distribution is the same. However, this explanation makes only one prediction, that is, that with sufficient sampling, no relationship will be found to exist. [ 13 ] This prediction is readily falsified, given that exceptionally well studied taxa such as breeding birds (e.g. Zuckerberg et al. 2009, Gaston [ 2 ] ) show well documented O-A relationships. 
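The grid-based area-of-occupancy measure described above is straightforward to compute from occurrence records. The following is a minimal sketch rather than a prescribed protocol: the coordinates are invented, a projected grid in metres is assumed, and the IUCN-style 2 km × 2 km cell size mentioned earlier is used.

```python
# Hedged sketch: estimating area of occupancy (AOO) by counting occupied grid
# cells. Coordinates (in metres on a projected grid) are invented.

CELL = 2_000                                   # cell edge in metres (2 km)

occurrences = [(1200, 300), (1900, 450), (5100, 4300), (9050, 880), (9400, 1100)]

occupied_cells = {(int(x // CELL), int(y // CELL)) for x, y in occurrences}
aoo_km2 = len(occupied_cells) * (CELL / 1000) ** 2

print(len(occupied_cells), aoo_km2)            # 3 occupied cells -> AOO of 12 km^2
```

The extent of occurrence would instead be measured as the area of the minimum convex polygon enclosing the same occurrence points, which is why the two measures can differ markedly for patchily distributed species.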
A second statistical explanation involves the use of statistical distributions such as the Poisson or negative-binomial . This explanation suggests that due to the underlying distribution of aggregation and density, and observed O–A relationship would be expected. However, Gaston et al. [ 13 ] question whether this is a suitably mechanistic explanation. Indeed, Gaston et al. [ 2 ] suggest that "to argue that spatial aggregation explains abundance-occupancy relationships is simply to supplant one poorly understood pattern with another". The phylogenetic non-independence hypothesis is a third statistical explanation, specific to observed interspecific O–A relationships. This hypothesis suggests that, as closely related species are not truly independent their inclusion into analyses artificially inflates the degrees of freedom available for testing the relationship. However Gaston et al. [ 2 ] cite several studies documenting significant O–A relationships in spite of controlling for phylogenetic non-independence. Most evaluations of O–A relationships do not evaluate species over their entire (global) range, but document abundance and occupancy patterns within a specific region. [ 4 ] It is believed that species decline in abundance and become more patchily distributed towards the margin of their range. If this is true, then it can be expected that as a species expands or contracts its range within the region of interest, it will more or less closely resemble populations at the core of its range, leading to a positive intraspecific O–A relationship. In the same manner, an assemblage of species within the study region can be expected to contain some species near the core and some near the periphery of their ranges, leading to a positive interspecific O–A relationship. Although this explanation may contribute to the understanding of O–A relationships where partial ranges are considered, it cannot explain relationships documented for entire geographic ranges. [ 4 ] Brown [ 15 ] suggested that species with a broad ecological niche would, as a consequence, be able to obtain higher local densities, and a wider distribution than species with a narrow niche breadth. This relationship would generate a positive O-A relationship. In a similar manner, a species' niche position, [ 16 ] (niche position represents the absolute distance between the mean environmental conditions where a species occurs and mean environmental conditions across a region) could influence its local abundance and range size, if species with lower niche position are more able to use resources typical of a region. Although intuitive, Gaston et al. [ 13 ] and Gaston and Blackburn [ 14 ] note that, due to the n -dimensional nature of the niche, this hypothesis is, in effect, untestable. Many species exhibit density-dependent dispersal and habitat selection. [ 17 ] [ 18 ] [ 19 ] For species exhibiting this pattern, dispersal into what would otherwise be sub-optimal habitats can occur when local abundances are high in high quality habitats (see Source–sink dynamics ), thus increasing the size of the species geographic range. An initial argument against this hypothesis is that when a species colonizes formerly empty habitats, the average abundance of that species across all occupied habitats drops, negating an O–A relationship. However, all species will occur at low densities in some occupied habitats, while only the abundant species will be able to reach high densities in some of their occupied habitats. 
Thus it is expected that both common and uncommon species will have similar minimum densities in occupied habitats, but that it is the maximum densities attained by common species in some habitats that drive the positive relationship between mean densities and AOO. If density-dependent habitat selection were to determine positive O–A relationships, the distribution of a species would follow an Ideal Free Distribution (IFD). Gaston et al. [ 2 ] cite Tyler and Hargrove, [ 20 ] who examined the IFD using simulation models and found several instances (e.g. when resources had a fractal distribution, or when the scale of resource distribution poorly matched the organism's dispersal capabilities) where IFDs poorly described species distributions.

In a classical metapopulation model, habitat occurs in discrete patches, with a population in any one patch facing a substantial risk of extinction at any given time. Because population dynamics in individual patches are asynchronous, the system is maintained by dispersal between patches (e.g. dispersal from patches with high populations can 'rescue' populations near or at extinction in other patches). Freckleton et al. [ 21 ] have shown that, with a few assumptions (habitat patches of equal suitability, density-independent extinction, and restricted dispersal between patches), varying overall habitat suitability in a metapopulation can generate a positive intraspecific O–A relationship. However, there is currently debate regarding how many populations actually fit a classical metapopulation model. [ 22 ] Experiments using moss-dwelling microarthropods [ 23 ] showed that habitat fragmentation caused declines in both abundance and occupancy, and that the addition of habitat corridors arrested these declines, providing evidence that metapopulation dynamics (extinction and immigration) maintain the interspecific O–A relationship. However, Warren and Gaston [ 24 ] were able to detect a positive interspecific O–A relationship even in the absence of dispersal, indicating that a more general set of extinction and colonization processes (rather than metapopulation processes per se) may maintain the O–A relationship.

The vital rates of a species (in particular r , the intrinsic rate of increase; see Population dynamics ) interact with the habitat quality of an occupied patch to determine local density and, across multiple patches, can result in an O–A relationship. Holt et al. [ 25 ] modelled a system where dispersal between habitat patches could ensure that all suitable habitat patches were occupied, but where dispersal was sufficiently limited that immigration did not significantly affect the population size in occupied patches. In this system the population size within any given habitat patch was a function only of birth and death rates. By causing habitat quality to vary (increasing or decreasing birth and death rates), Holt et al. were able to generate a positive intraspecific O–A relationship. Holt et al.'s [ 25 ] model requires a large amount of data to test, even for intraspecific relationships (i.e. the vital rates of all populations through time). Freckleton et al. [ 9 ] use a version of the model proposed by Holt et al., but with varying habitat quality between patches, to evaluate parameters that could be observed in species' O–A data. Freckleton et al.
show that aggregation of individuals within sites and the skewness of population sizes should correlate with density and occupancy, depending on the specific arrangement of habitat quality, and demonstrate that these parameters vary in accordance with positive intra- and interspecific O–A relationships for common farmland birds in Britain.

Figure 2 (caption). Holt et al.'s [ 25 ] model under different Hcrit values: Figure 2a shows the effect of increasing the critical threshold for occupancy (Hcrit) on population size and AOO, and Figure 2b shows the effect of decreasing Hcrit. Because AOO and total abundance covary, an intraspecific occupancy–abundance relationship is expected in situations where habitat quality varies through time (more or less area lies above Hcrit); a numerical sketch of this mechanism is given at the end of this section.

Most of the different explanations that have been put forward to explain the regularities in species abundance and geographic distribution mentioned above similarly predict a positive distribution–abundance relationship. This makes it difficult to test the validity of each explanation. A key challenge is therefore to distinguish between the various mechanisms that have been proposed to underlie these near universal patterns. Niche dynamics and neutral dynamics represent two opposite views, and many explanations take up intermediate positions.

Neutral dynamics assume that species and habitats are equivalent and that patterns in species abundance and distribution arise from stochastic occurrences of birth, death, immigration, extinction and speciation. Modelling this type of dynamics can simulate many of the patterns in species abundance, including a positive occupancy–abundance relationship. This does not necessarily imply that niche differences among species are unimportant; being able to accurately model real-life patterns does not mean that the model assumptions also reflect the actual mechanisms underlying those patterns. In fact, occupancy–abundance relationships are generated across many species, without taking into account the identity of a species. Therefore, it may not be too surprising that neutral models can accurately describe these community properties.

Niche dynamics assume differences among species in their fundamental niche , which should give rise to patterns in the abundance and distribution of species (i.e. their realized niches). In this framework, the abundance and distribution of a single species, and hence the emergent patterns across multiple species, are driven by causal mechanisms operating at the level of that species. Therefore, examining how differences between individual species shape these patterns, rather than analyzing the pattern itself, may help in understanding them. By incorporating specific information on a species' diet, reproduction, dispersal and habitat specialisation, Verberk et al. [ 26 ] were able to explain the contribution of individual species to the overall relationship, and they showed that the main mechanisms in operation may be different for different species groups. Neutral dynamics may be relatively important in some cases, depending on the species, the environmental conditions and the spatial and temporal scale under consideration, whereas in other circumstances niche dynamics may dominate. Thus niche and neutral dynamics may be operating simultaneously, constituting different endpoints of the same continuum. Important implications of both the intra- and interspecific O–A relationships are discussed by Gaston et al. [ 2 ]
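The Hcrit mechanism summarized in Figure 2 can be illustrated with a minimal sketch. The uniform patch qualities, the linear density–quality response and the threshold value below are assumptions made purely for illustration, not part of Holt et al.'s or Freckleton et al.'s published models: when overall habitat quality shifts up or down through time, the area above the critical threshold (occupancy) and the mean density change in the same direction, producing a positive intraspecific occupancy–abundance relationship.

```python
import numpy as np

rng = np.random.default_rng(2)
n_patches = 1000
base_quality = rng.uniform(0.0, 1.0, n_patches)  # hypothetical fixed quality of each patch
h_crit = 0.5                                      # assumed critical quality below which a patch cannot support a population

def occupancy_and_abundance(shift):
    """Shift overall habitat quality and return (occupancy, mean abundance per patch)."""
    quality = base_quality + shift
    density = np.clip(quality - h_crit, 0.0, None)  # local density rises linearly above the threshold
    return (density > 0).mean(), density.mean()

for shift in (-0.2, 0.0, 0.2):
    occ, ab = occupancy_and_abundance(shift)
    print(f"quality shift {shift:+.1f}: occupancy = {occ:.2f}, mean abundance = {ab:.3f}")
```

As overall quality improves, both the fraction of patches above Hcrit and the mean density across patches increase together, which is the covariation the caption describes.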
https://en.wikipedia.org/wiki/Occupancy–abundance_relationship
Occupational exposure banding , also known as hazard banding , is a process intended to quickly and accurately assign chemicals into specific categories (bands), each corresponding to a range of exposure concentrations designed to protect worker health. These bands are assigned based on a chemical’s toxicological potency and the adverse health effects associated with exposure to the chemical. [ 1 ] The output of this process is an occupational exposure band ( OEB ). Occupational exposure banding has been used by the pharmaceutical sector and by some major chemical companies over the past several decades to establish exposure control limits or ranges for new or existing chemicals that do not have formal OELs. [ 2 ] Furthermore, occupational exposure banding has become an important component of the Hierarchy of Occupational Exposure Limits (OELs). [ 3 ] [ 4 ] The U.S. National Institute for Occupational Safety and Health (NIOSH) has developed a process that could be used to apply occupational exposure banding to a broader spectrum of occupational settings. [ 5 ] The NIOSH occupational exposure banding process uses available, but often limited, toxicological data to determine a potential range of chemical exposure levels that can be used as targets for exposure controls to reduce risk among workers. [ 6 ] An OEB is not meant to replace an OEL; rather, it serves as a starting point to inform risk management decisions. [ 7 ] Therefore, the OEB process should not be applied to a chemical with an existing OEL.

Occupational exposure limits (OELs) play a critical role in protecting workers from exposure to dangerous concentrations of hazardous material. [ 8 ] In the absence of an OEL, determining the controls needed to protect workers from chemical exposures can be challenging. [ 3 ] According to the U.S. Environmental Protection Agency , the Toxic Substances Control Act Chemical Substance Inventory as of 2014 contained over 85,000 chemicals that are commercially available, but a quantitative health-based OEL has been developed for only about 1,000 of these chemicals. [ 9 ] Furthermore, the rate at which new chemicals are being introduced into commerce significantly outpaces OEL development, creating a need for guidance on thousands of chemicals that lack reliable exposure limits. [ 6 ] [ 10 ] [ 11 ] The NIOSH occupational exposure banding process was created to provide a reliable approximation of a safe exposure level for potentially hazardous and unregulated chemicals in the workplace. [ 6 ] Occupational exposure banding uses limited chemical toxicity data to group chemicals into one of five occupational exposure bands. [ 7 ]

The NIOSH occupational exposure banding process uses a three-tiered approach. [ 1 ] Each tier of the process has different requirements for data sufficiency, which allows stakeholders to use the occupational exposure banding process in many different situations. Selection of the most appropriate tier for a specific banding situation depends on the quantity and quality of the available data and the training and expertise of the user. The process places chemicals into one of five bands, designated A through E. Each band is associated with a specific range of exposure concentrations. Band E represents the lowest range of exposure concentrations, while Band A represents the highest range. Assignment of a chemical to a band is based on both the potency of the chemical and the severity of the health effect.
Bands A and B include chemicals that have reversible health effects or that produce adverse effects only at high concentrations. Bands C, D, and E include chemicals with serious or irreversible effects and those that cause adverse effects at low concentrations. [ 1 ] Each band is associated with a target range of airborne concentrations. [ 7 ]

Tier 1, the qualitative tier, produces an occupational exposure band (OEB) assignment based on qualitative data from the Globally Harmonized System of Classification and Labeling of Chemicals (GHS); it involves assigning the OEB based on criteria aligned with specific GHS hazard codes and categories. These hazard codes are typically drawn from GESTIS , ECHA Annex VI, or safety data sheets . [ 7 ] The Tier 1 process can be performed by a health and safety generalist and takes only minutes to complete with the NIOSH OEB e-tool. The e-tool is free to use and can be accessed through the NIOSH website.

Tier 2, the semi-quantitative tier, produces an OEB assignment based on quantitative and qualitative data from secondary sources; it involves assigning the OEB on the basis of key findings from prescribed literature sources, including data from specific types of studies. Tier 2 focuses on nine toxicological endpoints. [ 7 ] The Tier 2 process can be performed by an occupational hygienist but requires some formal training. Tier 2 banding is also incorporated into the NIOSH OEB e-tool but can take hours instead of minutes to complete for a given chemical. However, the resulting band is considered more robust than a Tier 1 band because of the in-depth retrieval of published data. [ 7 ] NIOSH recommends that users complete at least the Tier 2 process to produce reliable OEBs.

Tier 3, the expert judgement tier, relies on expert judgement to produce a band based on the primary and secondary data available to the user. [ 6 ] This level of OEB requires the advanced knowledge and experience held by a toxicologist or veteran occupational hygienist. The Tier 3 process allows the professional to incorporate their own raw data in conjunction with data drawn from the published literature. [ 7 ]

Since unveiling the occupational exposure banding technique in 2017, NIOSH has sought feedback from its users and has evaluated the reliability of the tool, and the feedback has been overwhelmingly positive. Users have described Tier 1 as a helpful screening tool, Tier 2 as a basic assessment for a new chemical on the worksite, and Tier 3 as a personalized in-depth analysis. [ 12 ] During pilot testing, NIOSH evaluated the Tier 1 and Tier 2 protocols using chemicals with OELs and compared the banding results to those OELs. [ 13 ] [ 14 ] For more than 90% of these chemicals, the resulting Tier 1 and Tier 2 bands were found to be as stringent as or more stringent than the OELs. [ 7 ] This supports the confidence health and safety professionals can have in the OEB process when making risk management decisions for chemicals without OELs.

Although occupational exposure banding holds a great deal of promise for the occupational hygiene profession, there are potential limitations that should be considered. As with any analysis, the outcome of the NIOSH occupational exposure banding process – the OEB – depends upon the quantity and quality of the data used and the expertise of the individual using the process. [ 6 ] To maximize data quality, NIOSH has compiled a list of NIOSH-recommended sources which can provide data that can be used for banding. [ 15 ]
Furthermore, for some chemicals the amount of quality data may not be sufficient to derive an OEB. It is important to note that a lack of data does not indicate that the chemical is safe. Other risk management strategies, such as control banding , can then be applied. [ 16 ]

The NIOSH occupational exposure banding process guides a user through the evaluation and selection of critical health hazard information to select an OEB from among five categories of severity. For OEBs, the process uses only hazard-based data (e.g., studies on human health effects or toxicology studies) to identify an overall level of hazard potential and an associated airborne concentration range for chemicals with similar hazard profiles. While the output of this process can be used by informed occupational safety and health professionals to make risk management and exposure control decisions, the process does not supply such recommendations directly. [ 17 ] In contrast, control banding is a strategy that groups workplace risks into control categories or bands based on combinations of both hazard and exposure information. [ 11 ] [ 18 ] [ 19 ] Control banding combines hazard banding with exposure risk management to directly link hazards to specific control measures. [ 19 ] [ 20 ] [ 21 ] [ 22 ] Various toolkit models for control banding have been developed in the UK, Germany, and the Netherlands. [ 23 ] COSHH Essentials was the first widely adopted banding scheme. Other banding schemes are also available, such as Stoffenmanager, EMKG, and the ILO's International Chemical Control Toolkit. Evaluations of these and other control banding systems have yielded varying results. [ 24 ]

Occupational exposure banding has emerged as a helpful supplementary exposure assessment tool. [ 25 ] When conducting a workplace hazard assessment, occupational hygienists may find it useful to start with occupational exposure banding to identify potential hazards and exposure ranges before moving on to control banding. Together, these tools aid health and safety professionals in selecting appropriate risk mitigation strategies.
https://en.wikipedia.org/wiki/Occupational_exposure_banding
An occupational exposure limit is an upper limit on the acceptable concentration of a hazardous substance in workplace air for a particular material or class of materials. It is typically set by competent national authorities and enforced by legislation to protect occupational safety and health . It is an important tool in risk assessment and in the management of activities involving the handling of dangerous substances. [ 1 ] There are many dangerous substances for which there are no formal occupational exposure limits. In these cases, hazard banding or control banding strategies can be used to ensure safe handling.

Personal air sampling is routinely conducted on workers to determine whether exposures are acceptable or unacceptable. These samples are collected and analyzed using validated sampling and analytical methods, which are available from the OSHA Technical Manual and the NIOSH Manual of Analytical Methods. [ 2 ] Statistical tools are available to assess exposure monitoring data against OELs. These tools are typically free but require some prior knowledge of statistical concepts. A popular exposure data statistical tool called IHSTAT is available from the AIHA ( American Industrial Hygiene Association ); it is available in 14 languages, including English, and is free. [ 3 ] Methods for performing occupational exposure assessments can be found in the book A Strategy for Assessing and Managing Occupational Exposures, Third Edition , edited by Joselito S. Ignacio and William H. Bullock. [ 4 ]

With the World Health Organization and the International Labour Office having now quantified the global burden of disease from psychosocial occupational hazards, [ 5 ] identification of OELs for such hazards is increasingly becoming a focus of attention for occupational safety and health policy and practice.

The database "GESTIS - International limit values for chemical agents" [ 6 ] contains occupational limit values for hazardous substances compiled from 35 lists from 29 countries: various EU member states , Australia , Canada , Israel , Japan , New Zealand , Singapore , South Korea , Switzerland , China , Turkey , and the United States . The database comprises values for more than 2,000 substances. It was developed in cooperation with experts from various international occupational safety and health institutions and aims to give an overview of limit values in different countries. Since the limit values vary in their handling, level of protection, and legal relevance, the original lists of limit values and the explanations given there should be considered the primary sources. The chemical nomenclature also diverges between lists; synonyms can be found, for example, in the GESTIS Substance Database . The database is also available as an app for mobile devices running Android or iOS. [ 7 ] [ 8 ]
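As a minimal sketch of the kind of statistical comparison against an OEL mentioned above, exposure monitoring data are commonly treated as lognormally distributed and an estimated upper percentile is compared with the limit (the general kind of calculation performed by tools such as IHSTAT). The sample results, the limit value and the acceptability criterion below are hypothetical illustrations, not values taken from any standard.

```python
import math

# Hypothetical full-shift personal air samples for one similar-exposure group (mg/m^3)
samples = [0.12, 0.25, 0.08, 0.31, 0.18, 0.22]
oel = 0.5  # hypothetical 8-hour occupational exposure limit (mg/m^3)

logs = [math.log(x) for x in samples]
n = len(logs)
mean_log = sum(logs) / n
sd_log = math.sqrt(sum((v - mean_log) ** 2 for v in logs) / (n - 1))

gm = math.exp(mean_log)                     # geometric mean
gsd = math.exp(sd_log)                      # geometric standard deviation
p95 = math.exp(mean_log + 1.645 * sd_log)   # estimated 95th-percentile exposure

print(f"GM = {gm:.3f}, GSD = {gsd:.2f}, estimated 95th percentile = {p95:.3f}, OEL = {oel}")
print("profile acceptable under this criterion" if p95 < oel else "profile exceeds the criterion")
```

In practice, assessors also account for sampling uncertainty (for example, by using upper confidence limits on the exceedance fraction), which this sketch omits.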
https://en.wikipedia.org/wiki/Occupational_exposure_limit
Occupational safety and health ( OSH ) or occupational health and safety ( OHS ) is a multidisciplinary field concerned with the safety , health , and welfare of people at work (i.e., while performing duties required by one's occupation). OSH is related to the fields of occupational medicine and occupational hygiene [ a ] and aligns with workplace health promotion initiatives. OSH also protects members of the general public who may be affected by the occupational environment. [ 4 ]

According to the official estimates of the United Nations , the WHO / ILO Joint Estimate of the Work-related Burden of Disease and Injury , almost 2 million people die each year due to exposure to occupational risk factors. [ 5 ] Globally, more than 2.78 million people die annually as a result of workplace-related accidents or diseases, corresponding to one death every fifteen seconds. There are an additional 374 million non-fatal work-related injuries annually. It is estimated that the economic burden of occupation-related injury and death is nearly four per cent of the global gross domestic product each year. The human cost of this adversity is enormous. [ 6 ]

In common-law jurisdictions, employers have a common law duty (also called a duty of care) to take reasonable care of the safety of their employees. [ 7 ] Statute law may, in addition, impose other general duties, introduce specific duties, and create government bodies with powers to regulate occupational safety issues. Details of this vary from jurisdiction to jurisdiction. Prevention of workplace incidents and occupational diseases is addressed through the implementation of occupational safety and health programs at company level. [ 8 ]

The International Labour Organization (ILO) and the World Health Organization (WHO) share a common definition of occupational health. [ b ] It was first adopted by the Joint ILO/WHO Committee on Occupational Health at its first session in 1950: [ 10 ] [ 11 ] Occupational health should aim at the promotion and maintenance of the highest degree of physical, mental and social well-being of workers in all occupations; the prevention amongst workers of departures from health caused by their working conditions; the protection of workers in their employment from risks resulting from factors adverse to health; the placing and maintenance of the worker in an occupational environment adapted to his physiological and psychological capabilities and; to summarize: the adaptation of work to man and of each man to his job.

In 1995, a consensus statement was added: [ 10 ] [ 11 ] The main focus in occupational health is on three different objectives: (i) the maintenance and promotion of workers' health and working capacity; (ii) the improvement of working environment and work to become conducive to safety and health and (iii) development of work organizations and working cultures in a direction which supports health and safety at work and in doing so also promotes a positive social climate and smooth operation and may enhance productivity of the undertakings. The concept of working culture is intended in this context to mean a reflection of the essential value systems adopted by the undertaking concerned. Such a culture is reflected in practice in the managerial systems, personnel policy, principles for participation, training policies and quality management of the undertaking.
An alternative definition for occupational health given by the WHO is: "occupational health deals with all aspects of health and safety in the workplace and has a strong focus on primary prevention of hazards." [ 12 ] The expression "occupational health", as originally adopted by the WHO and the ILO, refers to both short- and long-term adverse health effects. In more recent times, the expressions "occupational safety and health" and "occupational health and safety" have come into use (and have also been adopted in works by the ILO), [ 13 ] based on the general understanding that occupational health refers to hazards associated with disease and long-term effects, while occupational safety hazards are those associated with work accidents causing injury and sudden severe conditions. [ 14 ]

Research and regulation of occupational safety and health are a relatively recent phenomenon. As labor movements arose in response to worker concerns in the wake of the industrial revolution, workers' safety and health entered consideration as a labor-related issue. [ 15 ] Written works on occupational diseases began to appear by the end of the 15th century, when demand for gold and silver was rising due to the increase in trade, and iron, copper, and lead were also in demand from the nascent firearms market. Deeper mining became common as a consequence. In 1473, Ulrich Ellenbog [ de ] , a German physician, wrote a short treatise On the Poisonous Wicked Fumes and Smokes , focused on coal , nitric acid , lead , and mercury fumes encountered by metal workers and goldsmiths. Paracelsus (1493–1541) wrote the first work on the diseases of mine and smelter workers, published posthumously, in which he gave accounts of miners' " lung sickness ". In 1556, Georgius Agricola 's (1494–1555) De re metallica , a treatise on metallurgy, described accidents and diseases prevalent among miners and recommended practices to prevent them. Like Paracelsus, Agricola mentioned the dust that "eats away the lungs, and implants consumption." [ 16 ]

The seeds of state intervention to correct social ills were sown during the reign of Elizabeth I by the Poor Laws , which originated in attempts to alleviate hardship arising from widespread poverty. While they were perhaps more to do with a need to contain unrest than morally motivated, they were significant in transferring responsibility for helping the needy from private hands to the state. [ 15 ]

In 1713, Bernardino Ramazzini (1633–1714), often described as the father of occupational medicine and a precursor to occupational health, published his De morbis artificum diatriba ( Dissertation on Workers' Diseases ), which outlined the health hazards of chemicals, dust, metals, repetitive or violent motions, odd postures, and other disease-causative agents encountered by workers in more than fifty occupations. It was the first broad-ranging presentation of occupational diseases. [ 16 ] [ 17 ] [ 18 ] Percivall Pott (1714–1788), an English surgeon, described cancer in chimney sweeps ( chimney sweeps' carcinoma ), the first recognition of an occupational cancer in history. [ 16 ]

The United Kingdom was the first nation to industrialize. Soon shocking evidence emerged of serious physical and moral harm suffered by children and young persons in the cotton textile mills , as a result of the exploitation of cheap labor in the factory system .
Responding to calls for remedial action from philanthropists and some of the more enlightened employers, in 1802 Sir Robert Peel , himself a mill owner, introduced a bill to Parliament with the aim of improving their conditions. This became the Health and Morals of Apprentices Act 1802 , generally believed to be the first attempt to regulate conditions of work in the United Kingdom. The act applied only to cotton textile mills and required employers to keep premises clean and healthy by twice-yearly washings with quicklime , to ensure there were sufficient windows to admit fresh air, and to supply " apprentices " (i.e., pauper and orphan employees) with "sufficient and suitable" clothing and accommodation for sleeping. [ 15 ] It was the first of the 19th-century Factory Acts . Charles Thackrah (1795–1833), another pioneer of occupational medicine, wrote a report on The State of Children Employed in Cotton Factories , which was sent to Parliament in 1818. Thackrah recognized issues of inequalities of health in the workplace, with manufacturing in towns causing higher mortality than agriculture. [ 16 ]

The Factory Act 1833 created a dedicated professional Factory Inspectorate . [ 19 ] The initial remit of the Inspectorate was to police restrictions on the working hours of children and young persons in the textile industry (introduced to prevent chronic overwork, identified as leading directly to ill-health and deformation, and indirectly to a high accident rate). [ 15 ] In 1840 a royal commission published its findings on working conditions in the mining industry, documenting the appallingly dangerous environment in which miners had to work and the high frequency of accidents. The commission sparked public outrage, which resulted in the Mines and Collieries Act 1842 . The act set up an inspectorate for mines and collieries which resulted in many prosecutions and safety improvements, and by 1850, inspectors were able to enter and inspect premises at their discretion. [ 20 ] At the urging of the Factory Inspectorate, a further Factories Act in 1844, which extended similar restrictions on working hours to women in the textile industry, introduced a requirement for machinery guarding (but only in the textile industry, and only in areas that might be accessed by women or children). [ 21 ] The latter act was the first to take a significant step toward improvement of workers' safety, as the former focused on health aspects alone. [ 15 ]

The first decennial British Registrar-General 's mortality report was issued in 1851. Deaths were categorized by social classes, with class I corresponding to professionals and executives and class V representing unskilled workers. The report showed that mortality rates increased with the class number. [ 16 ] Otto von Bismarck inaugurated the first social insurance legislation in 1883 and the first worker's compensation law in 1884 – the first of their kind in the Western world. Similar acts followed in other countries, partly in response to labor unrest. [ 16 ]

The United States was responsible for the first health program focusing on workplace conditions. This was the Marine Hospital Service , inaugurated in 1798 and providing care for merchant seamen, and it was the beginning of what would become the US Public Health Service (USPHS). [ 16 ] The first worker compensation acts in the United States were passed in New York in 1910 and in Washington and Wisconsin in 1911.
Later rulings included occupational diseases in the scope of the compensation, which was initially restricted to accidents. [ 16 ] In 1914 the USPHS set up the Office of Industrial Hygiene and Sanitation, the ancestor of the current National Institute for Occupational Safety and Health (NIOSH). In the early 20th century, workplace disasters were still common. For example, in 1911 a fire at the Triangle Shirtwaist Company in New York killed 146 workers, mostly women and immigrants. Most died trying to open exits that had been locked. Cancers among radium dial painters , " phossy jaw ", mercury and lead poisonings, silicosis, and other pneumoconioses were extremely common. [ 16 ] The enactment of the Federal Coal Mine Health and Safety Act of 1969 was quickly followed by the 1970 Occupational Safety and Health Act , which established the Occupational Safety and Health Administration (OSHA) and NIOSH in their current form. [ 16 ]

A wide array of workplace hazards can damage the health and safety of people at work. These include, but are not limited to, "chemicals, biological agents, physical factors, adverse ergonomic conditions, allergens, a complex network of safety risks," as well as a broad range of psychosocial risk factors. [ 23 ] Personal protective equipment can help protect against many of these hazards. [ 24 ] A landmark study conducted by the World Health Organization and the International Labour Organization found that exposure to long working hours is the occupational risk factor with the largest attributable burden of disease, with an estimated 745,000 fatalities from ischemic heart disease and stroke events in 2016. [ 25 ] This makes overwork the leading occupational health risk factor globally. [ 26 ]

Physical hazards affect many people in the workplace. Occupational hearing loss is the most common work-related injury in the United States, with 22 million workers exposed to hazardous occupational noise levels at work and an estimated $242 million spent annually on workers' compensation for hearing loss disability. [ 27 ] Falls are also a common cause of occupational injuries and fatalities, especially in construction, extraction, transportation, healthcare, and building cleaning and maintenance. [ 28 ] Machines have moving parts, sharp edges, hot surfaces and other hazards with the potential to crush, burn , cut , shear , stab or otherwise strike or wound workers if used unsafely. [ 29 ]

Biological hazards (biohazards) include infectious microorganisms such as viruses and bacteria, and toxins produced by those organisms, such as anthrax . Biohazards affect workers in many industries; influenza , for example, affects a broad population of workers. [ 30 ] Outdoor workers, including farmers, landscapers, and construction workers, risk exposure to numerous biohazards, including animal bites and stings, [ 31 ] [ 32 ] [ 33 ] urushiol from poisonous plants, [ 34 ] and diseases transmitted through animals such as the West Nile virus and Lyme disease. [ 35 ] [ 36 ] Health care workers, including veterinary health workers, risk exposure to blood-borne pathogens and various infectious diseases, [ 37 ] [ 38 ] especially those that are emerging . [ 39 ]

Dangerous chemicals can pose a chemical hazard in the workplace. There are many classifications of hazardous chemicals, including neurotoxins , immune agents, dermatologic agents, carcinogens, reproductive toxins, systemic toxins, asthmagens, pneumoconiotic agents, and sensitizers. [ 40 ]
Authorities such as regulatory agencies set occupational exposure limits to mitigate the risk of chemical hazards. [ 41 ] International investigations are ongoing into the health effects of mixtures of chemicals, given that toxins can interact synergistically instead of merely additively. For example, there is some evidence that certain chemicals are harmful at low levels when mixed with one or more other chemicals. Such synergistic effects may be particularly important in causing cancer. Additionally, some substances (such as heavy metals and organohalogens) can accumulate in the body over time, thereby enabling small incremental daily exposures to eventually add up to dangerous levels with little overt warning. [ 42 ]

Psychosocial hazards include risks to the mental and emotional well-being of workers, such as feelings of job insecurity, long work hours, and poor work-life balance. [ 43 ] Research has documented psychological abuse in the workplace. A study by Gary Namie on workplace emotional abuse found that 31% of women and 21% of men who reported workplace emotional abuse exhibited three key symptoms of post-traumatic stress disorder ( hypervigilance , intrusive imagery , and avoidance behaviors ). [ 44 ] Sexual harassment is another serious hazard that can be found in workplaces. [ 45 ]

Specific occupational safety and health risk factors vary depending on the specific sector and industry. Construction workers might be particularly at risk of falls, for instance, whereas fishermen might be particularly at risk of drowning . Similarly, psychosocial risks such as workplace violence are more pronounced for certain occupational groups, such as health care employees, police, correctional officers and teachers. [ 46 ]

Agriculture workers are often at risk of work-related injuries, lung disease, noise-induced hearing loss, and skin disease, as well as certain cancers related to chemical use or prolonged sun exposure. On industrialized farms , injuries frequently involve the use of agricultural machinery . The most common cause of fatal agricultural injuries in the United States is tractor rollovers, which can be prevented by the use of rollover protection structures, which limit the risk of injury in case a tractor rolls over. [ 47 ] Pesticides and other chemicals used in farming can also be hazardous to worker health, [ 48 ] and workers exposed to pesticides may experience illnesses or birth defects. [ 49 ] As an industry in which children commonly work alongside their families, agriculture is a common source of occupational injuries and illnesses among younger workers. [ 50 ] Common causes of fatal injuries among young farm workers include drowning and machinery- and motor vehicle-related accidents. [ 51 ]

The 2010 NHIS-OHS found elevated prevalence rates of several occupational exposures in the agriculture, forestry, and fishing sector which may negatively impact health. These workers often worked long hours: the prevalence rate of working more than 48 hours a week among workers employed in these industries was 37%, and 24% worked more than 60 hours a week. [ 52 ] Of all workers in these industries, 85% frequently worked outdoors, compared to 25% of all US workers. Additionally, 53% were frequently exposed to vapors, gas, dust, or fumes, compared to 25% of all US workers. [ 53 ] The mining industry still has one of the highest rates of fatalities of any industry.
[ 54 ] There are a range of hazards present in surface and underground mining operations. In surface mining, leading hazards include geological instability, [ 55 ] contact with plant and equipment, rock blasting , thermal environments (heat and cold), respiratory health ( black lung ), etc. [ 56 ] In underground mining, operational hazards include respiratory health, explosions and gas (particularly in coal mine operations), geological instability, electrical equipment, contact with plant and equipment, heat stress, inrush of bodies of water, falls from height, confined spaces , ionising radiation , etc. [ 57 ]

According to data from the 2010 NHIS-OHS, workers employed in mining and oil and gas extraction industries had high prevalence rates of exposure to potentially harmful work organization characteristics and hazardous chemicals. Many of these workers worked long hours: 50% worked more than 48 hours a week and 25% worked more than 60 hours a week in 2010. Additionally, 42% worked non-standard shifts (not a regular day shift). These workers also had high prevalence of exposure to physical/chemical hazards. In 2010, 39% had frequent skin contact with chemicals. Among nonsmoking workers, 28% of those in mining and oil and gas extraction industries had frequent exposure to secondhand smoke at work. About two-thirds were frequently exposed to vapors, gas, dust, or fumes at work. [ 58 ]

Construction is one of the most dangerous occupations in the world, incurring more occupational fatalities than any other sector in both the United States and the European Union . [ 59 ] [ 60 ] In 2009, the fatal occupational injury rate among construction workers in the United States was nearly three times that for all workers. [ 59 ] Falls are one of the most common causes of fatal and non-fatal injuries among construction workers. [ 59 ] Proper safety equipment such as harnesses and guardrails, and procedures such as securing ladders and inspecting scaffolding, can curtail the risk of occupational injuries in the construction industry. [ 61 ] Because accidents may have disastrous consequences for employees as well as organizations, it is of utmost importance to ensure the health and safety of workers and compliance with HSE construction requirements. Health and safety legislation in the construction industry involves many rules and regulations. For example, the required role of the Construction Design Management (CDM) Coordinator is aimed at improving health and safety on-site. [ 62 ]

The 2010 National Health Interview Survey Occupational Health Supplement (NHIS-OHS) identified work organization factors and occupational psychosocial and chemical/physical exposures which may increase some health risks. Among all US workers in the construction sector, 44% had non-standard work arrangements (were not regular permanent employees) compared to 19% of all US workers, 15% had temporary employment compared to 7% of all US workers, and 55% experienced job insecurity compared to 32% of all US workers. Prevalence rates for exposure to physical/chemical hazards were especially high for the construction sector. Among nonsmoking workers, 24% of construction workers were exposed to secondhand smoke while only 10% of all US workers were exposed. Other physical/chemical hazards with high prevalence rates in the construction industry were frequently working outdoors (73%) and frequent exposure to vapors, gas, dust, or fumes (51%). [ 63 ] The service sector comprises diverse workplaces.
Each type of workplace has its own health risks. While some occupations have become mobile, others still require desk work. As the number of service sector jobs has risen in developed countries, many jobs have become sedentary , presenting an array of health problems that differ from the health concerns previously associated with manufacturing and the primary sector. Contemporary health problems include obesity . Some working conditions, such as occupational stress , workplace bullying , and overwork , have negative consequences for physical and mental health. [ 64 ] [ 65 ]

Tipped wage workers are at a higher risk of negative mental health outcomes such as addiction or depression. The higher rates of mental health issues may be attributed to the precarious nature of their employment, characterized by low and unpredictable incomes, inadequate access to benefits, wage exploitation, and minimal control over work schedules and assigned shifts. [ 66 ] Close to 70% of tipped wage workers are women. [ 67 ] Additionally, "almost 40 percent of people who work for tips are people of color: 18 percent are Latino, 10 percent are African American, and 9 percent are Asian. Immigrants are also overrepresented in the tipped workforce." [ 68 ]

According to data from the 2010 NHIS-OHS, hazardous physical and chemical exposures in the service sector were lower than national averages. However, harmful organizational practices and psychosocial risks were fairly prevalent in this sector. Among all workers in the service industry, 30% experienced job insecurity in 2010, 27% worked non-standard shifts (not a regular day shift), and 21% had non-standard work arrangements (were not regular permanent employees). [ 69 ] In addition to these organizational risks, some industries pose significant physical dangers due to the manual labor involved. For instance, on a per-employee basis, the US Postal Service, UPS and FedEx are the 4th, 5th and 7th most dangerous companies to work for in the United States, respectively. [ 70 ]

In general, healthcare workers are exposed to many hazards that can adversely affect their health and well-being. [ 71 ] Long hours, changing shifts, physically demanding tasks, violence, and exposure to infectious diseases and harmful chemicals are examples of hazards that put these workers at risk for illness and injury. Musculoskeletal injury (MSI) is the most common health hazard for healthcare workers and in workplaces overall. [ 72 ] Injuries can be prevented by using proper body mechanics. [ 73 ] According to the Bureau of Labor Statistics , US hospitals recorded 253,700 work-related injuries and illnesses in 2011, which is 6.8 work-related injuries and illnesses for every 100 full-time employees. [ 74 ] The injury and illness rate in hospitals is higher than the rates in construction and manufacturing – two industries that are traditionally thought to be relatively hazardous. [ citation needed ]

An estimated 2.90 million work-related deaths occurred in 2019, an increase from 2.78 million deaths in 2015. About one-third of the total work-related deaths (31%) were due to circulatory diseases , while cancer contributed 29%, respiratory diseases 17%, and occupational injuries 11% (or about 319,000 fatalities). Other diseases such as work-related communicable diseases contributed 6%, while neuropsychiatric conditions contributed 3% and work-related digestive diseases and genitourinary diseases contributed 1% each.
The contribution of cancers and circulatory diseases to total work-related deaths increased from 2015, while deaths due to occupational injuries decreased. Although rates of fatal and non-fatal work-related injuries were on a decreasing trend, the total numbers of deaths and non-fatal outcomes were on the rise. Cancers represented the most significant cause of mortality in high-income countries. The number of non-fatal occupational injuries for 2019 was estimated at 402 million. [ 75 ] The mortality rate is unevenly distributed, with the male rate (108.3 per 100,000 employed males) significantly higher than the female rate (48.4 per 100,000). Occupational fatalities account for 6.7% of all deaths globally. [ 76 ]

Certain EU member states admit to lacking quality control in occupational safety services, to situations in which risk analysis takes place without any on-site workplace visits, and to insufficient implementation of certain EU OSH directives. Disparities between member states result in differing economic impacts of occupational hazards. In the early 2000s, the total societal costs of work-related health problems and accidents varied from 2.6% to 3.8% of the national GDPs across the member states. [ 77 ] In 2021, in the EU-27 as a whole, 93% of deaths due to workplace injury were of males. [ 78 ]

One of the decisions taken by the communist regime under Stalin was to reduce the number of accidents and occupational diseases to zero. [ 80 ] This declining tendency persisted in the Russian Federation into the early 21st century. However, as in previous years, data reporting and publication were incomplete and manipulated, so that the actual numbers of work-related diseases and accidents are unknown. [ 81 ] The ILO reports that, according to the information provided by the Russian government, there are 190,000 work-related fatalities each year, of which 15,000 are due to occupational accidents. [ 82 ]

After the demise of the USSR, enterprises became owned by oligarchs who were not interested in upholding safe and healthy conditions in the workplace. Expenditure on equipment modernization was minimal and the share of harmful workplaces increased. [ 83 ] The government did not interfere in this, and sometimes it helped employers. [ citation needed ] At first, the increase in occupational diseases and accidents was slow, because in the 1990s it was offset by mass deindustrialization. [ citation needed ] However, in the 2000s deindustrialization slowed and occupational diseases and injuries started to rise in earnest. Therefore, in the 2010s the Ministry of Labor adopted federal law no. 426-FZ. This piece of legislation has been described as ineffective and based on the superficial assumption that the issuance of personal protective equipment to the employee means a real improvement of working conditions. Meanwhile, the Ministry of Health made significant changes in the methods of risk assessment in the workplace. [ 84 ] However, specialists from the Izmerov Research Institute of Occupational Health found that the apparent post-2014 decrease in the share of employees engaged in hazardous working conditions is due to the change in definitions consequent to the Ministry of Health's decision, and does not reflect actual improvements. This was most clearly shown in the results for the aluminum industry.
[ 85 ] Further problems in the accounting of workplace fatalities arise from the fact that multiple Russian federal entities collect and publish records, a practice that should be avoided. In 2008 alone, 2,074 accidents at work may not have been reported in official government sources. [ 86 ]

In the UK there were 135 fatal injuries at work in financial year 2022–2023, compared with 651 in 1974 (the year when the Health and Safety at Work Act was promulgated). The fatal injury rate declined from 2.1 fatalities per 100,000 workers in 1981 to 0.41 in financial year 2022–2023. [ 87 ] Over recent decades, reductions in both fatal and non-fatal workplace injuries have been very significant. However, illness statistics have not uniformly improved: while musculoskeletal disorders have diminished, the rate of self-reported work-related stress, depression or anxiety has increased, and the rate of mesothelioma deaths has remained broadly flat (due to past asbestos exposures). [ 88 ]

The Occupational Safety and Health Statistics (OSHS) program in the Bureau of Labor Statistics of the United States Department of Labor compiles information about workplace fatalities and non-fatal injuries in the United States . The OSHS program produces three annual reports. The Bureau also uses tools like AgInjuryNews.org to identify and compile additional sources of fatality reports for their datasets. [ 90 ] [ 91 ] Between 1913 and 2013, workplace fatalities dropped by approximately 80%. [ 92 ] In 1970, an estimated 14,000 workers were killed on the job. By 2021, in spite of the workforce having since more than doubled, workplace deaths were down to about 5,190. [ 93 ] According to the census of occupational injuries, 5,486 people died on the job in 2022, up from the 2021 total of 5,190. The fatal injury rate was 3.7 per 100,000 full-time equivalent workers. [ 94 ] The decrease in the mortality rate is only partly (about 10–15%) explained by the deindustrialization of the US over the last 40 years. [ 95 ] About 3.5 million nonfatal workplace injuries and illnesses were reported by private industry employers in 2022, occurring at a rate of 3.0 cases per 100 full-time workers. [ 96 ] [ 97 ]

Companies may adopt a safety and health management system (SMS), [ c ] either voluntarily or because required by applicable regulations, to deal in a structured and systematic way with safety and health risks in their workplace. An SMS provides a systematic way to assess and improve the prevention of workplace accidents and incidents based on structured management of workplace risks and hazards. It must be adaptable to changes in the organization's business and legislative requirements. It is usually based on the Deming cycle, or plan-do-check-act (PDCA) principle . [ 98 ] Management standards across a range of business functions such as environment, quality and safety are now being designed so that these traditionally disparate elements can be integrated and managed within a single business management system rather than as separate and stand-alone functions. Therefore, some organizations dovetail other management system functions, such as process safety , environmental resource management or quality management , together with safety management to meet regulatory requirements, industry sector requirements and their own internal and discretionary standard requirements.
The ILO published ILO-OSH 2001, Guidelines on Occupational Safety and Health Management Systems , to assist organizations with introducing OSH management systems. These guidelines encouraged continual improvement in employee health and safety, achieved via a constant process of policy, organization, planning and implementation, evaluation, and action for improvement, all supported by constant auditing to determine the success of OSH actions. [ 99 ] From 1999 to 2018, OHSAS 18001 was adopted and widely used internationally. It was developed by a selection of national standards bodies , academic bodies, accreditation bodies, certification bodies and occupational health and safety institutions to address a gap where no third-party certifiable international standard existed. [ 100 ] It was designed for integration with ISO 9001 and ISO 14001 . [ 101 ] OHSAS 18001 was replaced by ISO 45001 , which was published in March 2018 and implemented in March 2021. [ citation needed ]

National management system standards for occupational health and safety include AS / NZS 4801 for Australia and New Zealand (now superseded by ISO 45001), [ 102 ] [ 103 ] CSA Z1000:14 for Canada (which is due to be discontinued in favor of CSA Z45001:19, the Canadian adoption of ISO 45000) [ 104 ] and ANSI / ASSP Z10 for the United States. [ 105 ] In Germany, the Bavarian state government, in collaboration with trade associations and private companies, issued its OHRIS standard for occupational health and safety management systems; a new revision was issued in 2018. [ 106 ] The Taiwan Occupational Safety and Health Management System (TOSHMS) was issued in 1997 under the auspices of Taiwan's Occupational Safety and Health Administration. [ 107 ]

The terminology used in OSH varies between countries, but generally centres on the terms "hazard", "risk", and "outcome". These terms are also used in other fields to describe, e.g., environmental damage or damage to equipment. However, in the context of OSH, "harm" generally describes the direct or indirect degradation, temporary or permanent, of the physical, mental, or social well-being of workers. For example, repetitively carrying out manual handling of heavy objects is a hazard. The outcome could be a musculoskeletal disorder (MSD) or an acute back or joint injury. The risk can be expressed numerically (e.g., a 0.5 or 50/50 chance of the outcome occurring during a year), in relative terms (e.g., "high/medium/low"), or with a multi-dimensional classification scheme (e.g., situation-specific risks). [ citation needed ]

Hazard identification is an important step in the overall risk assessment and risk management process. It is where individual work hazards are identified, assessed and controlled or eliminated as close to the source (location of the hazard) as reasonably practicable. As technology, resources, social expectations or regulatory requirements change, hazard analysis focuses controls more closely toward the source of the hazard. Thus, hazard control is a dynamic program of prevention. Hazard-based programs also have the advantage of not assigning or implying that there are "acceptable risks" in the workplace. [ 109 ] A hazard-based program may not be able to eliminate all risks, but neither does it accept "satisfactory" – but still risky – outcomes. And since those who calculate and manage the risk are usually managers, while those exposed to the risks are a different group, a hazard-based approach can bypass the conflict inherent in a risk-based approach.
[ citation needed ] The information that needs to be gathered from sources should apply to the specific type of work from which the hazards can come. Examples of these sources include interviews with people who have worked in the field of the hazard, history and analysis of past incidents, and official reports of work and the hazards encountered. Of these, personnel interviews may be the most critical for identifying undocumented practices, events, releases, hazards and other relevant information. Once the information is gathered from a collection of sources, it is recommended that it be archived digitally (to allow for quick searching) and that a physical set of the same information be kept so that it is more accessible. One innovative way to display complex historical hazard information is with a historical hazards identification map, which distills the hazard information into an easy-to-use graphical format. [ citation needed ]

Modern occupational safety and health legislation usually demands that a risk assessment be carried out prior to making an intervention. The calculation of risk is based on the likelihood or probability of the harm being realized and the severity of the consequences. This can be expressed mathematically as a quantitative assessment (by assigning low, medium and high likelihood and severity with integers and multiplying them to obtain a risk factor ; a minimal numerical sketch is given below), or qualitatively as a description of the circumstances by which the harm could arise. [ citation needed ] The assessment should be recorded and reviewed periodically and whenever there is a significant change to work practices. The assessment should include practical recommendations to control the risk. Once recommended controls are implemented, the risk should be re-calculated to determine whether it has been lowered to an acceptable level. Generally speaking, newly introduced controls should lower risk by one level, i.e., from high to medium or from medium to low. [ 110 ]

Occupational safety and health practices vary among nations, with different approaches to legislation, regulation, enforcement, and incentives for compliance. In the EU, for example, some member states promote OSH by providing public monies as subsidies, grants or financing, while others have created tax system incentives for OSH investments. A third group of EU member states has experimented with using workplace accident insurance premium discounts for companies or organizations with strong OSH records. [ 111 ] [ 112 ]

In Australia , four of the six states and both territories have enacted and administer harmonized work health and safety legislation in accordance with the Intergovernmental Agreement for Regulatory and Operational Reform in Occupational Health and Safety. [ 113 ] Each of these jurisdictions has enacted work health and safety legislation and regulations based on the Commonwealth Work Health and Safety Act 2011 and common codes of practice developed by Safe Work Australia . [ 114 ] Some jurisdictions have also included mine safety under the model approach; however, most have retained separate legislation for the time being. In August 2019, Western Australia committed to joining nearly every other state and territory in implementing the harmonized Model WHS Act, Regulations and other subsidiary legislation. [ 115 ] Victoria has retained its own regime, although the Model WHS laws themselves drew heavily on the Victorian approach. [ citation needed ]
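Returning to the semi-quantitative risk calculation described above, a minimal sketch of the integer likelihood × severity scoring follows; the score-to-level cut-offs are hypothetical, since real schemes define their own bands.

```python
LIKELIHOOD = {"low": 1, "medium": 2, "high": 3}
SEVERITY = {"low": 1, "medium": 2, "high": 3}

def risk_factor(likelihood, severity):
    """Semi-quantitative risk score: integer likelihood multiplied by integer severity."""
    return LIKELIHOOD[likelihood] * SEVERITY[severity]

def risk_level(score):
    # Hypothetical cut-offs for illustration only.
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

before = risk_factor("high", "medium")    # 6 -> "high" before a control is introduced
after = risk_factor("medium", "medium")   # 4 -> "medium" once the control reduces the likelihood
print(risk_level(before), "->", risk_level(after))  # high -> medium
```

Here the added control lowers the likelihood score, and the overall risk drops by one level, mirroring the rule of thumb cited above.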
[ citation needed ] In Canada , workers are covered by provincial or federal labor codes depending on the sector in which they work. Workers covered by federal legislation (including those in mining, transportation, and federal employment) are covered by the Canada Labour Code ; all other workers are covered by the health and safety legislation of the province in which they work. [ 116 ] [ 117 ] The Canadian Centre for Occupational Health and Safety (CCOHS), an agency of the Government of Canada, was created in 1978 by an act of parliament. CCOHS is mandated to promote safe and healthy workplaces and help prevent work-related injuries and illnesses. [ 118 ] There are significant common elements across relevant provincial OHS legislation. The foundation of each of these legislative frameworks is the belief that all Canadians have "a fundamental right to a healthy and safe working environment." In general, provincial workplace safety laws in Canada are designed to promote shared responsibility, prevent accidents, and ensure accountability at all levels of an organization. Employers, supervisors, and workers are expected to work together to minimize risks. Employers, in particular, are legally obligated to take every reasonable precaution to protect workers. If the workplace has more than a few employees, they are required to develop written health and safety policies and procedures. Employers must also provide and maintain equipment and machinery in a safe working condition. Additionally, employers must inform, instruct, and supervise workers to ensure safe work practices are followed. Employers are also responsible for supplying necessary protective equipment and ensuring it is used correctly, whether it involves machine guards or personal protective equipment (PPE). Supervisors have a duty to ensure that workers use all required safety devices and comply with established procedures. They must also communicate information about existing or potential hazards and provide guidance on how to work safely. Workers also have the right to refuse work if they believe it is unsafe and poses a danger to themselves or others. [ 119 ] [ additional citation(s) needed ] In workplaces with a set minimum number of employees (twenty in the case of workplaces under federal jurisdiction [ 120 ] ), it is mandatory to have a health and safety committee. This, made up of both worker and management representatives, meets regularly to identify hazards, investigate incidents, and make recommendations to improve workplace safety. These committees are crucial for fostering collaboration and addressing safety concerns in a timely manner. [ 121 ] Law also requires employers to take defined steps to prevent workplace violence and harassment . They must create a workplace violence policy along with a program that identifies risks and outlines procedures for addressing them. A separate workplace harassment policy must explain how complaints should be reported and investigated. Employers are required to train employees on these policies to ensure awareness and compliance. All incidents involving violence, threats, or persistent harassment must be taken seriously and handled appropriately. [ 122 ] [ 123 ] In severe cases involving serious injury or death due to negligence, organizations and individuals can be prosecuted under the Criminal Code of Canada through the provisions introduced by Bill C-45 . In some provinces, like Ontario , this introduces serious criminal consequences for safety violations. 
[ 124 ] Workplaces are also subject to federal regulations under WHMIS, the Workplace Hazardous Materials Information System . WHMIS governs the labeling, documentation, and communication of hazardous materials. Employers must ensure that all hazardous substances are properly labeled, that material safety data sheets are readily available, and that workers are trained on how to handle these materials safely. [ 125 ] As an example of arrangements at a provincial level, Ontario's primary workplace safety legislation is the Occupational Health and Safety Act (OHSA). This law sets out the responsibilities of employers, supervisors, and workers to promote a safe and healthy work environment. Ontario's occupational health and safety framework is built around the concept known as the "Internal Responsibility System," which means that everyone in the workplace shares responsibility for recognizing and addressing safety concerns. The OHSA is enforced by Ontario's Ministry of Labour, Immigration, Training and Skills Development . Ministry inspectors have the authority to visit workplaces, investigate complaints, and issue orders. Failure to comply with the law can lead to substantial fines and penalties, and individual supervisors or managers may also be held personally liable. [ 126 ] [ 127 ] In China , the Ministry of Health is responsible for occupational disease prevention and the State Administration of Work Safety for workplace safety issues. [ citation needed ] The Work Safety Law (安全生产法) was issued on 1 November 2002. [ 128 ] [ 129 ] The Occupational Disease Control Act came into force on 1 May 2002. [ 130 ] In 2018, the National Health Commission (NHC) was formally established to formulate national health policies. The NHC formulated the "National Occupational Disease Prevention and Control Plan (2021–2025)" in the context of the activities leading to the "Healthy China 2030" initiative. [ 128 ] The European Agency for Safety and Health at Work was founded in 1994. In the European Union , member states have enforcing authorities to ensure that the basic legal requirements relating to occupational health and safety are met. In many EU countries, there is strong cooperation between employer and worker organizations (e.g., unions ) to ensure good OSH performance, as it is recognized this has benefits for both the worker (through maintenance of health) and the enterprise (through improved productivity and quality ). [ citation needed ] Member states have all transposed into their national legislation a series of directives that establish minimum standards on occupational health and safety. These directives (of which there are about 20 on a variety of topics) follow a similar structure requiring the employer to assess workplace risks and put in place preventive measures based on a hierarchy of hazard control . This hierarchy starts with elimination of the hazard and ends with personal protective equipment . [ citation needed ] In Denmark , occupational safety and health is regulated by the Danish Act on Working Environment and Cooperation at the Workplace. [ 132 ] The Danish Working Environment Authority ( Arbejdstilsynet ) carries out inspections of companies, draws up more detailed rules on health and safety at work and provides information on health and safety at work.
[ 133 ] The result of each inspection is made public on the web pages of the Danish Working Environment Authority so that the general public, current and prospective employees, customers and other stakeholders can inform themselves about whether a given organization has passed the inspection. [ citation needed ] In the Netherlands , the laws for safety and health at work are registered in the Working Conditions Act ( Arbeidsomstandighedenwet and Arbeidsomstandighedenbeleid ). Apart from the direct laws directed to safety and health in working environments, the private domain has added health and safety rules in Working Conditions Policies ( Arbeidsomstandighedenbeleid ), which are specified per industry. The Ministry of Social Affairs and Employment (SZW) monitors adherence to the rules through their inspection service. This inspection service investigates industrial accidents and it can suspend work and impose fines when it deems the Working Conditions Act has been violated. Companies can get certified with a VCA certificate for safety, health and environment performance. All employees have to obtain a VCA certificate too, with which they can prove that they know how to work according to the current and applicable safety and environmental regulations. [ citation needed ] The main health and safety regulation in Ireland is the Safety, Health and Welfare at Work Act 2005, [ 134 ] which replaced earlier legislation from 1989. The Health and Safety Authority , based in Dublin , is responsible for enforcing health and safety at work legislation. [ 134 ] In Spain , occupational safety and health is regulated by the Spanish Act on Prevention of Labor Risks. The Ministry of Labor is the authority responsible for issues relating to labor environment. [ citation needed ] The National Institute for Safety and Health at Work ( Instituto Nacional de Seguridad y Salud en el Trabajo , INSST) is the government's scientific and technical organization specialized in occupational safety and health. [ 135 ] In Sweden , occupational safety and health is regulated by the Work Environment Act. [ 136 ] The Swedish Work Environment Authority ( Arbetsmiljöverket ) is the government agency responsible for issues relating to the working environment. The agency works to disseminate information and furnish advice on OSH, has a mandate to carry out inspections, and a right to issue stipulations and injunctions to any non-compliant employer. [ 137 ] In India , the Ministry of Labour and Employment formulates national policies on occupational safety and health in factories and docks with advice and assistance from its Directorate General Factory Advice Service and Labour Institutes (DGFASLI), and enforces its policies through inspectorates of factories and inspectorates of dock safety. The DGFASLI provides technical support in formulating rules, conducting occupational safety surveys and administering occupational safety training programs. [ 138 ] In Indonesia , the Ministry of Manpower ( Kementerian Ketenagakerjaan , or Kemnaker) is responsible to ensure the safety, health and welfare of workers. Important OHS acts include the Occupational Safety Act 1970 and the Occupational Health Act 1992. [ 139 ] Sanctions, however, are still low (with a maximum of 15 million rupiahs fine and/or a maximum of one year in prison) and violations are still very frequent. [ 140 ] The Japanese Ministry of Health, Labor and Welfare (MHLW) is the governmental agency overseeing occupational safety and health in Japan . 
The MHLW is responsible for enforcing the Industrial Safety and Health Act of 1972 (the key piece of OSH legislation in Japan), setting regulations and guidelines, supervising labor inspectors who monitor workplaces for compliance with safety and health standards, investigating accidents, and issuing orders to improve safety conditions. The Labor Standards Bureau is an arm of MHLW tasked with supervising and guiding businesses, inspecting manufacturing facilities for safety and compliance, investigating accidents, collecting statistics, enforcing regulations and administering fines for safety violations, and paying accident compensation for injured workers. [ 141 ] [ 142 ] The Japan Industrial Safety and Health Association (JISHA) is a non-profit organization established under the Industrial Safety and Health Act of 1972. It works closely with MHLW, the regulatory body, to promote workplace safety and health. The responsibilities of JISHA include providing education and training on occupational safety and health, conducting research and surveys on workplace safety and health issues, offering technical guidance and consultations to businesses, disseminating information and raising awareness about occupational safety and health, and collaborating with international organizations to share best practices and improve global workplace safety standards. [ 143 ] The Japan National Institute of Occupational Safety and Health (JNIOSH) conducts research to support governmental policies in occupational safety and health. The organization categorizes its research into project studies, cooperative research, fundamental research, and government-requested research. Each category focuses on specific themes, from preventing accidents and ensuring workers' health, to addressing changes in employment structure. The organization sets clear goals, develops road maps, and collaborates with the Ministry of Health, Labor and Welfare to discuss progress and policy contributions. [ 144 ] In Malaysia , the Department of Occupational Safety and Health (DOSH) under the Ministry of Human Resources is responsible for ensuring that the safety, health and welfare of workers in both the public and private sectors are upheld. DOSH is responsible for enforcing the Factories and Machinery Act 1967 and the Occupational Safety and Health Act 1994 . Malaysia has a statutory mechanism for worker involvement through elected health and safety representatives and health and safety committees. [ 145 ] This followed a similar approach originally adopted in Scandinavia. [ citation needed ] In Saudi Arabia , the Ministry of Human Resources and Social Development administers workers' rights and the labor market as a whole, consistent with human rights rules upheld by the Human Rights Commission of the kingdom. [ 146 ] In Singapore , the Ministry of Manpower (MOM) is the government agency in charge of OHS policies and enforcement. The key piece of legislation regulating aspects of OHS is the Workplace Safety and Health Act . [ 147 ] The MOM promotes and manages campaigns against unsafe work practices, such as when working at height, operating cranes and in traffic management. Examples include Operation Cormorant and the Falls Prevention Campaign. [ 148 ] In South Africa , the Department of Employment and Labour is responsible for occupational health and safety inspection and enforcement in the commercial and industrial sectors, with the exclusion of mining, where the Department of Mineral Resources is responsible.
[ 149 ] [ 150 ] The main statutory legislation on health and safety in the jurisdiction of the Department of Employment and Labour is the OHS Act or OHSA (Act No. 85 of 1993: Occupational Health and Safety Act, as amended by the Occupational Health and Safety Amendment Act, No. 181 of 1993). [ 149 ] A number of regulations implement the OHS Act. [ 151 ] In Syria , health and safety is the responsibility of the Ministry of Social Affairs and Labor ( Arabic : وزارة الشؤون الاجتماعية والعمل , romanized : Wizārat al-Shuʼūn al-ijtimāʻīyah wa-al-ʻamal ). [ 159 ] In Taiwan , the Occupational Safety and Health Administration of the Ministry of Labor is in charge of occupational safety and health. [ 160 ] The matter is governed under the Occupational Safety and Health Act. [ 161 ] In the United Arab Emirates , national OSH legislation is based on the Federal Law on Labor (1980). Order No. 32 of 1982 on Protection from Hazards and Ministerial Decision No. 37/2 of 1982 are also of importance. [ 162 ] The competent authority for safety and health at work at the federal level is the Ministry of Human Resources and Emiratisation (MoHRE). [ 163 ] Health and safety legislation in the UK is drawn up and enforced by the Health and Safety Executive and local authorities under the Health and Safety at Work etc. Act 1974 (HASAWA or HSWA). [ 164 ] [ 165 ] HASAWA introduced (in section 2) a general duty on an employer to ensure, so far as is reasonably practicable , the health, safety and welfare at work of all his employees. The intention was to provide a legal framework supporting codes of practice which do not in themselves have legal force but which establish a strong presumption as to what is reasonably practicable (deviations from them could be justified by appropriate risk assessment). The previous reliance on detailed prescriptive rule-setting was seen as having failed to respond rapidly enough to technological change, leaving new technologies potentially unregulated or inappropriately regulated. [ 166 ] HSE has continued to make some regulations giving absolute duties (where something must be done with no "reasonable practicability" test) but in the UK the regulatory trend is away from prescriptive rules, and toward goal setting and risk assessment. Recent major changes to the laws governing asbestos and fire safety management embrace the concept of risk assessment. The other key aspect of the UK legislation is a statutory mechanism for worker involvement through elected health and safety representatives and health and safety committees. This followed a similar approach in Scandinavia, and that approach has since been adopted in countries such as Australia, Canada, New Zealand and Malaysia. [ citation needed ] The Health and Safety Executive service dealing with occupational medicine has been the Employment Medical Advisory Service . In 2014 a new occupational health organization, the Health and Work Service , was created to provide advice and assistance to employers in order to help employees on long-term sick leave return to work. [ 167 ] The service, funded by the government, offers medical assessments and treatment plans, on a voluntary basis, to people on long-term absence from their employer; in return, the government no longer foots the bill for statutory sick pay provided by the employer to the individual. [ citation needed ] In the United States , President Richard Nixon signed the Occupational Safety and Health Act into law on 29 December 1970.
The act created the three agencies which administer it: the Occupational Safety and Health Administration (OSHA), the National Institute for Occupational Safety and Health (NIOSH), and the Occupational Safety and Health Review Commission (OSHRC). [ 168 ] The act authorized OSHA to regulate private employers in the 50 states, the District of Columbia , and territories . [ 169 ] It includes a general duty clause (29 U.S.C. §654, 5(a)) requiring an employer to comply with the Act and regulations derived from it, and to provide employees with "employment and a place of employment which are free from recognized hazards that are causing or are likely to cause [them] death or serious physical harm." [ 170 ] OSHA was established in 1971 under the Department of Labor . It has headquarters in Washington, DC, and ten regional offices, further broken down into districts, each organized into three sections: compliance, training, and assistance. Its stated mission is "to ensure safe and healthful working conditions for workers by setting and enforcing standards and by providing training, outreach, education and assistance." [ 169 ] The original plan was for OSHA to oversee 50 state plans, with OSHA funding 50% of each plan, but this did not work out that way: as of 2023 there are 26 approved state plans (with four covering only public employees) and OSHA manages the plan in the states not participating. [ 93 ] OSHA develops safety standards in the Code of Federal Regulations and enforces those safety standards through compliance inspections conducted by Compliance Officers; enforcement resources are focused on high-hazard industries. Worksites may apply to enter OSHA's Voluntary Protection Program (VPP). A successful application leads to an on-site inspection; if this is passed, the site gains VPP status, and OSHA does not inspect it annually nor (normally) visit it until VPP revalidation (after three to five years), unless there is a fatal accident or an employee complaint. VPP sites generally have injury and illness rates less than half the average for their industry. [ citation needed ] OSHA has a number of specialists in local offices to provide information and training to employers and employees at little or no cost. [ 4 ] Similarly, OSHA produces a range of publications and funds consultation services available for small businesses. [ citation needed ] OSHA has strategic partnership and alliance programs to develop guidelines, assist in compliance, share resources, and educate workers in OHS. [ 93 ] OSHA manages Susan B. Harwood grants to non-profit organizations to train workers and employers to recognize, avoid, and prevent safety and health hazards in the workplace. [ 171 ] Grants focus on small business, hard-to-reach workers and high-hazard industries. [ 172 ] The National Institute for Occupational Safety and Health (NIOSH), also created under the Occupational Safety and Health Act, is the federal agency responsible for conducting research and making recommendations for the prevention of work-related injury and illness. NIOSH is part of the Centers for Disease Control and Prevention (CDC) within the Department of Health and Human Services . [ 173 ] Those in the field of occupational safety and health come from a wide range of disciplines and professions including medicine , occupational medicine , epidemiology , physiotherapy and rehabilitation , psychology , human factors and ergonomics , and many others. Professionals advise on a broad range of occupational safety and health matters.
These include how to avoid particular pre-existing conditions causing a problem in the occupation, correct posture, frequency of rest breaks, preventive actions that can be undertaken, and so forth. The quality of occupational safety is characterized by (1) the indicators reflecting the level of industrial injuries, (2) the average number of days of incapacity for work per employee, (3) employees' satisfaction with their work conditions and (4) employees' motivation to work safely. [ 174 ] The main tasks undertaken by the OSH practitioner span worksite examination, training, and incident investigation. OSH specialists examine worksites for environmental or physical factors that could harm employee health, safety, comfort or performance. They then find ways to address potential risk factors. For example, they may notice potentially hazardous conditions inside a chemical plant and suggest changes to lighting, equipment, materials, or ventilation. OSH technicians assist specialists by collecting data on work environments and implementing the worksite improvements that specialists plan. Technicians also may check to make sure that workers are using required protective gear, such as masks and hardhats. OSH specialists and technicians may develop and conduct employee training programs. These programs cover a range of topics, such as how to use safety equipment correctly and how to respond in an emergency. In the event of a workplace safety incident, specialists and technicians investigate its cause. They then analyze data from the incident, such as the number of people impacted, and look for trends in occurrence. This evaluation helps them to recommend improvements to prevent future incidents. [ 175 ] Given the high demand in society for health and safety provisions at work based on reliable information, OSH professionals should find their roots in evidence-based practice. A new term is "evidence-informed decision making". Evidence-based practice can be defined as the use of evidence from literature, and other evidence-based sources, for advice and decisions that favor the health, safety, well-being, and work ability of workers. Therefore, evidence-based information must be integrated with professional expertise and the workers' values. Contextual factors related to legislation, culture, and financial and technical possibilities must also be considered, and ethical considerations should be heeded. [ 176 ] The roles and responsibilities of OSH professionals vary regionally but may include evaluating working environments, developing, endorsing and encouraging measures that might prevent injuries and illnesses, providing OSH information to employers, employees, and the public, providing medical examinations, and assessing the success of worker health programs. [ citation needed ] In the Netherlands, the required tasks for health and safety staff are only summarily defined. [ 177 ] Dutch law influences the job of the safety professional mainly through the requirement on employers to use the services of a certified working-conditions service for advice. A certified service must employ sufficient numbers of four types of certified experts to cover the risks in the organizations which use the service. In 2004, 14% of health and safety practitioners in the Netherlands had an MSc and 63% had a BSc ; 23% had training as an OSH technician. [ 178 ] In Norway, the main required tasks of an occupational health and safety practitioner are likewise set out in national regulation. In 2004, 37% of health and safety practitioners in Norway had an MSc and 44% had a BSc .
19% had training as an OSH technician. [ 178 ] There are multiple levels of training applicable to the field of occupational safety and health. Programs range from individual non-credit certificates and awareness courses focusing on specific areas of concern, to full doctoral programs. The University of Southern California was one of the first schools in the US to offer a PhD program focusing on the field. Further, multiple master's degree programs exist, such as those of Indiana State University , which offers MSc and MA programs. Other master's-level qualifications include the MSc and Master of Research (MRes) degrees offered by the University of Hull in collaboration with the National Examination Board in Occupational Safety and Health (NEBOSH). Graduate programs are designed to train educators, as well as high-level practitioners. [ citation needed ] Many OSH generalists focus on undergraduate studies; programs within schools, such as the University of North Carolina 's online BSc in environmental health and safety, fill a large majority of hygienist needs. However, smaller companies often do not have full-time safety specialists on staff and thus appoint a current employee to the responsibility. Individuals finding themselves in such positions, or those seeking to enhance their marketability for job searches and promotion, may seek out a credit certificate program. For example, the University of Connecticut 's online OSH certificate [ 179 ] familiarizes students with overarching concepts through a 15-credit (5-course) program. Programs such as these are often adequate tools in building a strong educational platform for new safety managers with a minimal outlay of time and money. Further, most hygienists seek certification by organizations that train in specific areas of concentration, focusing on isolated workplace hazards. The American Society of Safety Professionals (ASSP), Board for Global EHS Credentialing (BGC), and American Industrial Hygiene Association (AIHA) offer individual certificates on many different subjects from forklift operation to waste disposal and are the chief facilitators of continuing education in the OSH sector. [ citation needed ] In the US, the training of safety professionals is supported by NIOSH through their NIOSH Education and Research Centers . In the UK, both NEBOSH and the Institution of Occupational Safety and Health (IOSH) develop health and safety qualifications and courses which cater to a mixture of industries and levels of study. Although both organizations are based in the UK, their qualifications are recognized and studied internationally as they are delivered through their own global networks of approved providers. The Health and Safety Executive has also developed health and safety qualifications in collaboration with NEBOSH. [ citation needed ] In Australia, training in OSH is available at the vocational education and training level, and at university undergraduate and postgraduate level. Such university courses may be accredited by an accreditation board of the Safety Institute of Australia . The institute has produced a Body of Knowledge which it considers to be required by a generalist safety and health professional, and it offers a professional qualification. [ 180 ] The Australian Institute of Health and Safety has instituted the national Eric Wigglesworth OHS Education Medal to recognize achievement in OSH doctorate education.
[ 181 ] Informal or field training may be delivered in the workplace or during off-site training sessions. One form of training delivered in the workplace is known as a toolbox talk . According to the UK's Health and Safety Executive, a toolbox talk is a short presentation to the workforce on a single aspect of health and safety. [ 182 ] Such talks are often used, especially in the construction industry , by site supervisors, frontline managers and owners of small construction firms to prepare and deliver advice on matters of health, safety and the environment and to obtain feedback from the workforce. [ 183 ] Virtual reality is a novel tool to deliver safety training in many fields. Some applications have been developed and tested especially for fire and construction safety training. [ 184 ] [ 185 ] Preliminary findings suggest that virtual reality is more effective than traditional training for knowledge retention. [ 186 ] On an international scale, the World Health Organization (WHO) and the International Labour Organization (ILO) have begun focusing on labor environments in developing nations with projects such as Healthy Cities . [ 187 ] Many of these developing countries are stuck in a situation in which their relative lack of resources to invest in OSH leads to increased costs due to work-related illnesses and accidents. [ citation needed ] The ILO estimates that work-related illness and accidents cost up to 10% of GDP in Latin America, compared with just 2.6% to 3.8% in the EU. [ 188 ] There is continued use of asbestos, a notorious hazard, in some developing countries, so asbestos-related disease is expected to remain a significant problem well into the future. [ citation needed ] There are several broad aspects of artificial intelligence (AI) that may give rise to specific hazards. Many hazards of AI are psychosocial in nature due to its potential to cause changes in work organization. [ 189 ] For example, AI is expected to lead to changes in the skills required of workers, requiring retraining of existing workers, flexibility, and openness to change. [ 190 ] Increased monitoring may lead to micromanagement or a perception of surveillance , and thus to workplace stress. There is also the risk of people being forced to work at a robot's pace, or to monitor robot performance at nonstandard hours. Additionally, algorithms trained on past decisions may show algorithmic bias and mimic undesirable human biases, for example, past discriminatory hiring and firing practices . [ 191 ] Some approaches to accident analysis may be biased to safeguard a technological system and its developers by assigning blame to the individual human operator instead. [ 192 ] Physical hazards in the form of human–robot collisions may arise from robots using AI, especially collaborative robots ( cobots ). Cobots are intended to operate in close proximity to humans, which makes it impossible to implement the common hazard control, widely used for traditional industrial robots , of isolating the robot using fences or other barriers. Automated guided vehicles are a type of cobot in common use, often as forklifts or pallet jacks in warehouses or factories. [ 193 ] Both applications and hazards arising from AI can be considered as part of existing frameworks for occupational health and safety risk management. As with all hazards, risk identification is most effective and least costly when done in the design phase.
[ 189 ] AI, in common with other computational technologies, requires cybersecurity measures to stop software breaches and intrusions, [ 194 ] as well as information privacy measures. [ 195 ] Communication and transparency with workers about data usage is a control for psychosocial hazards arising from security and privacy issues. [ 195 ] Workplace health surveillance , the collection and analysis of health data on workers, is challenging for AI because labor data are often reported in aggregate, do not provide breakdowns between different types of work, and focus on economic data such as wages and employment rates rather than the skill content of jobs. [ 196 ] The National Institute for Occupational Safety and Health (NIOSH) National Occupational Research Agenda Manufacturing Council established an externally led COVID-19 workgroup to provide exposure control information specific to working in manufacturing environments. The workgroup identified disseminating information most relevant to manufacturing workplaces as a priority, including providing content on Wikipedia. This includes evidence-based practices for infection control plans [ 197 ] and communication tools. Nanotechnology is an example of a new, relatively unstudied technology. A Swiss survey of 138 companies using or producing nanoparticulate matter in 2006 resulted in forty completed questionnaires. Sixty-five per cent of respondent companies stated they did not have a formal risk assessment process for dealing with nanoparticulate matter. [ 198 ] Nanotechnology already presents new issues for OSH professionals that will only become more difficult as nanostructures become more complex. The size of the particles renders most containment and personal protective equipment ineffective. The toxicology values for macro-sized industrial substances are rendered inaccurate due to the unique nature of nanoparticulate matter. As nanoparticulate matter decreases in size, its relative surface area increases dramatically, increasing any catalytic effect or chemical reactivity substantially versus the known value for the macro substance. This presents a new set of challenges in the near future, requiring contemporary measures for safeguarding the health and welfare of employees to be rethought, since most conventional controls have not been designed to manage nanoparticulate substances. [ 199 ] Occupational health inequalities refer to differences in occupational injuries and illnesses that are closely linked with demographic, social, cultural, economic, and/or political factors. [ 200 ] Although many advances have been made to rectify gaps in occupational health within the past half century, many still persist due to the complex overlap of occupational health and social factors. [ 201 ] There are three main areas of research on occupational health inequities. Immigrant worker populations often are at greater risk for workplace injuries and fatalities. For example, within the United States, immigrant Mexican workers have one of the highest rates of fatal workplace injuries out of all of the working population. Statistics like these are explained through a combination of social, structural, and physical aspects of the workplace. These workers struggle to access safety information and resources in their native languages because of a lack of social and political inclusion. In addition to linguistically tailored interventions, it is also critical for the interventions to be culturally appropriate.
[ 205 ] Those residing in a country to work without a visa or other formal authorization may also not have access to legal resources and recourse that are designed to protect most workers. Health and Safety organizations that rely on whistleblowers instead of their own independent inspections may be especially at risk of having an incomplete picture of worker health.
https://en.wikipedia.org/wiki/Occupational_safety_and_health
Occupational toxicology is the application of toxicology to chemical hazards in the workplace. It focuses on substances and conditions that people may be exposed to in workplaces, including inhalation and dermal exposures , which are the most prevalent exposure routes in occupational settings. These environmental and individual exposures can impact health, and there is a focus on identifying early adverse effects that are more subtle than those presented in clinical medicine. Occupational toxicology interfaces heavily with other subfields of occupational safety and health . Occupational epidemiology studies may inspire toxicological study of causative agents, and toxicological investigations are important in establishing biomarkers for workplace health surveillance . Occupational toxicology studies may suggest or evaluate hazard controls used by industrial hygienists . Toxicological studies are also an important input for performing occupational risk assessment , and establishing standards and regulation such as occupational exposure limits . As of 1983, around 60,000 chemical compounds were considered to be of occupational consequence. [ 1 ] Certain sectors have an increased potential for exposure to chemical and biological agents, including manufacturing , construction , mining , logging , and agriculture , as well as service sector workplaces such as in automobile repair, gasoline stations, pipelines , truck and rail transportation, waste management and remediation, and botanical gardens. [ 2 ] These sectors carry an increased risk of exposure largely because workers operate heavy machinery that can emit potentially harmful fumes. [ 3 ] Additionally, these sectors involve directly handling various substances that may contain harmful chemical compounds. Toxicological studies are experimental laboratory studies on the response of organisms and biological pathways to a substance, and can generate data that are used for other occupational safety and health activities. [ 4 ] These studies can range from two weeks to two years and primarily focus on determining whether the compound is toxic or carcinogenic and, if so, how toxic it is. [ 5 ] To discover whether a compound is toxic or carcinogenic, toxicologists expose mice to the compound being studied and examine them over a given period of time. They then look for any patterns in the mice that may suggest toxicity or carcinogenicity and draw conclusions from these data. [ 5 ] Occupational toxicology generates data that are used to identify hazards and their physiological effects, and quantify dose–response relationships . [ 4 ] A major use of this data is for establishing standards and regulation. These may take the form of occupational exposure limits , which are based on ambient concentration levels of toxicants. They also include biological exposure indices, which are based on biomonitoring of a toxicant, its metabolites , or other biomarkers . [ 2 ] Toxicologists have a large role in determining what biomarkers may be used for biomonitoring during exposure assessment and workplace health surveillance activities. [ 4 ] Occupational toxicology is complementary to occupational epidemiology , to a greater degree than toxicology and epidemiology in general. For example, outbreaks identified through epidemiological studies such as exposure assessment case studies or workplace health surveillance may inspire toxicological study of suspected or confirmed causative agents.
[ 1 ] [ 2 ] Conversely, the results of toxicological investigation are important in establishing biomarkers for workplace health surveillance to identify overexposure and to test the validity of occupational exposure limits. These biomarkers are intended to aid in prevention by identifying early adverse effects, unlike diagnostics for clinical medicine that are designed to reveal advanced pathological states. [ 2 ] Toxicological studies have the benefit over epidemiology that they can study new substances before there is exposure in commerce, [ 2 ] or when epidemiological data are not available. [ 4 ] Toxicology also has the advantage of elucidating not only overt health outcomes, but intermediate biochemical steps such as biotransformation processes, as well as early cellular changes. These can aid in developing measures to prevent or treat toxicity. [ 4 ] Occupational toxicology studies may also suggest or evaluate hazard controls used by industrial hygienists . [ 1 ] Occupational toxicology differs from environmental toxicology in that the former involves a smaller number of exposed individuals but a wider range of exposure levels. Environmental toxicology tends to focus on situations with low exposure levels for larger numbers of people, where adverse effects may be concentrated in people who are especially susceptible to a given toxicant due to genetic or other factors. [ 6 ] Occupational toxicology has the challenge of performing studies that mimic actual workplace conditions, for which inhalation exposure and dermal exposure are most important, [ 1 ] [ 2 ] although in medical industries, injection exposure through needlestick injuries is a hazard. [ 4 ] In particular, experimental inhalation exposure studies require more complex methodology and equipment than oral administration experiments. For example, measurement and control of particle size distribution is important, as are the degree and location of particle retention within the respiratory tract. [ 2 ] Inhalation and injection exposure are often more dangerous than dermal exposure, where a major function of skin is to provide a barrier to outside toxins, and ingestion exposure, where toxins may be broken down by the gastrointestinal tract and liver . [ 4 ] There is often exposure to mixtures of chemicals, whose effects may not be simply additive, as different toxins may interact in a way that enhances or reduces their toxicity relative to each toxin alone. [ 4 ] Mixtures may include undesired contaminants in a product, or products that deviate from manufacturer specifications. Exposures are not always acute, but may be at low levels prolonged over decades. [ 2 ] Workers may be exposed to toxic substances at higher levels than the general public, who are mainly exposed through consumer products and the environment. [ 7 ] Establishing a causal relationship between a worker's illness and work conditions is often difficult because work-related illnesses are often indistinguishable from those with other causes, and there may be a long interval between the exposure and the onset of disease. [ 2 ] While the dose of a toxicant is a strong predictor of health outcomes, occupational diseases are often influenced or confounded by other environmental factors, or personal host factors such as preexisting health conditions, host genetics, or patterns of worker behavior.
These affect the relationship between the concentration, duration, and frequency of the exposure, and the actual toxicant dose that reaches a target tissue and interacts with metabolic processes. For example, the ultimate dose from inhalation exposure depends on respiratory rate and breathing volume, and the dose from dermal exposure depends on the absorption rate through the skin, which is influenced by the properties of the chemical, the thickness of the skin at the exposed location on the body, and whether the skin is intact. [ 2 ] Occupational toxicology uses methods common to other forms of toxicology. Animal testing is used to identify adverse effects and establish acceptable exposure levels, as well as to study the mechanism of action and the dose–response relationship . There are in vitro alternatives to animal testing in a number of specific cases, such as predicting skin sensitizers and the potential for eye injuries, as well as quantitative structure–activity relationship models. Sometimes, controlled human challenge studies are performed in cases where the risk for volunteers is negligible; these are used to verify whether results from animal studies translate to humans. [ 2 ] Many types of measurements may be made in occupational toxicology. These include external measurements of exposure, the internal dose measured via tissues and bodily fluids, the "biologically effective dose" measuring the compound that has actually interacted with host biomolecules such as DNA and proteins, and measuring downstream effects of mutations, cytogenetic effects, and aberrant gene expression . [ 8 ] Experimentation may focus on the operation and regulation of biotransformation processes that may detoxify or activate toxins. These processes are subject to differences between individuals, which is studied through the field of toxicogenomics . [ 4 ] While the health hazards of substances used in the workplace have been recognized since antiquity , the first experimental studies of hazardous substances came in the late 19th and early 20th centuries, including the work of John Scott Haldane on mine gases , Karl Bernhard Lehmann on organic substances, and Ernest Kennaway on occupational skin cancer . [ 9 ] Biomarkers began to be used in occupational toxicology and epidemiology in the 1970s, and the 1990s showed increasing focus on molecular mechanisms such as identifying specific enzymes that interact with toxicants, and studying their variation across individuals. [ 8 ]
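To illustrate the dose–response relationships that occupational toxicology seeks to quantify, the following is a minimal sketch using a Hill-type curve. The functional form is a standard pharmacological model, and the parameter values (the half-maximal dose and the slope) are arbitrary assumptions chosen for the example, not data from any study cited here.

```python
# Illustrative only: a Hill-type dose-response curve. The half-maximal dose
# (ec50) and hill_slope values are arbitrary example numbers, not measurements.

def hill_response(dose, ec50=10.0, hill_slope=1.5, max_effect=1.0):
    # Fraction of the maximal effect produced at a given dose (same units as ec50).
    if dose <= 0:
        return 0.0
    return max_effect / (1.0 + (ec50 / dose) ** hill_slope)

for dose in (1, 5, 10, 20, 50):
    print(f"dose={dose:>3}  response={hill_response(dose):.2f}")
```

Fitting a curve of this kind to experimental data is what allows toxicologists to estimate quantities such as the dose producing half of the maximal effect, which can then inform occupational exposure limits and biological exposure indices.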
https://en.wikipedia.org/wiki/Occupational_toxicology
In computer science , the occurs check is a part of algorithms for syntactic unification . It causes unification of a variable V and a structure S to fail if S contains V . In theorem proving , unification without the occurs check can lead to unsound inference . For example, the Prolog goal X = f(X) will succeed, binding X to a cyclic structure which has no counterpart in the Herbrand universe . As another example, [ 1 ] without occurs-check, a resolution proof can be found for the non-theorem [ 2 ] (∀x∃y. p(x, y)) → (∃y∀x. p(x, y)): the negation of that formula has the conjunctive normal form p(X, f(X)) ∧ ¬p(g(Y), Y), with f and g denoting the Skolem functions for the first and second existential quantifiers, respectively. Without occurs check, the literals p(X, f(X)) and p(g(Y), Y) are unifiable, producing the refuting empty clause. Prolog implementations usually omit the occurs check for reasons of efficiency, which can lead to circular data structures and looping. By not performing the occurs check, the worst-case complexity of unifying a term t1 with a term t2 is reduced in many cases from O(size(t1) + size(t2)) to O(min(size(t1), size(t2))); in the particular, frequent case of variable-term unifications, runtime shrinks to O(1). [ nb 1 ] Modern implementations, based on Colmerauer's Prolog II, [ 4 ] [ 5 ] [ 6 ] [ 7 ] use rational tree unification to avoid looping. However, it is difficult to keep the time complexity linear in the presence of cyclic terms. Examples where Colmerauer's algorithm becomes quadratic [ 8 ] can be readily constructed, but refinement proposals exist. See the image for an example run of the unification algorithm given in Unification (computer science)#A unification algorithm , trying to solve the goal cons(x, y) =? cons(1, cons(x, cons(2, y))), however without the occurs check rule (named "check" there); applying rule "eliminate" instead leads to a cyclic graph (i.e. an infinite term) in the last step. ISO Prolog implementations have the built-in predicate unify_with_occurs_check/2 for sound unification but are free to use unsound or even looping algorithms when unification is invoked otherwise, provided the algorithm works correctly for all cases that are "not subject to occurs-check" (NSTO). [ 9 ] The built-in acyclic_term/1 serves to check the finiteness of terms. Implementations offering sound unification for all unifications are Qu-Prolog and Strawberry Prolog and (optionally, via a runtime flag): XSB , SWI-Prolog , CxProlog , Tau Prolog , Trealla Prolog and Scryer Prolog . A variety [ 10 ] [ 11 ] of optimizations can render sound unification feasible for common cases. W.P. Weijland (1990). "Semantics for Logic Programs without Occur Check". Theoretical Computer Science . 71: 155–174. doi : 10.1016/0304-3975(90)90194-m .
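For illustration only: the following sketch is not taken from any Prolog implementation and uses a simplified term representation in which variables are capitalized strings and compound terms are tuples of a functor followed by its arguments. It shows a recursive unification procedure with an occurs check; with the check in place, a goal such as X = f(X) fails rather than constructing a cyclic term.

```python
# Minimal sketch of syntactic unification with an occurs check (illustrative).
# Variables: strings starting with an uppercase letter. Atoms: other strings.
# Compound terms: tuples (functor, arg1, arg2, ...).

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Follow variable bindings until an unbound variable or a non-variable is reached.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    # True if variable v occurs anywhere inside term t under the substitution.
    t = walk(t, subst)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, arg, subst) for arg in t[1:])
    return False

def unify(a, b, subst=None):
    # Return an extended substitution, or None if unification fails.
    subst = {} if subst is None else subst
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return None if occurs(a, b, subst) else {**subst, a: b}
    if is_var(b):
        return None if occurs(b, a, subst) else {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and a[0] == b[0] and len(a) == len(b):
        for x, y in zip(a[1:], b[1:]):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

print(unify("X", ("f", "X")))                   # None: occurs check rejects X = f(X)
print(unify(("f", "X", "b"), ("f", "a", "Y")))  # {'X': 'a', 'Y': 'b'}
```

Omitting the occurs() test in the two variable-binding branches reproduces the behaviour described above: unify("X", ("f", "X")) would then succeed with a binding standing for an infinite, cyclic term.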
https://en.wikipedia.org/wiki/Occurs_check
The Ocean Biodiversity Information System ( OBIS ), formerly Ocean Biogeographic Information System , is a web-based access point to information about the distribution and abundance of living species in the ocean . It was developed as the information management component of the ten year Census of Marine Life (CoML) (2001-2010), but is not limited to CoML-derived data, and aims to provide an integrated view of all marine biodiversity data that may be made available to it on an open access basis by respective data custodians. According to its web site as at July 2018, OBIS "is a global open-access data and information clearing-house on marine biodiversity for science, conservation and sustainable development." 8 specific objectives are listed in the OBIS site, of which the leading item is to "Provide [the] world's largest scientific knowledge base on the diversity, distribution and abundance of all marine organisms in an integrated and standardized format". [ 1 ] Initial ideas for OBIS were developed at a CoML meeting on benthic (bottom-dwelling) ocean life in October 1997. Recommendations from this workshop led to a web site ( http://marine.rutgers.edu/OBIS ) at Rutgers in 1998 to demonstrate the initial OBIS concept. [ 2 ] An inaugural OBIS International Workshop was held on November 3–4, 1999 in Washington, DC, which led to scoping of the project and outreach to potential partners, with selected contributions published in a special issue of Oceanography magazine, [ 3 ] within which OBIS founder Dr. J. F. Grassle articulated the vision of OBIS as "an on-line, worldwide atlas for accessing, modeling and mapping marine biological data in a multidimensional geographic context." [ 4 ] In May 2000, US Government Agencies in the National Oceanographic Partnership Program together with the Alfred P. Sloan Foundation funded eight research projects to initiate OBIS. In May 2001, the US National Science Foundation funded Rutgers University to develop a global portal for OBIS. Also in 2001, an OBIS International Committee was formed and its first meeting was held in August 2001. [ 5 ] The production version of the OBIS Portal was launched at Rutgers University in 2002 as the web site http://www.iobis.org , serving 430,000 species-based georeferenced data records from 8 partner databases including fish records from FishBase , cephalopods from CephBase , corals from Biogeoinformatics of Hexacorals , mollusks from the Indo-Pacific Mollusc Database and more. [ 6 ] By May 2006, the OBIS Portal was able to access 9.5 million records of 59,000 species from 112 databases, [ 7 ] and by December 2010 (at the conclusion of the Census of Marine Life) provided access to 27.7 million records representing 167,000 taxon names. [ 8 ] As at July 2018, the OBIS website states that the system provides access to over 45 million observations of nearly 120,000 marine species (the reduced number of names cited being as a result of synonym resolution, i.e. reduction of taxa recorded under multiple names to a single accepted name), based on contributions from 500 institutions from 56 countries. 
[ 9 ] In 2009, OBIS was adopted as a project by the International Oceanographic Data and Information Exchange (IODE) programme of the Intergovernmental Oceanographic Commission (IOC) of UNESCO . In 2011, with the cessation of Sloan Foundation funding for the Rutgers-based secretariat and portal, an offer of hosting by the Flanders Marine Institute (VLIZ) in Ostend , Belgium was accepted to provide a long-term home for the system, and the OBIS secretariat moved from Rutgers University to the IOC Project Office for IODE in Ostend, from where OBIS is presently maintained and additional development is carried out. The web address also changed at that time. OBIS is thus now located in Ostend, in the same building which is also home to VLIZ. VLIZ maintains two taxonomic databases, the World Register of Marine Species and IRMNG , the Interim Register of Marine and Nonmarine Genera , both of which feed into taxonomic decisions used to control the display of species-based information in OBIS and also provide the taxonomic hierarchy via which OBIS content can be navigated. OBIS is currently under the direction of IODE with advice from a steering group, the IODE Steering Group for OBIS (SG-OBIS); operational activities are directed by an OBIS Executive Committee (OBIS-EC) with support from various OBIS Task Teams and ad-hoc OBIS project teams. The OBIS secretariat, hosted at the UNESCO/IOC project office for IODE in Ostend (Belgium), includes the OBIS project manager and data manager and, in addition to maintaining the OBIS system, also provides training and technical assistance to its data providers, guides new data standards and technical developments, and encourages international cooperation to foster the group benefits of the network. [ 10 ] Data available via OBIS cover all groups of organisms that have any association with marine or estuarine habitats, also including shorelines and the atmosphere above the ocean, such as marine vertebrates (fishes, marine mammals, turtles, seabirds, etc.); marine invertebrates (including zooplankton ); marine bacteria ; and marine plants (e.g. phytoplankton, seaweeds, mangroves). [ citation needed ] As available web technologies have developed, the OBIS Portal has been through a number of iterations since its inception in 2002. Initially the system retrieved remote data in real time in response to a user query and used the KGS Mapper to visualize the results. In 2004, centralized metadata indexing and caching were introduced, leading to faster and more reliable results, and the c-squares mapper was added to the options for data visualization. [ 11 ] In 2010, a full web-GIS-based system was introduced for the first time along with a new version of the web site, which resulted in considerably more detailed and flexible presentation of search results along with a number of new search options. [ 12 ] In April 2018, funding was announced to develop a new "2.0" version of OBIS with improved capabilities, [ 13 ] and it was released on 29 January 2019. [ 14 ] The website URL changed from iobis.org to obis.org. Since 2004, an international network of Regional OBIS Nodes has also been established; these nodes facilitate the connection of data sources in their region to the master OBIS data network and increasingly provide specialised services or views of OBIS data to users in their particular region. [ citation needed ]
https://en.wikipedia.org/wiki/Ocean_Biodiversity_Information_System
The Ocean Worlds Exploration Program ( OWEP ) is a NASA program [ 1 ] to explore ocean worlds in the outer Solar System that could possess subsurface oceans to assess their habitability and to seek biosignatures of simple extraterrestrial life . Prime targets include moons that harbor hidden oceans beneath a shell of ice: Europa , Enceladus , and Titan . A host of other bodies in the outer Solar System are inferred by a single type of observation or by theoretical modeling to have subsurface oceans. The US House Appropriations Committee approved the bill on May 20, 2015, and directed NASA to create the Ocean Worlds Exploration Program. [ 2 ] The "Roadmaps to Ocean Worlds" (ROW) was started in 2016, [ 3 ] [ 4 ] and was presented in January 2019. [ 5 ] The formal program is being implemented within the agency by supporting the Europa Clipper orbiter mission to Europa, [ 3 ] [ 6 ] and the Dragonfly mission to Titan. The program is also supporting concept studies for a proposed Europa Lander , [ 7 ] and concepts to explore the moon Triton . [ 8 ] [ 5 ] Amanda Hendrix and Terry A. Hurford are the co-leads of the NASA Roadmaps to Oceans World Group. [ 5 ] [ 9 ] The chief author of NASA's budget proposal is John Culberson , who was at the time the head of the science subcommittee in the House of Representatives . In Spring 2015 he presented a budget request, creating the possibility of an all-new NASA mission program. [ 11 ] [ 12 ] The House Appropriations Committee approved its version of the FY2016 House Appropriations Commerce-Justice-Science (CJS) bill on May 20, 2015. [ 6 ] Therefore, the Committee directed NASA to create the Ocean Worlds Exploration Program whose primary goal is to discover extant life on another world using a mix of Discovery , New Frontiers and Flagship class missions consistent with the recommendations of current and future Planetary Science Decadal Surveys . [ 3 ] In the FY2017 Budget Request, the committee recommended $348 million for "Outer Planets" and "Ocean Worlds," of which not less than $260 million is for the Europa Clipper orbiter and lander, with launch of the orbiter in 2025 [ 13 ] and the potential Europa Lander shortly after. [ 14 ] A 2017 technical analysis stated that the technical challenges are enormous, and that "Without a genuinely strategic program plan, the great promise of an OWEP [Ocean Worlds Exploration Program] is highly likely to remain unfulfilled." [ 15 ] The report noted that development of OWEP-enabling technologies must currently compete for priority with other Solar System objectives, which is not useful for strategic planning. The report recommends common, multi-mission technical infrastructure and secure funding to develop it. [ 15 ] The Roadmaps to Ocean Worlds (ROW) report was submitted and it was published in January 2019. [ 5 ] On Earth , itself an ocean world, liquid water is essential to life as we know it. A question is whether the dark, alien oceans of the outer Solar System could be habitable for simple life forms, and if so, what their biochemistry might be. [ 16 ] The goals of the Ocean Worlds Exploration Program are to "identify ocean worlds, characterize their oceans, evaluate their habitability, search for life, and ultimately understand any life we find." [ 5 ] Exploring these moons could help to answer the question of how life arose on Earth and whether it exists anywhere else in the Solar System. 
[ 17 ] It may also be possible to find pre-biotic chemistry occurring, which could provide clues to how life started on Earth. [ 18 ] Any life detected at the remote ocean worlds in the outer Solar System would likely have formed and evolved along an independent path from life on Earth, giving us a deeper understanding of the potential for life in the universe . [ 4 ] Oceanographers , biologists and astrobiologists are part of the team developing the strategy roadmap. [ 3 ] The planning also considers implementing planetary protection measures to avoid contaminating extraterrestrial habitable environments with resilient stowaway bacteria on landers. [ 3 ] [ 4 ] Ocean worlds identified in the Solar System so far with reasonable certainty are the major moons Europa , Enceladus , Titan , Ganymede , and Callisto . [ 5 ] Of these, Europa and Enceladus have the highest priority because their icy shells are thinner than the others (Enceladus’ is less than 10 km; Europa's is about 40 km) and there is some evidence their oceans are in contact with the rocky mantle , which could provide both energy and chemicals for life to form. [ 5 ] Enceladus' ice crust has fractures at the south pole that allow ice and gas from the ocean to escape to space, where it has been sampled by mass spectrometers aboard the Cassini Saturn orbiter with tantalizing results. [ 19 ] Titan's ocean is the deepest, at 50 to 100 km, and no evidence for active plumes or ice volcanism has been observed. Bodies such as Triton , Pluto , Ceres , Miranda , Ariel , and Dione are considered candidate ocean worlds, based on hints from limited spacecraft observations. [ 5 ] The Ocean Worlds Exploration Program (OWEP) is supporting the Europa Clipper orbiter mission to Europa, which is the first planned target of this program, to be launched in 2024. [ 3 ] [ 6 ] The second is the Dragonfly mission to Titan. [ 5 ] The program is also supporting concept studies for a proposed Europa Lander , [ 7 ] and a concept to explore the moon Triton with Trident , a mission selected as a finalist in NASA's Discovery Program in 2020. [ 8 ] [ 5 ] A number of astrobiology mission concepts targeting water worlds in the outer Solar System have been proposed.
https://en.wikipedia.org/wiki/Ocean_Worlds_Exploration_Program
Ocean acidification is the ongoing decrease in the pH of the Earth's ocean . Between 1950 and 2020, the average pH of the ocean surface fell from approximately 8.15 to 8.05. [ 2 ] Carbon dioxide emissions from human activities are the primary cause of ocean acidification, with atmospheric carbon dioxide (CO 2 ) levels exceeding 422 ppm (as of 2024). [ 3 ] CO 2 from the atmosphere is absorbed by the oceans, where it reacts with water to produce carbonic acid ( H 2 CO 3 ), which dissociates into a bicarbonate ion ( HCO − 3 ) and a hydrogen ion ( H + ). The presence of free hydrogen ions ( H + ) lowers the pH of the ocean, increasing acidity (this does not mean that seawater is acidic yet; it is still alkaline , with a pH higher than 8). Marine calcifying organisms , such as mollusks and corals , are especially vulnerable because they rely on calcium carbonate to build shells and skeletons. [ 4 ] A decrease in pH of 0.1 represents a 26% increase in hydrogen ion concentration in the world's oceans (the pH scale is logarithmic, so a change of one pH unit is equivalent to a tenfold change in hydrogen ion concentration). Sea-surface pH and carbonate saturation states vary depending on ocean depth and location. Colder and higher latitude waters are capable of absorbing more CO 2 . This can cause acidity to rise, lowering the pH and carbonate saturation levels in these areas. There are several other factors that influence the atmosphere-ocean CO 2 exchange, and thus local ocean acidification. These include ocean currents and upwelling zones, proximity to large continental rivers, sea ice coverage, and atmospheric exchange with nitrogen and sulfur from fossil fuel burning and agriculture . [ 5 ] [ 6 ] [ 7 ] A lower ocean pH has a range of potentially harmful effects for marine organisms . Scientists have observed, for example, reduced calcification, lowered immune responses , and reduced energy for basic functions such as reproduction. [ 8 ] Ocean acidification can impact marine ecosystems that provide food and livelihoods for many people. About one billion people are wholly or partially dependent on the fishing, tourism, and coastal management services provided by coral reefs . Ongoing acidification of the oceans may therefore threaten food chains linked with the oceans. [ 9 ] [ 10 ] The only solution that addresses the root cause of ocean acidification is reducing carbon dioxide emissions. This is one of the main objectives of climate change mitigation measures. The removal of carbon dioxide from the atmosphere would also help to reverse ocean acidification. In addition, there are some specific ocean-based mitigation methods , for example ocean alkalinity enhancement and enhanced weathering . These strategies are under investigation, but generally have a low technology readiness level and many risks. [ 11 ] [ 12 ] [ 13 ] Ocean acidification has happened before in Earth's geologic history. [ 14 ] The resulting ecological collapse in the oceans had long-lasting effects on the global carbon cycle and climate . In 2021, atmospheric carbon dioxide (CO 2 ) levels of around 415 ppm were around 50% higher than preindustrial concentrations. [ 16 ] According to the National Oceanic and Atmospheric Administration in 2023, atmospheric CO 2 levels have risen from approximately 280 parts per million (ppm) in the pre-industrial era to over 410 ppm today, primarily due to human activities such as fossil fuel combustion and deforestation. [ 17 ]
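Because the pH scale is logarithmic, the percentage changes in hydrogen ion concentration quoted above can be checked directly from the pH values. A minimal Python sketch, using only the surface pH figures cited in this article (the calculation itself is standard chemistry, not taken from the cited sources):

# Hydrogen ion concentration (mol/L) from pH: [H+] = 10**(-pH)
def h_concentration(ph: float) -> float:
    return 10 ** (-ph)

ph_1950, ph_2020 = 8.15, 8.05   # average surface-ocean pH values cited above
increase = h_concentration(ph_2020) / h_concentration(ph_1950) - 1
print(f"[H+] increase for a 0.1 pH drop: {increase:.1%}")   # ~25.9%, i.e. about 26%

# A change of one full pH unit is a tenfold change in [H+]
print(round(h_concentration(7.0) / h_concentration(8.0), 2))   # 10.0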
The current elevated levels and rapid growth rates are unprecedented in the past 55 million years of the geological record. The sources of this excess CO 2 are clearly established as human driven: they include anthropogenic fossil fuel, industrial, and land-use/land-change emissions. One such source is fossil fuels, which are burned for energy; their combustion releases CO 2 into the atmosphere and is a significant contributor to the increasing levels of CO 2 in the Earth's atmosphere. [ 18 ] The ocean acts as a carbon sink for anthropogenic CO 2 and takes up roughly a quarter of total anthropogenic CO 2 emissions. [ 19 ] However, the additional CO 2 in the ocean results in a wholesale shift in seawater acid-base chemistry toward more acidic, lower pH conditions and lower saturation states for carbonate minerals used in many marine organism shells and skeletons. [ 19 ] Accumulated since 1850, the ocean sink holds some 175 ± 35 gigatons of carbon, with more than two-thirds of this amount (120 Gt C) having been taken up by the global ocean since 1960. Over the historical period, the ocean sink increased in pace with the exponential anthropogenic emissions increase. From 1850 until 2022, the ocean absorbed 26% of total anthropogenic emissions. [ 16 ] Emissions during the period 1850–2021 amounted to 670 ± 65 gigatons of carbon and were partitioned among the atmosphere (41%), ocean (26%), and land (31%). [ 16 ] The carbon cycle describes the fluxes of carbon dioxide ( CO 2 ) between the oceans, terrestrial biosphere , lithosphere , [ 20 ] and atmosphere . The carbon cycle involves both organic compounds such as cellulose and inorganic carbon compounds such as carbon dioxide , the carbonate ion , and the bicarbonate ion ; the inorganic compounds are collectively referred to as dissolved inorganic carbon (DIC). These inorganic compounds are particularly significant for ocean acidification, as they include the many forms of dissolved CO 2 present in the Earth's oceans. [ 21 ] When CO 2 dissolves, it reacts with water to form a balance of ionic and non-ionic chemical species: dissolved free carbon dioxide ( CO 2(aq) ), carbonic acid ( H 2 CO 3 ), bicarbonate ( HCO − 3 ) and carbonate ( CO 2− 3 ). The ratio of these species depends on factors such as seawater temperature , pressure and salinity (as shown in a Bjerrum plot ). These different forms of dissolved inorganic carbon are transferred from an ocean's surface to its interior by the ocean's solubility pump . The resistance of an area of ocean to absorbing atmospheric CO 2 is known as the Revelle factor . The ocean's chemistry is changing due to the uptake of anthropogenic carbon dioxide (CO 2 ). [ 5 ] [ 22 ] : 395 Ocean pH, carbonate ion concentrations ([CO 3 2− ]), and calcium carbonate mineral saturation states (Ω) have been declining as a result of the uptake of approximately 30% of the anthropogenic carbon dioxide emissions over the past 270 years (since around 1750). This process, commonly referred to as "ocean acidification", is making it harder for marine calcifiers to build a shell or skeletal structure, endangering coral reefs and the broader marine ecosystems. [ 5 ] Ocean acidification has been called the "evil twin of global warming " and "the other CO 2 problem". [ 23 ] [ 24 ] Increased ocean temperatures and oxygen loss act concurrently with ocean acidification and constitute the "deadly trio" of climate change pressures on the marine environment. [ 25 ]
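The pH dependence of the DIC speciation described above, which underlies a Bjerrum plot, can be sketched from the two dissociation constants of carbonic acid. In the Python sketch below the constants are rough illustrative values for surface seawater, assumed for the example rather than taken from the cited sources:

# Fraction of dissolved inorganic carbon present as CO2(aq)+H2CO3, HCO3-, and CO3 2-
# as a function of pH, using illustrative apparent dissociation constants for seawater.
K1 = 1.4e-6   # first apparent dissociation constant (assumed, roughly pK1 = 5.86)
K2 = 1.1e-9   # second apparent dissociation constant (assumed, roughly pK2 = 8.96)

def dic_fractions(ph: float):
    h = 10 ** (-ph)
    denom = h * h + K1 * h + K1 * K2
    co2 = h * h / denom          # dissolved CO2 plus carbonic acid
    hco3 = K1 * h / denom        # bicarbonate
    co3 = K1 * K2 / denom        # carbonate
    return co2, hco3, co3

for ph in (7.8, 8.05, 8.15):
    co2, hco3, co3 = dic_fractions(ph)
    print(f"pH {ph}: CO2 {co2:.1%}, HCO3- {hco3:.1%}, CO3 2- {co3:.1%}")

Run with these assumed constants, the sketch reproduces the qualitative behaviour described in the text: bicarbonate dominates at seawater pH, and the carbonate ion fraction shrinks as pH falls.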
The impacts of this will be most severe for coral reefs and other shelled marine organisms, [ 26 ] [ 27 ] as well as those populations that depend on the ecosystem services they provide. Dissolving CO 2 in seawater increases the hydrogen ion ( H + ) concentration in the ocean, and thus decreases ocean pH, as follows: CO2 + H2O ⇌ H2CO3 ⇌ HCO3− + H+. [ 28 ] In shallow coastal and shelf regions, a number of factors interplay to affect air-ocean CO 2 exchange and resulting pH change. [ 29 ] [ 30 ] These include biological processes, such as photosynthesis and respiration, [ 31 ] as well as water upwelling. [ 32 ] Also, ecosystem metabolism in freshwater sources reaching coastal waters can lead to large, but local, pH changes. [ 29 ] Freshwater bodies also appear to be acidifying, although this is a more complex and less obvious phenomenon. [ 33 ] [ 34 ] The absorption of CO 2 from the atmosphere does not affect the ocean's alkalinity . [ 35 ] : 2252 This is relevant because alkalinity is the capacity of water to resist acidification . [ 36 ] Ocean alkalinity enhancement has been proposed as one option to add alkalinity to the ocean and therefore buffer against pH changes. Changes in ocean chemistry can have extensive direct and indirect effects on organisms and their habitats. One of the most important repercussions of increasing ocean acidity relates to the production of shells out of calcium carbonate ( CaCO 3 ). [ 4 ] This process is called calcification and is important to the biology and survival of a wide range of marine organisms. Calcification involves the precipitation of dissolved ions into solid CaCO 3 structures by many marine organisms, such as coccolithophores , foraminifera , crustaceans , mollusks , etc. After they are formed, these CaCO 3 structures are vulnerable to dissolution unless the surrounding seawater contains saturating concentrations of carbonate ions ( CO 2− 3 ). Very little of the extra carbon dioxide that is added into the ocean remains as dissolved carbon dioxide. The majority dissociates into additional bicarbonate and free hydrogen ions. The increase in hydrogen is larger than the increase in bicarbonate, [ 37 ] creating an imbalance in the reaction: HCO3− ⇌ CO32− + H+. To maintain chemical equilibrium, some of the carbonate ions already in the ocean combine with some of the hydrogen ions to make further bicarbonate. Thus the ocean's concentration of carbonate ions is reduced, removing an essential building block for marine organisms to build shells, or calcify: Ca2+ + CO32− ⇌ CaCO3. The increase in concentrations of dissolved carbon dioxide and bicarbonate, and reduction in carbonate, are shown in the Bjerrum plot . Disruption of the food chain is also a possible effect as many marine organisms rely on calcium carbonate-based organisms at the base of the food chain for food and habitat. This can have detrimental effects throughout the food web and could lead to a decline in the availability of fish stocks, which would have an impact on human livelihoods. [ 38 ] The saturation state (known as Ω) of seawater for a mineral is a measure of the thermodynamic potential for the mineral to form or to dissolve, and for calcium carbonate is described by the following equation: Ω = [Ca2+][CO32−] / Ksp. Here Ω is the product of the concentrations (or activities ) of the reacting ions that form the mineral (Ca 2+ and CO 3 2− ), divided by the apparent solubility product at equilibrium (K sp ), that is, when the rates of precipitation and dissolution are equal. [ 40 ]
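As a small illustration of the saturation-state definition just given, the following Python sketch evaluates Ω for aragonite from an assumed calcium concentration and an assumed stoichiometric solubility product; the numbers are placeholders chosen only to show how Ω responds when the carbonate ion concentration falls, not values from the cited literature:

# Saturation state: omega = [Ca2+][CO3 2-] / Ksp (concentrations in mol/kg).
# Ksp below is an assumed illustrative value for aragonite in surface seawater.
KSP_ARAGONITE = 6.5e-7

def omega(ca: float, co3: float, ksp: float = KSP_ARAGONITE) -> float:
    return (ca * co3) / ksp

ca = 0.0103                               # typical seawater calcium, mol/kg (assumed)
for co3 in (2.0e-4, 1.5e-4, 0.6e-4):      # falling carbonate ion concentration
    w = omega(ca, co3)
    state = "supersaturated (shells stable)" if w > 1 else "undersaturated (shells dissolve)"
    print(f"[CO3 2-] = {co3:.1e} mol/kg -> omega = {w:.2f}, {state}")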
In seawater, a dissolution boundary is formed as a result of temperature, pressure, and depth, and is known as the saturation horizon. [ 4 ] Above this saturation horizon, Ω has a value greater than 1, and CaCO 3 does not readily dissolve. Most calcifying organisms live in such waters. [ 4 ] Below this depth, Ω has a value less than 1, and CaCO 3 will dissolve. The carbonate compensation depth is the ocean depth at which carbonate dissolution balances the supply of carbonate to the sea floor; sediment below this depth will therefore be devoid of calcium carbonate. [ 41 ] Increasing CO 2 levels, and the resulting lower pH of seawater, decrease the concentration of CO 3 2− and the saturation state of CaCO 3 , thereby increasing CaCO 3 dissolution. Calcium carbonate occurs in two common polymorphs (crystalline forms): aragonite and calcite . Aragonite is much more soluble than calcite, so the aragonite saturation horizon, and the aragonite compensation depth, are always nearer to the surface than the calcite saturation horizon. [ 4 ] This also means that those organisms that produce aragonite may be more vulnerable to changes in ocean acidity than those that produce calcite. [ 42 ] Ocean acidification and the resulting decrease in carbonate saturation states raise the saturation horizons of both forms closer to the surface. [ 4 ] This decrease in saturation state is one of the main factors leading to decreased calcification in marine organisms because the inorganic precipitation of CaCO 3 is directly proportional to its saturation state and calcifying organisms exhibit stress in waters with lower saturation states. [ 43 ] Large quantities of water undersaturated in aragonite are already upwelling close to the Pacific continental shelf area of North America, from Vancouver to Northern California . [ 44 ] These continental shelves play an important role in marine ecosystems, since most marine organisms live or are spawned there. Other shelf areas may be experiencing similar effects. [ 44 ] At depths of thousands of meters in the ocean, calcium carbonate shells begin to dissolve as increasing pressure and decreasing temperature shift the chemical equilibria controlling calcium carbonate precipitation. [ 45 ] The depth at which this occurs is known as the carbonate compensation depth . Ocean acidification will increase such dissolution and shallow the carbonate compensation depth on timescales of tens to hundreds of years. [ 45 ] Zones of downwelling are being affected first. [ 46 ] In the North Pacific and North Atlantic, saturation states are also decreasing (the depth of saturation is getting more shallow). [ 22 ] : 396 Ocean acidification is progressing in the open ocean as the CO 2 travels to greater depths as a result of ocean mixing. In the open ocean, this causes carbonate compensation depths to become more shallow, meaning that dissolution of calcium carbonate will occur below those depths. In the North Pacific these carbonate saturation depths are shallowing at a rate of 1–2 m/year. [ 22 ] : 396 It is expected that ocean acidification in the future will lead to a significant decrease in the burial of carbonate sediments for several centuries, and even the dissolution of existing carbonate sediments. [ 47 ] Between 1950 and 2020, the average pH value of the ocean surface is estimated to have decreased from approximately 8.15 to 8.05. 
[ 2 ] This represents an increase of around 26% in hydrogen ion concentration in the world's oceans (the pH scale is logarithmic, so a change of one pH unit is equivalent to a tenfold change in hydrogen ion concentration). [ 50 ] For example, in the 15-year period 1995–2010 alone, acidity increased by 6 percent in the upper 100 meters of the Pacific Ocean from Hawaii to Alaska. [ 51 ] The IPCC Sixth Assessment Report in 2021 stated that "present-day surface pH values are unprecedented for at least 26,000 years and current rates of pH change are unprecedented since at least that time". [ 52 ] : 76 The pH value of the ocean interior has declined over the last 20–30 years everywhere in the global ocean. [ 52 ] : 76 The report also found that "pH in open ocean surface water has declined by about 0.017 to 0.027 pH units per decade since the late 1980s". [ 53 ] : 716 The rate of decline differs by region. This is due to complex interactions between different types of forcing mechanisms: [ 53 ] : 716 "In the tropical Pacific, its central and eastern upwelling zones exhibited a faster pH decline of minus 0.022 to minus 0.026 pH unit per decade." This is thought to be "due to increased upwelling of CO 2 -rich sub-surface waters in addition to anthropogenic CO 2 uptake." [ 53 ] : 716 Some regions exhibited a slower acidification rate: a pH decline of minus 0.010 to minus 0.013 pH unit per decade has been observed in warm pools in the western tropical Pacific. [ 53 ] : 716 The rate at which ocean acidification will occur may be influenced by the rate of surface ocean warming , because warm waters will not absorb as much CO 2 . [ 54 ] Therefore, greater seawater warming could limit CO 2 absorption and lead to a smaller change in pH for a given increase in CO 2 . [ 54 ] The difference in changes in temperature between basins is one of the main reasons for the differences in acidification rates in different localities. Current rates of ocean acidification have been likened to the greenhouse event at the Paleocene–Eocene boundary (about 56 million years ago), when surface ocean temperatures rose by 5–6 °C . In that event, surface ecosystems experienced a variety of impacts, but bottom-dwelling organisms in the deep ocean actually experienced a major extinction. [ 55 ] Currently, the rate of carbon addition to the atmosphere-ocean system is about ten times the rate that occurred at the Paleocene–Eocene boundary. [ 56 ] Extensive observational systems are now in place or being built for monitoring seawater CO 2 chemistry and acidification for both the global open ocean and some coastal systems. [ 19 ] Ocean acidification has occurred previously in Earth's history. [ 14 ] It happened during the Capitanian mass extinction , [ 65 ] [ 66 ] [ 67 ] at the end-Permian extinction , [ 68 ] [ 69 ] [ 70 ] during the end-Triassic extinction , [ 71 ] [ 72 ] [ 73 ] and during the Cretaceous–Palaeogene extinction event . [ 74 ] Three of the big five mass extinction events in the geologic past were associated with a rapid increase in atmospheric carbon dioxide, probably due to volcanism and/or thermal dissociation of marine gas hydrates . [ 75 ] Elevated CO 2 levels impacted biodiversity. [ 76 ] Decreased CaCO 3 saturation due to seawater uptake of volcanogenic CO 2 has been suggested as a possible kill mechanism during the marine mass extinction at the end of the Triassic . 
[ 71 ] The end-Triassic biotic crisis is still the most well-established example of a marine mass extinction due to ocean acidification, because (a) carbon isotope records suggest enhanced volcanic activity that decreased carbonate sedimentation, reducing the carbonate compensation depth and the carbonate saturation state, and this coincided precisely with a marine extinction in the stratigraphic record, [ 73 ] [ 72 ] [ 77 ] and (b) there was pronounced selectivity of the extinction against organisms with thick aragonitic skeletons, [ 73 ] [ 78 ] [ 79 ] which is predicted from experimental studies. [ 80 ] Ocean acidification has also been suggested as one cause of the end-Permian mass extinction [ 69 ] [ 68 ] and the end-Cretaceous crisis. [ 74 ] Overall, multiple climatic stressors, including ocean acidification, were likely the cause of geologic extinction events. [ 75 ] The most notable example of ocean acidification in the geologic past is the Paleocene–Eocene Thermal Maximum (PETM), which occurred approximately 56 million years ago when massive amounts of carbon entered the ocean and atmosphere, and led to the dissolution of carbonate sediments across many ocean basins. [ 76 ] Relatively new geochemical methods of testing for pH in the past indicate the pH dropped 0.3 units across the PETM. [ 81 ] [ 82 ] One study that solves the marine carbonate system for saturation state shows that it may not change much over the PETM, suggesting the rate of carbon release at our best geological analogy was much slower than human-induced carbon emissions. However, stronger proxy methods to test for saturation state are needed to assess how much this pH change may have affected calcifying organisms. Importantly, the rate of change in ocean acidification is much higher than in the geological past. This faster change prevents organisms from gradually adapting, and prevents climate cycle feedbacks from taking effect to mitigate ocean acidification. Ocean acidification is now on a path to reach lower pH levels than at any other point in the last 300 million years. [ 83 ] [ 74 ] The rate of ocean acidification (i.e. the rate of change in pH value) is also estimated to be unprecedented over that same time scale. [ 84 ] [ 14 ] These expected changes are considered unprecedented in the geological record. [ 85 ] [ 86 ] [ 87 ] In combination with other ocean biogeochemical changes, this drop in pH value could undermine the functioning of marine ecosystems and disrupt the provision of many goods and services associated with the ocean, beginning as early as 2100. [ 88 ] The extent of further ocean chemistry changes, including ocean pH, will depend on climate change mitigation efforts taken by nations and their governments. [ 52 ] Different scenarios of projected global socioeconomic changes are modelled using the Shared Socioeconomic Pathways (SSP) scenarios. Under a very high emission scenario (SSP5-8.5) , model projections estimate that surface ocean pH could decrease by as much as 0.44 units by the end of this century, compared to the end of the 19th century. [ 89 ] : 608 This would mean a pH as low as about 7.7, and represents a further increase in H+ concentrations of two to four times beyond the increase to date. The full ecological consequences of the changes in calcification due to ocean acidification are complex, but it appears likely that many calcifying species will be adversely affected by ocean acidification. 
[ 19 ] [ 22 ] : 413 Increasing ocean acidification makes it more difficult for shell-accreting organisms to access carbonate ions, essential for the production of their hard exoskeletal shell. [ 90 ] Oceanic calcifying organisms span the food chain from autotrophs to heterotrophs and include organisms such as coccolithophores , corals , foraminifera , echinoderms , crustaceans and molluscs . [ 88 ] [ 91 ] Overall, all marine ecosystems on Earth will be exposed to changes in acidification and several other ocean biogeochemical changes. [ 92 ] Ocean acidification may force some organisms to reallocate resources away from productive endpoints in order to maintain calcification. [ 93 ] For example, the oyster Magallana gigas is recognized to experience metabolic changes alongside altered calcification rates due to energetic tradeoffs resulting from pH imbalances. [ 94 ] Under normal conditions, calcite and aragonite are stable in surface waters since the seawater there is supersaturated with respect to these carbonate minerals. However, as ocean pH falls, the concentration of carbonate ions also decreases. Calcium carbonate thus becomes undersaturated, and structures made of calcium carbonate are vulnerable to calcification stress and dissolution. [ 95 ] In particular, studies show that corals, [ 96 ] [ 97 ] coccolithophores, [ 91 ] [ 29 ] [ 98 ] coralline algae, [ 99 ] foraminifera, [ 100 ] shellfish and pteropods [ 101 ] experience reduced calcification or enhanced dissolution when exposed to elevated CO 2 . Even with active marine conservation practices, it may be impossible to bring back many previous shellfish populations. [ 102 ] Some studies have found different responses to ocean acidification: coccolithophore calcification and photosynthesis both increasing under elevated atmospheric pCO 2 , [ 103 ] an equal decline in primary production and calcification in response to elevated CO 2 , [ 104 ] or the direction of the response varying between species. [ 105 ] Similarly, the sea star Pisaster ochraceus shows enhanced growth in waters with increased acidity. [ 106 ] Reduced calcification from ocean acidification may affect the ocean's biologically driven sequestration of carbon from the atmosphere to the ocean interior and seafloor sediment , weakening the so-called biological pump . [ 74 ] Seawater acidification could also reduce the size of Antarctic phytoplankton, making them less effective at storing carbon. [ 107 ] Such changes are being increasingly studied and synthesized through the use of physiological frameworks, including the Adverse Outcome Pathway (AOP) framework. [ 94 ] A coccolithophore is a unicellular , eukaryotic phytoplankton ( alga ). Understanding calcification changes in coccolithophores may be particularly important because a decline in the coccolithophores may have secondary effects on climate: it could contribute to global warming by decreasing the Earth's albedo via their effects on oceanic cloud cover. [ 108 ] A study in 2008 examined a sediment core from the North Atlantic and found that the species composition of coccolithophorids remained unchanged over a 224-year period (1780 to 2004), but the average coccolith mass had increased by 40% during the same period. [ 103 ] Warm water corals are clearly in decline, with losses of 50% over the last 30–50 years due to multiple threats from ocean warming, ocean acidification, pollution and physical damage from activities such as fishing, and these pressures are expected to intensify. 
[ 109 ] [ 22 ] : 416 The fluid in the internal compartments (the coelenteron) where corals grow their exoskeleton is also extremely important for calcification. When the saturation state of aragonite in the external seawater is at ambient levels, the corals will grow their aragonite crystals rapidly in their internal compartments, hence their exoskeleton grows rapidly. If the saturation state of aragonite in the external seawater is lower than the ambient level, the corals have to work harder to maintain the right balance in the internal compartment. When that happens, the process of growing the crystals slows down, and this slows the rate at which their exoskeleton grows. Depending on the aragonite saturation state in the surrounding water, the corals may halt growth because pumping aragonite into the internal compartment will not be energetically favorable. [ 110 ] Under the current progression of carbon emissions, around 70% of North Atlantic cold-water corals will be living in corrosive waters by 2050–60. [ 111 ] Acidified conditions primarily reduce the coral's capacity to build dense exoskeletons, rather than affecting the linear extension of the exoskeleton. The density of some species of corals could be reduced by over 20% by the end of this century. [ 112 ] An in situ experiment conducted on a 400 m 2 patch of the Great Barrier Reef , in which the seawater CO 2 level was decreased (pH raised) to near the preindustrial value, showed a 7% increase in net calcification. [ 113 ] A similar experiment, in which the in situ seawater CO 2 level was raised (pH lowered) to a level expected soon after 2050, found that net calcification decreased by 34%. [ 114 ] However, a field study of coral reefs in Queensland and Western Australia from 2007 to 2012 found that corals are more resistant to the environmental pH changes than previously thought, due to internal homeostasis regulation; this makes thermal change ( marine heatwaves ), which leads to coral bleaching , rather than acidification, the main factor for coral reef vulnerability due to climate change. [ 115 ] In some places carbon dioxide bubbles out from the sea floor, locally changing the pH and other aspects of the chemistry of the seawater. Studies of these carbon dioxide seeps have documented a variety of responses by different organisms. [ 116 ] Coral reef communities located near carbon dioxide seeps are of particular interest because of the sensitivity of some coral species to acidification. In Papua New Guinea , declining pH caused by carbon dioxide seeps is associated with declines in coral species diversity. [ 117 ] However, in Palau carbon dioxide seeps are not associated with reduced species diversity of corals, although bioerosion of coral skeletons is much higher at low pH sites. Pteropods and brittle stars both form the base of the Arctic food webs, and both are seriously damaged by acidification. Pteropod shells dissolve with increasing acidification, and the brittle stars lose muscle mass when re-growing appendages . [ 118 ] To create shells, pteropods require aragonite, which is produced from carbonate ions and dissolved calcium and strontium. Pteropods are severely affected because increasing acidification levels have steadily decreased the amount of water supersaturated with carbonate. [ 119 ] The degradation of organic matter in Arctic waters has amplified ocean acidification; some Arctic waters are already undersaturated with respect to aragonite. 
[ 120 ] [ 121 ] [ 122 ] Brittle star eggs die within a few days when exposed to expected conditions resulting from Arctic acidification. [ 123 ] Similarly, when larvae of a temperate brittle star , a relative of the common sea star , were exposed in experiments to pH reduced by 0.2 to 0.4 units, fewer than 0.1 percent survived more than eight days. [ 88 ] Aside from the slowing and/or reversal of calcification, organisms may suffer other adverse effects, either indirectly through negative impacts on food resources, or directly as reproductive or physiological effects. [ 4 ] For example, the elevated oceanic levels of CO 2 may produce CO 2 -induced acidification of body fluids, known as hypercapnia . [ 125 ] Increasing acidity has been observed to reduce metabolic rates in jumbo squid [ 126 ] and depress the immune responses of blue mussels. [ 127 ] Atlantic longfin squid eggs took longer to hatch in acidified water, and the squid's statolith was smaller and malformed in animals placed in sea water with a lower pH. [ 128 ] However, these studies are ongoing and there is not yet a full understanding of these processes in marine organisms or ecosystems . [ 129 ] Another potential route to ecosystem impacts is through bioacoustics . This may occur as ocean acidification can alter the acoustic properties of seawater, allowing sound to propagate further, and increasing ocean noise. [ 130 ] This impacts all animals that use sound for echolocation or communication . [ 131 ] Another possible effect would be an increase in harmful algal bloom events, which could contribute to the accumulation of toxins ( domoic acid , brevetoxin , saxitoxin ) in small organisms such as anchovies and shellfish , in turn increasing occurrences of amnesic shellfish poisoning , neurotoxic shellfish poisoning and paralytic shellfish poisoning . [ 132 ] Although algal blooms can be harmful, other photosynthetic organisms may benefit from increased levels of carbon dioxide; seagrasses in particular are expected to benefit. [ 106 ] Research found that as seagrasses increased their photosynthetic activity, the calcification rates of calcifying algae rose, likely because localized photosynthetic activity absorbed carbon dioxide and elevated local pH. [ 106 ] Ocean acidification can also have effects on marine fish larvae . It affects their olfactory systems, which are a crucial part of their early development. Orange clownfish larvae mostly live on oceanic reefs that are surrounded by vegetative islands [ clarification needed ] . [ 115 ] Larvae are known to use their sense of smell to detect the differences between reefs surrounded by vegetative islands and reefs not surrounded by vegetative islands. [ 115 ] Clownfish larvae need to be able to distinguish between these two destinations to be able to find a suitable area for their growth. Another use for marine fish olfactory systems is to distinguish between their parents and other adult fish, in order to avoid inbreeding. In an experimental aquarium facility, clownfish were sustained in non-manipulated seawater with pH 8.15 ± 0.07, which is similar to our current ocean's pH. [ 115 ] To test for effects of different pH levels, the seawater was modified to two other pH levels, which corresponded with climate change models that predict future atmospheric CO 2 levels. [ 115 ] For the year 2100, the model projects possible CO 2 levels of 1,000 ppm, which corresponds to a pH of 7.8 ± 0.05. 
This experiment showed that when larvae are exposed to a pH of 7.8 ± 0.05 their reaction to environmental cues differs drastically from their reaction to cues at pH equal to current ocean levels. [ 115 ] At pH 7.6 ± 0.05, larvae had no reaction to any type of cue. However, a meta-analysis published in 2022 found that the effect sizes of published studies testing for ocean acidification effects on fish behavior have declined by an order of magnitude over the past decade, and have been negligible for the past five years. [ 133 ] Eel embryos, specifically those of the European eel , a "critically endangered" species [ 134 ] that is nonetheless important [ clarification needed ] in aquaculture, are also being affected by ocean acidification. Although they spend most of their lives in fresh water, usually in rivers, streams, or estuaries, they go to spawn and die in the Sargasso Sea . It is there that European eels experience the effects of acidification during one of their key life stages. Fish embryos and larvae are usually more sensitive to pH changes than adults, as organs for pH regulation are not fully developed. [ 135 ] Because of this, European eel embryos are more vulnerable to changes in pH in the Sargasso Sea. A study of the European eel in the Sargasso Sea was conducted in 2021 to analyze the specific effects of ocean acidification on embryos. The study found that exposure to predicted end-of-century ocean pCO 2 conditions may affect normal development of this species in nature during sensitive early life history stages with limited physiological response capacities, while extreme acidification would negatively influence embryonic survival and development under hatchery conditions. [ 136 ] There is a substantial body of research showing that a combination of ocean acidification and elevated ocean temperature has a compounded effect on marine life and the ocean environment. This effect far exceeds the individual harmful impact of either. [ 139 ] In addition, ocean warming, along with increased productivity of phytoplankton from higher CO 2 levels, exacerbates ocean deoxygenation . Deoxygenation of ocean waters is an additional stressor on marine organisms that increases ocean stratification, thereby limiting nutrients over time and reducing biological gradients. [ 140 ] [ 141 ] Meta-analyses have quantified the direction and magnitude of the harmful effects of combined ocean acidification, warming and deoxygenation on the ocean. [ 142 ] [ 143 ] These meta-analyses have been further tested by mesocosm studies that simulated the interaction of these stressors and found a catastrophic effect on the marine food web: thermal stress more than negates any increase in productivity from primary producers to herbivores due to elevated CO 2 . [ 144 ] [ 145 ] The increase of ocean acidity decelerates the rate of calcification in salt water, leading to smaller and slower-growing coral reefs, which support approximately 25% of marine life. [ 146 ] [ 147 ] Impacts are far-reaching, from fisheries and coastal environments down to the deepest depths of the ocean. [ 19 ] The increase in ocean acidity is not only killing the coral, but also the highly diverse populations of marine inhabitants that coral reefs support. [ 148 ] The threat of acidification includes a decline in commercial fisheries and the coast-based tourism industry . 
Several ocean goods and services are likely to be undermined by future ocean acidification, potentially affecting the livelihoods of some 400 to 800 million people, depending upon the greenhouse gas emission scenario . [ 88 ] Some 1 billion people are completely or partially dependent on the fishing, tourism, and coastal management services provided by coral reefs. Ongoing acidification of the oceans may therefore threaten future food chains linked with the oceans. [ 9 ] [ 10 ] In the Arctic, commercial fisheries are threatened because acidification harms calcifying organisms which form the base of the Arctic food webs (pteropods and brittle stars, see above). Acidification threatens Arctic food webs from the base up. Arctic food webs are considered simple, meaning there are few steps in the food chain from small organisms to larger predators. For example, pteropods are "a key prey item of a number of higher predators – larger plankton, fish, seabirds, whales". [ 149 ] Both pteropods and brittle stars serve as a substantial food source, and their removal from the simple food web would pose a serious threat to the whole ecosystem. The effects on the calcifying organisms at the base of the food webs could potentially destroy fisheries. The shellfish industry is an important part of the United Kingdom economy. [ 150 ] In 2013, the shellfish industry contributed 37% of total landings by value. [ 150 ] England and Scotland are the highest producers of shellfish within the United Kingdom. [ 150 ] Fishers in England and Scotland have been found to catch about 66,000 t and 61,000 t per year, respectively. [ 150 ] In terms of value, the wild-captured shellfish are worth 203 million pounds per year. [ 150 ] However, ocean acidification is causing a decrease in the growth of many shellfish species. [ 150 ] This is causing a drastic loss to the United Kingdom economy. [ 150 ] It is predicted that by 2100 there will be an economy-wide loss from reduced shellfish production in the United Kingdom. [ 150 ] The direct potential loss ranges from 14 to 28 percent of fishery output. [ 150 ] That is a total loss of about 23 to 88 million pounds. [ 150 ] The financial losses vary regionally due to different patterns of wild-caught shellfish and the exploitation of species with differing sensitivities to ocean acidification. [ 150 ] Shellfish resources in the United Kingdom will require regional, national, or international solutions to reduce the impacts of ocean acidification on shellfish species and stabilize the economy. [ 150 ] Fish caught by US commercial fisheries in 2007 were valued at $3.8 billion, and 73% of that value was derived from calcifiers and their direct predators. [ 151 ] Other organisms are directly harmed as a result of acidification. For example, decreased growth of marine calcifiers such as the American lobster , ocean quahog , and scallops means there is less shellfish meat available for sale and consumption. [ 152 ] Red king crab fisheries are also under serious threat because crabs are calcifiers. Juvenile red king crabs exposed to increased acidification levels experienced 100% mortality after 95 days. [ 153 ] In 2006, red king crab accounted for 23% of the total guideline harvest levels, and a serious decline in the red king crab population would threaten the crab harvesting industry. [ 154 ] Reducing carbon dioxide emissions (i.e. climate change mitigation measures) is the only solution that addresses the root cause of ocean acidification. 
For example, some mitigation measures focus on carbon dioxide removal (CDR) from the atmosphere (e.g. direct air capture (DAC), bioenergy with carbon capture and storage (BECCS)). These would also slow the rate of acidification. Approaches that remove carbon dioxide from the ocean include ocean nutrient fertilization , artificial upwelling /downwelling, seaweed farming , ecosystem recovery, ocean alkalinity enhancement, enhanced weathering and electrochemical processes. [ 155 ] : 12–36 All of these methods use the ocean to remove CO 2 from the atmosphere and store it in the ocean. These methods could assist with mitigation, but they can have side-effects on marine life. The research field for all CDR methods has grown considerably since 2019. [ 87 ] In total, "ocean-based methods have a combined potential to remove 1–100 gigatons of CO 2 per year". [ 156 ] : TS-94 Their costs are on the order of US$ 40–500 per ton of CO 2 . For example, enhanced weathering could remove 2–4 gigatons of CO 2 per year. This technology comes with a cost of US$ 50–200 per ton of CO 2 . [ 156 ] : TS-94 Some carbon removal techniques add alkalinity to the ocean and therefore immediately buffer pH changes, which might help organisms in the region where the extra alkalinity is added. The two technologies that fall into this category are ocean alkalinity enhancement and electrochemical methods. [ 87 ] Due to diffusion, however, the added alkalinity eventually becomes very dilute in distant waters, which is why the term local ocean acidification mitigation is used. Both of these technologies have the potential to operate on a large scale and to be efficient at removing carbon dioxide. [ 87 ] : Table 9.1 However, they are expensive, have many risks and side effects, and currently have a low technology readiness level . [ 155 ] : 12–36 Ocean alkalinity enhancement (OAE) is a proposed "carbon dioxide removal (CDR) method that involves deposition of alkaline minerals or their dissociation products at the ocean surface". [ 35 ] : 2241 The process would increase surface total alkalinity and thereby increase ocean absorption of CO 2 . The process involves increasing the amount of bicarbonate (HCO 3 -) through accelerated weathering ( enhanced weathering ) of rocks ( silicate , limestone and quicklime ). [ 87 ] : 181 This process mimics the silicate-carbonate cycle. The CO 2 either becomes bicarbonate, remaining in that form for more than 100 years, or may precipitate into calcium carbonate (CaCO 3 ). When silicate rocks are used and the resulting calcium carbonate is buried in the deep ocean, the carbon can be held indefinitely. Enhanced weathering is one type of ocean alkalinity enhancement. Enhanced weathering increases alkalinity by scattering fine rock particles. This can happen on land and in the ocean (even though the outcome eventually affects the ocean). In addition to sequestering CO 2 , alkalinity addition buffers the pH of the ocean, thereby reducing ocean acidification. However, little is known about how organisms respond to added alkalinity, even from natural sources. [ 87 ] For example, weathering of some silicate rocks could release a large amount of trace metals at the weathering site. The cost and energy consumed by ocean alkalinity enhancement (mining, pulverizing, transport) are high compared to other CDR techniques. [ 87 ] The cost is estimated to be US$ 20–50 per ton of CO 2 (for "direct addition of alkaline minerals to the ocean"). [ 155 ] : 12–50
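To put the quoted ranges in perspective, the short Python sketch below simply multiplies the removal potentials and per-ton costs cited above; it is an order-of-magnitude illustration assembled from those figures, not an estimate taken from the cited reports:

# Order-of-magnitude annual cost = removal rate (Gt CO2/yr) x cost (US$/t CO2).
# One gigaton = 1e9 tons. The rate/cost pairings below are illustrative only.
scenarios = {
    "ocean-based CDR, low end":   (1,   40),    # 1 Gt/yr at $40 per ton
    "ocean-based CDR, high end":  (100, 500),   # 100 Gt/yr at $500 per ton
    "enhanced weathering, low":   (2,   50),
    "enhanced weathering, high":  (4,   200),
}

for name, (gigatons, usd_per_ton) in scenarios.items():
    annual_cost = gigatons * 1e9 * usd_per_ton
    print(f"{name}: ~${annual_cost / 1e9:,.0f} billion per year")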
Carbon sequestered as bicarbonate in the ocean amounts to about 30% of carbon emissions since the Industrial Revolution . Experimental materials include limestone, brucite , olivine and alkaline solutions. Another approach is to use electricity to raise alkalinity during desalination to capture waterborne CO 2 . [ 157 ] Electrochemical methods, or electrolysis , can strip carbon dioxide directly from seawater. [ 87 ] Electrochemical processes are also a type of ocean alkalinity enhancement. Some methods focus on direct CO 2 removal (in the form of carbonate and CO 2 gas) while others increase the alkalinity of seawater by precipitating metal hydroxide residues, which absorb CO 2 in the manner described in the ocean alkalinity enhancement section above. The hydrogen produced during direct carbon capture can then be used for energy, or to manufacture reagents such as hydrochloric acid . However, implementation of electrolysis for carbon capture is expensive, and the energy consumed for the process is high compared to other CDR techniques. [ 87 ] In addition, research to assess the environmental impact of this process is ongoing. Some complications include toxic chemicals in wastewaters, and reduced DIC in effluents; both of these may negatively impact marine life. [ 87 ] As awareness about ocean acidification grows, policies geared towards increasing the monitoring of ocean acidification have been drafted. [ 158 ] In 2015, ocean scientist Jean-Pierre Gattuso remarked that "The ocean has been minimally considered at previous climate negotiations. Our study provides compelling arguments for a radical change at the UN conference (in Paris) on climate change". [ 159 ] International efforts, such as the Wider Caribbean's Cartagena Convention (entered into force in 1986), [ 160 ] may enhance the support provided by regional governments to highly vulnerable areas in response to ocean acidification. [ 161 ] Many countries, for example in the Pacific Islands and Territories, have constructed regional policies, or National Ocean Policies, National Action Plans, National Adaptation Plans of Action and Joint National Action Plans on Climate Change and Disaster Risk Reduction, to help work towards SDG 14 . Ocean acidification is now starting to be considered within those frameworks. [ 162 ] The UN Ocean Decade has a program called "Ocean Acidification Research for Sustainability" (OARS). It was proposed by the Global Ocean Acidification Observing Network (GOA-ON) and its partners, and has been formally endorsed as a program of the UN Decade of Ocean Science for Sustainable Development . [ 163 ] [ 164 ] The OARS program builds on the work of GOA-ON and has the following aims: to further develop the science of ocean acidification; to increase observations of ocean chemistry changes; to identify the impacts on marine ecosystems on local and global scales; and to provide decision makers with the information needed to mitigate and adapt to ocean acidification. The importance of ocean acidification is reflected in its inclusion as one of seven Global Climate Indicators. [ 165 ] These Indicators are a set of parameters that describe the changing climate without reducing climate change to only rising temperature . The Indicators include key information for the most relevant domains of climate change: temperature and energy, atmospheric composition, ocean and water as well as the cryosphere. 
The Global Climate Indicators have been identified by scientists and communication specialists in a process led by the Global Climate Observing System (GCOS). [ 166 ] The Indicators have been endorsed by the World Meteorological Organization (WMO). They form the basis of the annual WMO Statement of the State of the Global Climate, which is submitted to the Conference of Parties (COP) of the United Nations Framework Convention on Climate Change (UNFCCC). Additionally, the Copernicus Climate Change Service (C3S) of the European Commission uses the Indicators for their annual "European State of the Climate". In 2015, the United Nations adopted the 2030 Agenda and a set of 17 Sustainable Development Goals (SDG), including a goal dedicated to the ocean, Sustainable Development Goal 14 , [ 167 ] which calls for action to "conserve and sustainably use the oceans, seas and marine resources for sustainable development". Ocean acidification is directly addressed by the target SDG 14.3. The full title of Target 14.3 is: "Minimize and address the impacts of ocean acidification, including through enhanced scientific cooperation at all levels". [ 168 ] This target has one indicator: Indicator 14.3.1, which calls for the "Average marine acidity ( pH ) measured at agreed suite of representative sampling stations". [ 169 ] The Intergovernmental Oceanographic Commission (IOC) of UNESCO was identified as the custodian agency for the SDG 14.3.1 Indicator. In this role, IOC-UNESCO is tasked with developing the SDG 14.3.1 Indicator Methodology, the annual collection of data towards the SDG 14.3.1 Indicator, and the reporting of progress to the United Nations. [ 170 ] [ 171 ] In the United States, the Federal Ocean Acidification Research And Monitoring Act of 2009 supports government coordination, such as the National Oceanic and Atmospheric Administration 's (NOAA) "Ocean Acidification Program". [ 172 ] [ 173 ] In 2015, USEPA denied a citizens' petition that asked EPA to regulate CO 2 under the Toxic Substances Control Act of 1976 in order to mitigate ocean acidification. [ 174 ] [ 175 ] In the denial, the EPA said that risks from ocean acidification were being "more efficiently and effectively addressed" under domestic actions, e.g., under the Presidential Climate Action Plan , and that multiple avenues are being pursued to work with and in other nations to reduce emissions and deforestation and promote clean energy and energy efficiency. [ 176 ] Research into the phenomenon of ocean acidification, as well as awareness-raising about the problem, has been going on for several decades. The fundamental research began with the creation of the pH scale by Danish chemist Søren Peder Lauritz Sørensen in 1909. [ 177 ] By around the 1950s the massive role of the ocean in absorbing fossil fuel CO 2 was known to specialists, but not appreciated by the greater scientific community. [ 178 ] Throughout much of the 20th century, the dominant focus was the beneficial process of oceanic CO 2 uptake, which has enormously ameliorated climate change. The concept of "too much of a good thing" was late in developing and was triggered only by some key events, and the oceanic sink for heat and CO 2 is still critical as the primary buffer against climate change. [ 178 ] In the early 1970s, questions over the long-term impact of the accumulation of fossil fuel CO 2 in the sea were already arising around the world and causing strong debate. 
Researchers commented on the accumulation of fossil CO 2 in the atmosphere and sea and drew attention to the possible impacts on marine life. By the mid-1990s, the likely impact of CO 2 levels rising so high, with the inevitable changes in pH and carbonate ion concentration, became a concern of scientists studying the fate of coral reefs. [ 178 ] By the end of the 20th century, the trade-offs between the beneficial role of the ocean (absorbing some 90% of all heat created and some 50% of all fossil fuel CO 2 emitted) and the impacts on marine life were becoming clearer. By 2003, when planning began for the "First Symposium on the Ocean in a High-CO 2 World", held in Paris in 2004, many new research results on ocean acidification had been published. [ 178 ] In 2009, members of the InterAcademy Panel called on world leaders to "Recognize that reducing the build up of CO 2 in the atmosphere is the only practicable solution to mitigating ocean acidification". [ 179 ] The statement also stressed the need to "Reinvigorate action to reduce stressors, such as overfishing and pollution , on marine ecosystems to increase resilience to ocean acidification". [ 180 ] For example, research in 2010 found that in the 15-year period 1995–2010 alone, acidity had increased 6 percent in the upper 100 meters of the Pacific Ocean from Hawaii to Alaska. [ 51 ] According to a statement in July 2012 by Jane Lubchenco , head of the U.S. National Oceanic and Atmospheric Administration, "surface waters are changing much more rapidly than initial calculations have suggested. It's yet another reason to be very seriously concerned about the amount of carbon dioxide that is in the atmosphere now and the additional amount we continue to put out." [ 181 ] A 2013 study found acidity was increasing at a rate 10 times faster than in any of the evolutionary crises in Earth's history. [ 182 ] The "Third Symposium on the Ocean in a High-CO 2 World" took place in Monterey, California, in 2012. The summary for policy makers from the conference stated that "Ocean acidification research is growing rapidly". [ 96 ] In a synthesis report published in Science in 2015, 22 leading marine scientists stated that CO 2 from burning fossil fuels is changing the oceans' chemistry more rapidly than at any time since the Great Dying (Earth's most severe known extinction event). [ 159 ] Their report emphasized that the 2 °C maximum temperature increase agreed upon by governments reflects too small a cut in emissions to prevent "dramatic impacts" on the world's oceans. [ 159 ] A study published in 2020 argues that ocean acidification negatively affects not only marine life but also human health, through impacts such as reduced food quality and respiratory issues. [ 183 ]
https://en.wikipedia.org/wiki/Ocean_acidification
The Arctic Ocean covers an area of 14,056,000 square kilometers, and supports a diverse and socioeconomically important food web of organisms, despite its average water temperature of 32 degrees Fahrenheit (0 °C). [ 1 ] Over the last three decades, the Arctic Ocean has experienced drastic changes due to climate change. [ 1 ] One of the changes is in the acidity levels of the ocean, which have been consistently increasing at twice the rate of the Pacific and Atlantic oceans. [ 2 ] Arctic Ocean acidification is a result of feedback from climate system mechanisms, and is having negative impacts on Arctic Ocean ecosystems and the organisms that live within them. Ocean acidification is caused by the equilibration of the atmosphere with the ocean, a process that occurs worldwide. Carbon dioxide in the atmosphere equilibrates with and dissolves into the ocean, where it reacts with water to form carbonic acid . The carbonic acid then dissociates into bicarbonate ions and hydrogen ions. [ 3 ] This reaction causes the pH of the water to lower, effectively acidifying it. [ 3 ] Ocean acidification is occurring in every ocean across the world. Since the beginning of the Industrial Revolution , the world's oceans have absorbed approximately 525 billion tons of carbon dioxide. [ 1 ] During this time, world ocean pH has collectively decreased from 8.2 to 8.1, with climatic modeling predicting a further decrease of pH by 0.3 units by 2100. [ 1 ] However, the Arctic Ocean has been affected more strongly because of its cold water temperatures: the solubility of gases increases as water temperature decreases. The cold Arctic water is able to absorb higher amounts of carbon dioxide compared to the warmer Pacific and Atlantic Oceans. [ 4 ] The chemical changes caused by the acidification of the Arctic Ocean are having negative ecological and socioeconomic repercussions. With the changes in the chemistry of their environment, arctic organisms are challenged with new stressors. These stressors can have damaging effects on these organisms, with some being affected more than others. Calcifying organisms specifically appear to be the most impacted by this changing water composition, as they rely on carbonate availability to survive. Dissolved carbonate concentrations decrease with increasing carbon dioxide and lowered pH in the water. [ 5 ] Ecological food webs are also altered by the acidification. Acidification lowers the ability of many fish to grow, which impacts not only food webs but also the humans that rely on these fisheries. [ 1 ] Economic effects result from shifting food webs that decrease popular fish populations. These fish populations provide jobs to people who work in the fisheries industry . [ 6 ] Ocean acidification has no known benefits, and as a result it has been placed high on the priority lists of the United States and of organizations such as the Scientific Committee on Oceanic Research, UNESCO's Intergovernmental Oceanographic Commission , the Ocean Carbon and Biogeochemistry Program, the Integrated Marine Biogeochemistry and Ecosystem Research Project, and the Consortium for Ocean Leadership. [ 1 ] Arctic sea ice has experienced an extreme reduction over the past few decades, with the minimum area of sea ice being 4.32 million km 2 in 2019, [ 7 ] a sharp 38% decrease from 1980, when the minimum area was 7.01 million km 2 . 
[ 8 ] Sea ice plays an important role in the health of the Arctic Ocean, and its decline has had detrimental effects on Arctic Ocean chemistry. All oceans equilibrate with the atmosphere by pulling carbon dioxide out of the atmosphere and into the ocean, which lowers the pH of the water. [ 9 ] Sea ice limits the air-sea gas exchange of carbon dioxide [ 10 ] by protecting the water from full exposure to the atmosphere. Carbon dioxide levels in Arctic surface waters are kept low by intense cooling, fresh water runoff, and photosynthesis by marine organisms. [ 10 ] Reductions in sea ice have allowed more carbon dioxide to equilibrate with the Arctic water, resulting in increased acidification. The decrease in sea ice has also allowed more Pacific Ocean water, called Pacific winter water, to flow into the Arctic Ocean during the winter. Pacific Ocean water is high in carbon dioxide, and with less sea ice, more of this water has been able to enter the Arctic Ocean, carrying carbon dioxide with it. This Pacific winter water has further acidified the Arctic Ocean, as well as increased the depth of acidified water. [ 2 ] Climate change is causing destabilization of multiple climate systems within the Arctic Ocean. One system that climate change is impacting is methane hydrates. Methane hydrates are located along the continental margins, and are stabilized by high pressure and uniformly low temperatures. Climate change has begun to destabilize these methane hydrates within the Arctic Ocean by decreasing pressure and increasing temperatures, allowing the hydrates to break down and release methane into Arctic waters. [ 11 ] When methane is released into the water, it can either be consumed via anaerobic or aerobic metabolism by microorganisms in the ocean sediment , or be released from the sea into the atmosphere. [ 11 ] Most relevant to ocean acidification is aerobic oxidation by microorganisms in the water column. [ 11 ] Carbon dioxide is produced by the reaction of methane with oxygen in the water. This carbon dioxide then reacts with water to produce carbonic acid , which dissociates to release hydrogen ions and bicarbonate, further contributing to ocean acidification. Organisms in Arctic waters already experience high environmental stress, such as extremely cold water. It is believed that this high-stress environment will cause ocean acidification to have a stronger effect on these organisms, and that its effects may appear in the Arctic before they appear in other parts of the ocean. There is significant variation in the sensitivity of marine organisms to increased ocean acidification. Calcifying organisms generally exhibit larger negative responses to ocean acidification than non-calcifying organisms across numerous response variables, with the exception of crustaceans , which calcify but do not seem to be negatively affected. [ 12 ] This is due mainly to the process of marine biogenic calcification that calcifying organisms rely on. Carbonate ions (CO 3 2- ) are essential to marine calcifying organisms, like plankton and shellfish, as they are required to produce their calcium carbonate ( CaCO 3 ) shells and skeletons. [ 13 ] As the ocean acidifies, the increased uptake of CO 2 by seawater increases the concentration of hydrogen ions , which lowers the pH of the water. [ 14 ] This change in the chemical equilibrium of the inorganic carbon system reduces the concentration of these carbonate ions.
This reduces the ability of these organisms to create their shells and skeletons. The two polymorphs of calcium carbonate produced by marine organisms are aragonite and calcite . These are the materials that make up most of the shells and skeletons of calcifying organisms. Aragonite, for example, makes up nearly all mollusc shells, as well as the exoskeleton of corals. [ 13 ] The formation of these materials depends on the saturation state of CaCO 3 in ocean water. Waters that are saturated in CaCO 3 favor the precipitation and formation of CaCO 3 shells and skeletons, but waters that are undersaturated are corrosive to CaCO 3 shells. In the absence of protective mechanisms, dissolution of calcium carbonate will occur. As colder Arctic water absorbs more CO 2 , the concentration of CO 3 2- is reduced, so the saturation state of calcium carbonate is lower in high-latitude oceans than it is in tropical or temperate oceans. [ 10 ] The undersaturation of CaCO 3 causes the shells of calcifying organisms to dissolve, which can have devastating consequences for the ecosystem. [ 15 ] As the shells dissolve, the organisms struggle to maintain proper health, which can lead to mass mortality. The loss of many of these species can have severe consequences for the marine food web in the Arctic Ocean, as many of these marine calcifying organisms are keystone species. Laboratory experiments on various marine biota in elevated CO 2 environments show that changes in aragonite saturation cause substantial changes in overall calcification rates for many species of marine organisms, including coccolithophores , foraminifera , pteropods , mussels , and clams . [ 10 ] Although the undersaturation of Arctic water has been shown to affect the ability of organisms to precipitate their shells, recent studies have also shown that the calcification rate of calcifiers such as corals , coccolithophores, foraminiferans and bivalves decreases with increasing p CO 2 even in seawater that is supersaturated with respect to CaCO 3 . Additionally, increased p CO 2 has been found to have complex effects on the physiology, growth and reproductive success of various marine calcifiers. [ 16 ] CO 2 tolerance differs between marine organisms, and also between life cycle stages of the same organism (e.g. larva and adult). The first stage in the life cycle of marine calcifiers at serious risk from high CO 2 content is the planktonic larval stage. The larval development of several marine species, primarily sea urchins and bivalves , is highly affected by elevated seawater p CO 2 . [ 16 ] In laboratory tests, numerous sea urchin embryos were reared under different CO 2 concentrations until they developed to the larval stage. It was found that once they reached this stage, larval and arm sizes were significantly smaller, and abnormal skeletal morphology was noted, with increasing p CO 2 . [ 16 ] Similar findings have been reported in CO 2 -treated mussel larvae, which showed a decrease in larval size of about 20% and morphological abnormalities such as convex hinges, weaker and thinner shells and protrusion of the mantle. [ 17 ] Larval body size also affects the encounter and clearance rates of food particles, and if larval shells are smaller or deformed, the larvae are more prone to starvation. CaCO 3 structures also serve vital functions for calcified larvae, such as defense against predation, as well as roles in feeding, buoyancy control and pH regulation.
[ 16 ] Another group of organisms that may be seriously impacted by ocean acidification is the pteropods, shelled pelagic molluscs that play an important role in the food webs of various ecosystems. Because they build an aragonitic shell, they could be very sensitive to ocean acidification driven by the increase of anthropogenic CO 2 emissions. Laboratory tests showed that calcification exhibits a 28% decrease at the pH value expected for the Arctic Ocean in the year 2100, compared to the present pH value. This 28% decline of calcification in the lower pH condition is within the range reported for other calcifying organisms such as corals. [ 5 ] In contrast with sea urchin and bivalve larvae, corals and marine shrimps are more severely impacted by ocean acidification after settlement, once they have developed into the polyp stage. In laboratory tests, the morphology of the endoskeleton of CO 2 -treated coral polyps was disturbed and malformed compared to the radial pattern of control polyps. [ 16 ] This variability in the impact of ocean acidification on different life cycle stages of different organisms can be partially explained by the fact that most echinoderms and mollusks start shell and skeleton synthesis at their larval stage, while corals start at the settlement stage. [ 16 ] Hence, these stages are highly susceptible to the potential effects of ocean acidification. Most calcifiers, such as corals, echinoderms, bivalves and crustaceans, play important roles in coastal ecosystems as keystone species, bioturbators and ecosystem engineers. [ 16 ] The food web in the Arctic Ocean is somewhat truncated, meaning it is short and simple. Any impact on key species in the food web can cascade through the rest of the food chain, because higher trophic levels no longer have a reliable food source. If these larger organisms no longer have any source of nutrients, they too will eventually die off, and the entire Arctic Ocean ecosystem will be affected. This would have a huge impact on the Arctic people who catch fish for a living, and economic repercussions would follow such a major shortage of food and income for these families. Ocean acidification not only has impacts on aquatic life, but also on human communities and the overall livelihood of people living near these waters. For example, as a result of crustaceans being unable to produce their shells and skeletons due to reduced amounts of carbonate ions, populations of species such as crabs have significantly decreased in some areas of the Northern Hemisphere. This has forced numerous fisheries in these areas to close down after multi-million dollar losses. In addition, increased temperatures have caused a swift increase in toxic algal blooms, which are known to produce a neurotoxin called domoic acid that can accumulate inside the bodies of certain shellfish. [ 18 ] If ingested by humans this toxin can cause severe health issues, which has forced many additional fisheries to close down. [ 19 ] Since the carbon cycle is tightly connected to the issue of ocean acidification, the most effective method for minimizing the effects of ocean acidification is to slow climate change. Anthropogenic inputs of CO 2 can be reduced through methods such as limiting the use of fossil fuels and employing renewable energies. This will ultimately lower the amount of CO 2 in the atmosphere and reduce the amount dissolved into the oceans.
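Before turning to more intrusive mitigation options, it is worth putting the pH figures quoted earlier in this article into perspective. The sketch below is a minimal, illustrative Python calculation (not taken from the cited sources) that converts the reported decline from pH 8.2 to 8.1, and the projected further 0.3-unit drop by 2100, into relative increases in hydrogen ion concentration, since pH is a logarithmic scale.

```python
# Minimal sketch: convert the pH changes quoted in the text into relative
# changes in hydrogen ion concentration, using [H+] = 10**(-pH).
# The values (8.2 -> 8.1 observed; a further 0.3-unit drop projected by 2100)
# come from the article; the arithmetic itself is standard chemistry.

def hydrogen_ion_concentration(ph: float) -> float:
    """Return the hydrogen ion concentration (mol/L) for a given pH."""
    return 10.0 ** (-ph)

pre_industrial = hydrogen_ion_concentration(8.2)
present = hydrogen_ion_concentration(8.1)
projected_2100 = hydrogen_ion_concentration(8.1 - 0.3)   # a further 0.3-unit drop

print(f"[H+] increase since pre-industrial: {present / pre_industrial - 1:.0%}")
print(f"[H+] increase by 2100 vs pre-industrial: {projected_2100 / pre_industrial - 1:.0%}")
# Roughly a 26% increase so far, and roughly 150% by 2100, because pH is logarithmic.
```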
More intrusive methods to mitigate acidification involve a technique called enhanced weathering , in which powdered minerals such as silicates are applied to the land or ocean surface. [ 20 ] The powdered minerals dissolve at an accelerated rate, releasing cations, converting CO 2 to bicarbonate and increasing the pH of the oceans. [ 20 ] Other mitigation methods, like ocean iron fertilization , still need more experimentation and evaluation in order to be deemed effective. [ 21 ] Ocean iron fertilization in particular has been shown to increase acidification in the deep ocean while only slightly reducing acidification at the surface. [ 21 ]
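The aragonite saturation state discussed earlier in this article can be summarized in a single ratio, Omega = [Ca2+][CO3^2-]/K'sp. The sketch below is a rough, illustrative Python calculation; the calcium concentration, solubility product and example carbonate concentrations are approximate, order-of-magnitude assumptions rather than measured Arctic values.

```python
# Rough sketch of the aragonite saturation state (Omega): Omega > 1 favors
# shell formation, Omega < 1 means the water is corrosive to aragonite shells.
# All constants and example concentrations are approximate assumptions.

CA_SEAWATER = 0.0103        # mol/kg, approximate calcium concentration of seawater
KSP_ARAGONITE = 6.5e-7      # mol^2/kg^2, approximate stoichiometric solubility product

def aragonite_saturation(carbonate_umol_per_kg: float) -> float:
    """Return the aragonite saturation state for a carbonate ion concentration."""
    co3 = carbonate_umol_per_kg * 1e-6   # convert umol/kg to mol/kg
    return CA_SEAWATER * co3 / KSP_ARAGONITE

# Warmer surface water typically holds more carbonate ion than cold Arctic water,
# so its saturation state is higher (illustrative concentrations only):
print(f"Warm surface water (~200 umol/kg CO3): Omega ~ {aragonite_saturation(200):.1f}")
print(f"Cold Arctic water  (~80 umol/kg CO3):  Omega ~ {aragonite_saturation(80):.1f}")
```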
https://en.wikipedia.org/wiki/Ocean_acidification_in_the_Arctic_Ocean
Ocean acidification threatens the Great Barrier Reef by reducing the viability and strength of coral reefs . The Great Barrier Reef, considered one of the seven natural wonders of the world and a biodiversity hotspot , is located in Australia. Similar to other coral reefs, it is experiencing degradation due to ocean acidification. Ocean acidification results from a rise in atmospheric carbon dioxide , which is taken up by the ocean. [ 1 ] [ 2 ] This process can increase sea surface temperature , decrease aragonite saturation , and lower the pH of the ocean. The more fossil fuels humanity consumes, the more of the released CO₂ the ocean absorbs, furthering ocean acidification. The declining health of coral reefs, particularly the Great Barrier Reef, can result in reduced biodiversity . Organisms can become stressed by ocean acidification, and the disappearance of healthy coral reefs such as the Great Barrier Reef means the loss of habitat for several taxa . Ocean acidification also makes it harder for organisms to reproduce, affecting the ecosystem of the Great Barrier Reef. Fish species can be affected immensely by ocean acidification, which disrupts the overall ecosystem. One proposed way to counteract the effects of ocean acidification, called alkalization injection, involves injecting an alkaline solution into the ocean to increase the pH of the water. Coral reefs are very important to society and the economy. Atmospheric carbon dioxide has risen from 280 to 409 ppm [ 3 ] since the industrial revolution . [ 4 ] Around 30% of the carbon dioxide released by humans has been absorbed into the ocean during that era. [ 5 ] This increase in carbon dioxide has led to a 0.1-unit decrease in pH, and pH could decrease by 0.5 units by 2100. [ 6 ] [ 7 ] When carbon dioxide meets seawater, it forms carbonic acid ; the molecules dissociate into hydrogen ions, bicarbonate , and carbonate, and the additional hydrogen ions lower the pH of the ocean. [ 8 ] Sea surface temperature, ocean acidity, and dissolved inorganic carbon are also positively correlated with atmospheric carbon dioxide. [ 9 ] Ocean acidification can cause hypercapnia and increase stress in marine organisms, thereby leading to decreased biodiversity. [ 4 ] Coral reefs themselves can also be negatively affected by ocean acidification, as calcification rates decrease while acidity increases. [ 10 ] Aragonite is impacted by the process of ocean acidification because it is a form of calcium carbonate. [ 8 ] It is essential to coral viability and health because it is found in coral skeletons and is more readily soluble than calcite. [ 8 ] Increasing carbon dioxide levels can reduce coral growth rates by 9 to 56% due to the lack of available carbonate ions needed for the calcification process. [ 10 ] [ 11 ] Other calcifying organisms, such as bivalves and gastropods, experience negative effects due to ocean acidification as well. [ 10 ] The excess hydrogen ions in the acidified water dissolve their shells, limiting their shelter and reproduction rates. [ 12 ] As a biodiversity hotspot, the many taxa of the Great Barrier Reef are threatened by ocean acidification. [ 13 ] Rare and endemic species are in greater danger due to ocean acidification, because they rely upon the Great Barrier Reef more extensively. Additionally, the risk of coral reefs collapsing due to acidification poses a threat to biodiversity.
[ 14 ] The stress of ocean acidification could also negatively affect other biological processes, for example by reducing photosynthesis or reproduction and leaving organisms vulnerable to disease. [ 15 ] The Great Barrier Reef is susceptible to poor water quality as well as the impacts of ocean acidification. Thirty-five major rivers discharge nutrient and sediment loads into its waters, at roughly five to eight times the rates that prevailed prior to European settlement. These discharges lead to elevated seawater nutrients and turbidity, which further exacerbate the impacts of ocean acidification. [ 16 ] Coral is a calcifying organism, putting it at high risk for decay and slow growth rates as ocean acidification increases. [ 10 ] Aragonite assists corals as they build their skeletons because it is a form of calcium carbonate (CaCO 3 ) that is more soluble . When the pH of the water decreases, aragonite availability decreases as well, leading to a loss of calcium carbonate uptake in corals. [ 17 ] Levels of aragonite have decreased by 16% since industrialization and could be lower in some portions of the Great Barrier Reef due to the current, which allows northern corals to take up more aragonite than southern corals. [ 17 ] Aragonite is predicted to decline by a further 0.1 by 2100, which could greatly hinder coral growth. [ 17 ] Since 1990, calcification rates of Porites , a common large reef-building coral in the Great Barrier Reef, have decreased by 14.2%. [ 10 ] Aragonite levels across the Great Barrier Reef itself are not equal; due to currents and circulation, some portions of the Great Barrier Reef can have half as much aragonite as others. [ 17 ] Levels of aragonite are also affected by calcification and production, which can vary from reef to reef. [ 17 ] If atmospheric carbon dioxide reaches 560 ppm, most ocean surface waters will have adversely low aragonite saturation, and the pH will have decreased by about 0.24 units, from almost 8.2 today to just over 7.9. At this point (sometime in the third quarter of this century, at current rates of carbon dioxide increase), only a few parts of the Pacific will have levels of aragonite saturation adequate for coral growth. Additionally, if atmospheric carbon dioxide reaches 800 ppm, the decrease in ocean surface water pH will be 0.4 units, and the total dissolved carbonate ion concentration will have decreased by at least 60%. [ 15 ] Recent estimates state that with business-as-usual emission levels, atmospheric carbon dioxide could reach 800 ppm by the year 2100. [ 18 ] At this point, it is almost certain that all the reefs in the world will be in erosional states. Experimentally increasing the pH and replicating pre-industrial ocean chemistry conditions in the Great Barrier Reef, however, led to an increase in coral growth rates of 7%. [ 19 ] The rising carbon dioxide levels that drive ocean acidification also lead to increased sea surface temperature. An increase of about 1 or 2 °C can cause the collapse of the relationship between coral and zooxanthellae , possibly leading to bleaching . [ 15 ] The average sea surface temperature in the Great Barrier Reef is predicted to increase between 1 and 3 °C by 2100. [ 6 ] Bleaching occurs when the zooxanthellae and coralline algae abandon the coral skeleton due to stresses in the water. This causes the coral to lose its colour, as the organisms that lived on the coral skeleton vacate it, leaving a white skeleton. The bleached coral can no longer obtain energy through photosynthesis, and so it slowly dies.
The acidity of the water will slowly dissolve the leftover coral skeletons, damaging the structural integrity of the coral reef. Many organisms also rely on the algae and zooxanthellae as their main source of food, so organisms in a bleached coral reef are forced to leave in search of new food sources. Since zooxanthellae and algae grow very slowly, restoring the coral reef to its original form will take a very long time. [ 20 ] This breakdown of the relationship between the coral and the zooxanthellae occurs when Photosystem II is damaged, either due to a reaction with the D1 protein or a lack of carbon dioxide fixation; both result in a lack of photosynthesis and can lead to bleaching. [ 8 ] Ocean acidification threatens coral reproduction throughout almost all aspects of the process. Gametogenesis may be indirectly affected by coral bleaching . Additionally, the stress that acidification puts on coral can potentially harm the viability of the sperm released. Larvae can also be affected by this process; metabolism and settlement cues could be altered, changing the size of the population or the viability of reproduction. [ 8 ] [ 2 ] Other species of calcifying larvae have shown reduced growth rates under ocean acidification scenarios. [ 9 ] Biofilm , a bioindicator for oceanic conditions, underwent a reduced growth rate and altered composition under acidification, possibly affecting larval settlement on the biofilm itself. [ 21 ] Over the years there have been several mass bleaching events that have affected the Great Barrier Reef. In particular, in 2016 and 2017 the reef sustained two back-to-back bleaching periods. This prolonged period accounted for an estimated loss of half of the coral life in the Great Barrier Reef. The parts of the reef that did survive were damaged, leading to an overall period of low coral reproduction. [ 22 ] This was followed by another bleaching event in 2020, the third bleaching event in five years. Studies found, however, that the 2020 bleaching was not as severe, as it affected only a small number of reefs, most of them at low to moderate levels of bleaching. [ 23 ] In early 2022, a study showed that 91% of the coral in the Great Barrier Reef had experienced some degree of bleaching. [ 24 ] The reefs with higher levels of bleaching were often those exposed to higher overall air temperatures. These temperatures lasted throughout the Australian summer, contributing to prolonged coral bleaching periods. Prolonged bleaching periods raise concern, because corals that cannot reproduce may die out, leading to further loss of reef. However, reports from June 2022 stated that the Great Barrier Reef was recovering: the proportion of reefs affected by bleaching had fallen to 16% along different areas of the Australian coast. [ 24 ] As ocean temperatures fall back from their bleaching-event peaks, bleaching levels can be expected to go down and coral cover to increase. Although coral bleaching has declined, predators of the reef such as the crown-of-thorns starfish are still impacting coral growth and development. [ 24 ] Biodiversity refers to the variety of life forms, including species diversity, genetic diversity, and ecosystem diversity. The Great Barrier Reef is a biodiversity hotspot, home to over 9,000 known species.
[ 25 ] However, since the 1950s half of the living corals on the Great Barrier Reef have died, and coral reef-associated biodiversity has declined by sixty-three percent. [ 26 ] Only an estimated twenty-five percent of these species have been formally described, leaving a substantial proportion yet to be scientifically classified . [ 26 ] Species that have yet to be identified are undoubtedly being lost in the wake of a shifting climate. Reduced levels of aragonite, as a result of ocean acidification, continue to be one of the Great Barrier Reef's biggest threats. [ 11 ] Healthy reefs support thousands of different corals, fish and marine mammals, but bleached reefs lose their ability to support and sustain life. [ 27 ] Coral structural formations create complex habitats critical for providing shelter, breeding grounds, and food sources for numerous marine organisms, including fish, invertebrates, and microorganisms. [ 28 ] In turn, corals depend on reef fish and other organisms to clean and regulate algae levels, provide nutrients for coral growth, and keep pests in check. [ 28 ] Coral reefs and the species they host have dynamic symbiotic relationships. Ocean acidification can also indirectly affect organisms, causing reduced growth rates, decreased reproductive capacity, increased susceptibility to disease, and elevated mortality rates. [ 29 ] Bleaching events trigger homogenization of coral composition and losses of structural complexity, which can be detrimental to reef fish and other organisms that depend on branching coral for breeding and shelter. [ 29 ] This decrease in ecosystem diversity has direct effects on species diversity. As coral reefs decay, their residents will have to adapt or find new habitats on which to rely. [ 15 ] Ocean acidification threatens the fundamental chemical balance of the oceans, creating conditions that eat away at essential minerals like calcium carbonate. A lack of aragonite and decreasing pH levels in ocean water make it harder for calcifying organisms such as oysters, clams, lobsters, shrimp and corals to build their shells and exoskeletons. [ 30 ] Organisms have been found to be more sensitive to the effects of ocean acidification in early, larval or planktonic stages. Larval health and settlement of both calcifying and non-calcifying organisms can be harmed by ocean acidification. A study published in the journal Global Change Biology developed a model for predicting the vulnerability of sharks and stingrays to climate change in the Great Barrier Reef. It found that 30 of the 133 species examined were moderately or highly vulnerable to climate change, with the most vulnerable species being the freshwater whipray , porcupine ray , speartooth shark , and sawfish . Increasing temperature is also affecting the behavior and fitness of many reef species such as the common coral trout, a very important fish in sustaining the health of coral reefs. [ 31 ] Not only can ocean acidification affect habitat and development, but it can also affect how organisms perceive predators and conspecifics . Studies on the effects of ocean acidification have not been performed on long enough time scales to determine whether organisms can adapt to these conditions; however, ocean acidification is predicted to occur at a rate that evolution cannot match. [ 12 ] Some fish can compensate for disturbances under high CO2 conditions, but they show unexpected sensitivity to current and projected future CO2 levels.
This sensitivity affects many physiological and behavioral processes, including the growth of otoliths , the calcium carbonate structures in fish ears that aid in balance. It also affects brain function, the amount of energy a fish uses, and the amount of nutrients a fish can absorb. The consequences of disruption to neurotransmitters such as GABA are still being studied, but they could affect fish in the near future. Sensitivity to ocean acidification varies between fish species, with sensory perception being the function most consistently affected across species. [ 32 ] A naturally occurring predator of coral in the Great Barrier Reef is the crown-of-thorns sea star ( Acanthaster planci ). Population outbreaks of the crown-of-thorns sea star are one of the major causes of coral decline across the Great Barrier Reef, as an adult crown-of-thorns starfish is capable of consuming up to 10 m 2 of reef-building coral a year. [ 9 ] However, not every species of coral is equally impacted: the sea star has been observed to favor branching corals, Acropora , followed by sub-branching species. This results in a sequential and ordered eradication of coral reef species. Crown-of-thorns sea star outbreaks on the Great Barrier Reef have become more frequent in recent years, which scientists suspect could be linked to human activities. [ 33 ] Any increase in nutrients, possibly from river run-off, can boost starfish populations, leading to detrimental outbreaks. [ 33 ] As pressures from climate change increase, the time between reef disturbances is becoming shorter, leaving less time for reef recovery. A simulation from 2015 explored a potential intervention involving artificial ocean alkalization, using a solution that increases the alkalinity of the water by about 4 moles. Ships would release the alkaline solution along the coast; this would raise the pH of the water, temporarily offsetting ocean acidification. In the simulation, the results showed a significant increase in aragonite saturation state across the Great Barrier Reef, and the alkalization would offset around 4 years of ocean acidification. The results also showed an increase in aragonite saturation state in about 25% of the reefs, which suggests that alkalization can help reduce ocean acidification. [ 34 ] Being major hotspots of biodiversity, coral reefs are very important to marine ecosystems and to the livelihoods of people. Countries around the world depend on reefs as a source of food and income, especially communities that inhabit small islands. [ 35 ] With over a 60% decrease in available fishing around coral reefs, many countries will be forced to adapt. [ 25 ] Coral reefs are also important for a country's economy, as reefs support various forms of tourism that can generate substantial revenue. [ 36 ] These activities also contribute to individual well-being, as the owners of these businesses profit from increased visitation and use. Coral reefs also provide a form of coastal infrastructure, acting as a barrier that protects coastal communities from major ocean catastrophes such as tsunamis and coastal storms. [ 35 ]
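The figures quoted earlier in this article (a pH drop of about 0.24 units at 560 ppm and 0.4 units at 800 ppm, with carbonate ion concentration falling by at least 60%) can be sanity-checked with a simple back-of-envelope calculation. The sketch below is an illustrative simplification that assumes bicarbonate stays roughly constant, so that carbonate ion concentration scales inversely with hydrogen ion concentration; it is not a full carbonate-system model.

```python
# Back-of-envelope check (illustrative only): at roughly constant bicarbonate,
# the second dissociation equilibrium [CO3^2-]/[HCO3^-] = K2/[H+] implies that
# carbonate ion concentration scales as 1/[H+], so a pH drop of `delta`
# corresponds to a carbonate loss of about 1 - 10**(-delta).

def carbonate_fraction_remaining(delta_ph: float) -> float:
    """Fraction of carbonate ion remaining after a pH drop of delta_ph,
    assuming bicarbonate stays roughly constant."""
    h_increase = 10.0 ** delta_ph      # factor by which [H+] rises
    return 1.0 / h_increase

for delta in (0.24, 0.4):
    remaining = carbonate_fraction_remaining(delta)
    print(f"pH drop of {delta}: about {1 - remaining:.0%} less carbonate ion")
# A 0.4-unit drop gives roughly 60% less carbonate ion, consistent with the
# "at least 60%" figure quoted for the 800 ppm scenario.
```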
https://en.wikipedia.org/wiki/Ocean_acidification_in_the_Great_Barrier_Reef
Marine chemistry , also known as ocean chemistry or chemical oceanography , is the study of the chemical composition and processes of the world's oceans, including the interactions between seawater, the atmosphere, the seafloor, and marine organisms. [ 2 ] This field encompasses a wide range of topics, such as the cycling of elements like carbon, nitrogen, and phosphorus, the behavior of trace metals, and the study of gases and nutrients in marine environments. Marine chemistry plays a crucial role in understanding global biogeochemical cycles , ocean circulation , and the effects of human activities, such as pollution and climate change, on oceanic systems. [ 2 ] It is influenced by plate tectonics and seafloor spreading , turbidity , currents , sediments , pH levels, atmospheric constituents, metamorphic activity, and ecology. The impact of human activity on the chemistry of the Earth's oceans has increased over time, with pollution from industry and various land-use practices significantly affecting the oceans. Moreover, increasing levels of carbon dioxide in the Earth's atmosphere have led to ocean acidification , which has negative effects on marine ecosystems. The international community has agreed that restoring the chemistry of the oceans is a priority, and efforts toward this goal are tracked as part of Sustainable Development Goal 14 . Due to the interrelatedness of the ocean, chemical oceanographers frequently work on problems relevant to physical oceanography , geology and geochemistry , biology and biochemistry , and atmospheric science . Many of them investigate biogeochemical cycles , and the marine carbon cycle in particular attracts significant interest due to its role in carbon sequestration and ocean acidification . [ 3 ] Other major topics of interest include analytical chemistry of the oceans, marine pollution , and anthropogenic climate change . Dissolved organic matter (DOM) is a critical component of the ocean's carbon pool and includes many molecules such as amino acids, sugars, and lipids. It represents about 90% of the total organic carbon in marine environments. [ 4 ] Colored dissolved organic matter (CDOM) is estimated to range from 20–70% of the carbon content of the oceans, being higher near river outlets and lower in the open ocean. [ 5 ] DOM can be recycled and put back into the food web through a process called the microbial loop , which is essential for nutrient cycling and supporting primary productivity. [ 6 ] It also plays a vital role in the global regulation of oceanic carbon storage, as some forms resist microbial degradation and may persist in the ocean for centuries. [ 7 ] Marine life is broadly similar in biochemistry to terrestrial organisms, and is the most prolific source of halogenated organic compounds . [ 8 ] Particulate organic matter (POM) consists of large organic particles, such as organisms, fecal pellets, and detritus, which settle through the water column. It is a major component of the biological pump, a process by which carbon is transferred from the surface ocean to the deep sea. As POM sinks, it is decomposed by bacterial activity, releasing nutrients and carbon dioxide. The refractory POM fraction can settle on the ocean floor and make relevant contributions to carbon sequestration over very long periods of time. [ 9 ] The ocean is home to a variety of marine organisms known as extremophiles – organisms that thrive in extreme conditions of temperature, pressure, and light availability.
Extremophiles inhabit many unique habitats in the ocean, such as hydrothermal vents , black smokers, cold seeps , hypersaline regions, and sea ice brine pockets . Some scientists have speculated that life may have originated at hydrothermal vents in the ocean. In hydrothermal vents and similar environments, many extremophiles acquire energy through chemoautotrophy , using chemical compounds as energy sources, rather than light as in photoautotrophy . Hydrothermal vents enrich the nearby environment in chemicals such as elemental sulfur , H 2 , H 2 S , Fe 2+ , and methane . Chemoautotrophic organisms, primarily prokaryotes, derive energy from these chemicals through redox reactions . These organisms then serve as food sources for higher trophic levels , forming the basis of unique ecosystems. Several different metabolisms are present in hydrothermal vent ecosystems. Many marine microorganisms, including Thiomicrospira , Halothiobacillus , and Beggiatoa , are capable of oxidizing sulfur compounds, including elemental sulfur and the often toxic compound H 2 S. H 2 S is abundant in hydrothermal vents, formed through interactions between seawater and rock at the high temperatures found within vents. This compound is a major energy source, forming the basis of the sulfur cycle in hydrothermal vent ecosystems. In the colder waters surrounding vents, sulfur oxidation can occur using oxygen as an electron acceptor ; closer to the vents, organisms must use alternative metabolic pathways or utilize another electron acceptor, such as nitrate. Some species of Thiomicrospira can utilize thiosulfate as an electron donor, producing elemental sulfur. Additionally, many marine microorganisms, such as Mariprofundus ferrooxydans , are capable of iron oxidation. Iron oxidation can be oxic, occurring in oxygen-rich parts of the ocean, or anoxic, requiring either an electron acceptor such as nitrate or light energy. In iron oxidation, Fe(II) is used as an electron donor ; conversely, iron reducers utilize Fe(III) as an electron acceptor. These two metabolisms form the basis of the iron redox cycle and may have contributed to banded iron formations . At another extreme, some marine extremophiles inhabit sea ice brine pockets where temperature is very low and salinity is very high. Organisms trapped within freezing sea ice must adapt to a rapid rise in salinity, up to 3 times higher than that of regular seawater, as well as the rapid return to regular seawater salinity when the ice melts. Most brine-pocket-dwelling organisms are photosynthetic, so these microenvironments can become hyperoxic, which can be toxic to their inhabitants. Thus, these extremophiles often produce high levels of antioxidants. [ 10 ] Seafloor spreading on mid-ocean ridges is a global-scale ion-exchange system. [ 11 ] Hydrothermal vents at spreading centers introduce various amounts of iron , sulfur , manganese , silicon and other elements into the ocean, some of which are recycled into the ocean crust . Helium-3 , an isotope that accompanies volcanism from the mantle, is emitted by hydrothermal vents and can be detected in plumes within the ocean. [ 12 ] Spreading rates on mid-ocean ridges vary between 10 and 200 mm/yr. Rapid spreading rates increase the reaction of basalt with seawater. The magnesium / calcium ratio will be lower because more magnesium ions are being removed from seawater and consumed by the rock, and more calcium ions are being removed from the rock and released to seawater.
Hydrothermal activity at the ridge crest is efficient in removing magnesium. [ 13 ] A lower Mg/Ca ratio favors the precipitation of low-Mg calcite polymorphs of calcium carbonate ( calcite seas ). [ 11 ] Slow spreading at mid-ocean ridges has the opposite effect and results in a higher Mg/Ca ratio, favoring the precipitation of aragonite and high-Mg calcite polymorphs of calcium carbonate ( aragonite seas ). [ 11 ] Experiments show that most modern high-Mg calcite organisms would have been low-Mg calcite in past calcite seas, [ 14 ] meaning that the Mg/Ca ratio in an organism's skeleton varies with the Mg/Ca ratio of the seawater in which it was grown. The mineralogy of reef-building and sediment-producing organisms is thus regulated by chemical reactions occurring along the mid-ocean ridge, the rate of which is controlled by the rate of sea-floor spreading. [ 13 ] [ 14 ] Marine pollution occurs when substances used or spread by humans, such as industrial , agricultural , and residential waste ; particles ; noise ; excess carbon dioxide ; or invasive organisms enter the ocean and cause harmful effects there. The majority of this waste (80%) comes from land-based activity, although marine transportation significantly contributes as well. [ 15 ] It is a combination of chemicals and trash, most of which comes from land sources and is washed or blown into the ocean. This pollution results in damage to the environment , to the health of all organisms, and to economic structures worldwide. [ 16 ] Since most inputs come from land, via rivers , sewage , or the atmosphere , continental shelves are more vulnerable to pollution. Air pollution is also a contributing factor, as it carries iron, carbonic acid, nitrogen , silicon, sulfur, pesticides , and dust particles into the ocean. [ 17 ] The pollution often comes from nonpoint sources such as agricultural runoff , wind-blown debris , and dust. These nonpoint sources are largely due to runoff that enters the ocean through rivers, but wind-blown debris and dust can also play a role, as these pollutants can settle into waterways and oceans. [ 18 ] Pathways of pollution include direct discharge, land runoff, ship pollution , bilge pollution , dredging (which can create dredge plumes ), atmospheric pollution and, potentially, deep sea mining . Increased carbon dioxide levels, mostly from burning fossil fuels , are changing ocean chemistry. Global warming and changes in salinity [ 19 ] have significant implications for the ecology of marine environments . [ 20 ] Ocean acidification is the ongoing decrease in the pH of the Earth's ocean . Between 1950 and 2020, the average pH of the ocean surface fell from approximately 8.15 to 8.05. [ 21 ] Carbon dioxide emissions from human activities are the primary cause of ocean acidification, with atmospheric carbon dioxide (CO 2 ) levels exceeding 422 ppm as of 2024. [ 22 ] CO 2 from the atmosphere is absorbed by the oceans. This chemical reaction produces carbonic acid ( H 2 CO 3 ), which dissociates into a bicarbonate ion ( HCO 3 − ) and a hydrogen ion ( H + ). The presence of free hydrogen ions ( H + ) lowers the pH of the ocean, increasing acidity (this does not mean that seawater is acidic yet; it is still alkaline , with a pH higher than 8). Marine calcifying organisms , such as mollusks and corals , are especially vulnerable because they rely on calcium carbonate to build shells and skeletons.
[ 23 ] Ocean deoxygenation is the reduction of the oxygen content in different parts of the ocean due to human activities. [ 28 ] [ 29 ] There are two areas where this occurs. Firstly, it occurs in coastal zones, where eutrophication has driven quite rapid declines in oxygen (over a few decades) to very low levels. [ 28 ] The areas affected by this type of ocean deoxygenation are known as dead zones . Secondly, ocean deoxygenation also occurs in the open ocean, where oxygen levels are now undergoing an ongoing decline. As a result, the naturally occurring low-oxygen areas (so-called oxygen minimum zones (OMZs)) are slowly expanding. [ 30 ] This expansion is happening as a consequence of human-caused climate change . [ 31 ] [ 32 ] The resulting decrease in the oxygen content of the oceans poses a threat to marine life , as well as to people who depend on marine life for nutrition or livelihood. [ 33 ] [ 34 ] [ 35 ] A decrease in ocean oxygen levels affects how productive the ocean is, how nutrients and carbon move around , and how marine habitats function. [ 36 ] [ 37 ] As the oceans become warmer, the loss of oxygen increases, partly because warmer temperatures increase ocean stratification ; the reason lies in the multiple connections between the density and solubility effects that result from warming. [ 38 ] [ 39 ] As a side effect, the availability of nutrients for marine life is reduced, adding further stress to marine organisms . Rising ocean temperatures also reduce the solubility of oxygen in the water, which can explain about 50% of the oxygen loss in the upper ocean (the uppermost 1,000 m). Warmer ocean water holds less oxygen and is more buoyant than cooler water. This leads to reduced mixing of oxygenated water near the surface with deeper water, which naturally contains less oxygen. Warmer water also raises the oxygen demand of living organisms; as a result, less oxygen is available for marine life. [ 40 ] Early inquiries about marine chemistry usually concerned the origin of salinity in the ocean, including work by Robert Boyle . Modern chemical oceanography began as a field with the 1872–1876 Challenger expedition , led by the British Royal Navy, which made the first systematic measurements of ocean chemistry. The chemical analysis of these samples, which provided the first systematic study of the composition of seawater, was conducted by John Murray and George Forchhammer, leading to a better understanding of constituents such as chloride, sodium, and sulfate in ocean waters. [ 44 ] The early 20th century saw significant advancements in marine chemistry, particularly with more accurate analytical techniques. Scientists like Martin Knudsen created the Knudsen bottle, an instrument used to collect water samples from different ocean depths. [ 45 ] Over three decades (the 1970s, 1980s, and 1990s), a comprehensive evaluation of advancements in chemical oceanography was compiled through a National Science Foundation initiative known as Futures of Ocean Chemistry in the United States (FOCUS). This project brought together numerous prominent chemical oceanographers, marine chemists, and geochemists to contribute to the FOCUS report. After World War II, advancements in geochemical techniques propelled marine chemistry into a new era. Researchers began using isotopic analysis to study ocean circulation and the carbon cycle.
Roger Revelle and Hans Suess pioneered using radiocarbon dating to investigate oceanic carbon reservoirs and their exchange with the atmosphere. [ 46 ] Since the 1970s, the development of highly sophisticated instruments and computational models has revolutionized marine chemistry. Scientists can now measure trace metals , organic compounds , and isotopic ratios with unprecedented precision. Studies of marine biogeochemical cycles, including the carbon , nitrogen , and sulfur cycles , have become an area of interest to understand global climate change . The use of remote sensing technology and global ocean observation programs, such as the International Geosphere-Biosphere Programme (IGBP), has provided large-scale data on ocean chemistry, allowing scientists to monitor ocean acidification , deoxygenation , and other critical issues affecting the marine environment. [ 47 ] Chemical oceanographers collect and measure chemicals in seawater, using the standard toolset of analytical chemistry as well as instruments like pH meters , electrical conductivity meters , fluorometers , and dissolved CO₂ meters. Most data are collected through shipboard measurements and from autonomous floats or buoys , but remote sensing is used as well. On an oceanographic research vessel , a CTD is used to measure electrical conductivity , temperature , and pressure , [ 48 ] and is often mounted on a rosette of Nansen bottles to collect seawater for analysis. [ 49 ] Sediments are commonly studied with a box corer or a sediment trap , and older sediments may be recovered by scientific drilling . Advanced analytical equipment such as mass spectrometers and chromatographs are applied to detect trace elements, isotopes, and organic compounds. This allows for precisely measuring nutrients, gases, and pollutants in marine environments. [ 50 ] In recent years, autonomous underwater vehicles (AUVs) and remote sensing technology have enabled continuous, large-scale ocean chemistry monitoring, particularly for tracking changes in ocean acidification and nutrient cycles. [ 51 ] The chemistry of the subsurface ocean of Europa may be Earthlike. [ 52 ] The subsurface ocean of Enceladus vents hydrogen and carbon dioxide to space. [ 53 ]
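Returning to the seafloor-spreading control on seawater chemistry discussed earlier in this article, the sketch below illustrates the commonly cited Mg/Ca rule of thumb separating "calcite seas" from "aragonite seas". The threshold ratio of about 2 and the modern-seawater value of roughly 5 are approximate figures assumed here for illustration; they are not taken from the sources cited in this article.

```python
# Illustrative sketch of the Mg/Ca control on carbonate mineralogy:
# below a molar Mg/Ca ratio of about 2, low-Mg calcite is favored
# ("calcite seas"); above it, aragonite and high-Mg calcite are favored
# ("aragonite seas"). Threshold and example values are approximate assumptions.

MG_CA_THRESHOLD = 2.0   # approximate molar ratio separating the two regimes

def favored_carbonate(mg_ca_ratio: float) -> str:
    """Classify seawater as a 'calcite sea' or an 'aragonite sea'."""
    return "calcite sea" if mg_ca_ratio < MG_CA_THRESHOLD else "aragonite sea"

# Fast seafloor spreading removes Mg from seawater, lowering Mg/Ca:
print(favored_carbonate(1.0))   # -> calcite sea
# Slow spreading leaves Mg/Ca high; modern seawater is roughly 5:
print(favored_carbonate(5.2))   # -> aragonite sea
```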
https://en.wikipedia.org/wiki/Ocean_chemistry
Ocean color is the branch of ocean optics that specifically studies the color of the water and the information that can be gained from looking at variations in color. The color of the ocean , while mainly blue, actually varies from blue to green or even yellow, brown or red in some cases. [ 1 ] This field of study developed alongside water remote sensing , so it is focused mainly on how color is measured by instruments (like the sensors on satellites and airplanes). Most of the ocean is blue in color, but in some places the ocean is blue-green, green, or even yellow to brown. [ 2 ] Blue ocean color is a result of several factors. First, water preferentially absorbs red light, which means that blue light remains and is reflected back out of the water. Red light is most easily absorbed and thus does not reach great depths, usually penetrating to less than 50 meters (164 ft). Blue light, in comparison, can penetrate up to 200 meters (656 ft). [ 3 ] Second, water molecules and very tiny particles in ocean water preferentially scatter blue light more than light of other colors. Blue light scattering by water and tiny particles happens even in the very clearest ocean water, [ 4 ] and is similar to blue light scattering in the sky . The main substances that affect the color of the ocean include dissolved organic matter , living phytoplankton with chlorophyll pigments, and non-living particles like marine snow and mineral sediments . [ 5 ] Chlorophyll can be measured by satellite observations and serves as a proxy for ocean productivity ( marine primary productivity ) in surface waters. In long-term composite satellite images, regions with high ocean productivity show up in yellow and green colors because they contain more (green) phytoplankton , whereas areas of low productivity show up in blue. Ocean color depends on how light interacts with the materials in the water. When light enters water, it can either be absorbed (light gets used up, the water gets "darker"), [ 6 ] scattered (light gets bounced around in different directions, the water remains "bright"), [ 7 ] or a combination of both. How underwater absorption and scattering vary spectrally, or across the spectrum of visible to infrared light energy (about 400 nm to 2000 nm wavelengths), determines what "color" the water will appear to a sensor. Most of the world's oceans appear blue because the light leaving the water is brightest (has the highest reflectance value) in the blue part of the visible light spectrum. Nearer to land, coastal waters often appear green. Green waters appear this way because algae and dissolved substances are absorbing light in the blue and red portions of the spectrum. The reason that open-ocean waters appear blue is that they are very clear, somewhat similar to pure water, and contain few materials, or only very tiny particles. Pure water absorbs red light with depth. [ 8 ] As red light is absorbed, blue light remains. Large quantities of pure water appear blue (even in a white-bottom swimming pool or white-painted bucket [ 9 ] ). The substances present in blue-colored open ocean waters are often very tiny particles that scatter light especially strongly at blue wavelengths. [ 10 ] Light scattering in blue water is similar to the scattering in the atmosphere which makes the sky appear blue (called Rayleigh scattering ). [ 11 ] Some blue-colored clear water lakes appear blue for these same reasons, like Lake Tahoe in the United States.
[ 12 ] Microscopic marine algae, called phytoplankton , absorb light in the blue and red wavelengths, due to their specific pigments like chlorophyll-a . Accordingly, with more and more phytoplankton in the water, the color of the water shifts toward the green part of the spectrum. [ 13 ] [ 14 ] The most widespread light-absorbing substance in the oceans is chlorophyll pigment, which phytoplankton use to produce carbon by photosynthesis . Chlorophyll, a green pigment, makes phytoplankton preferentially absorb the red and blue portions of the light spectrum . As blue and red light are absorbed, green light remains. Ocean regions with high concentrations of phytoplankton have shades of blue-to-green water depending on the amount and type of the phytoplankton. [ 15 ] [ 16 ] Green waters can also have a combination of phytoplankton, dissolved substances, and sediments, while still appearing green. This often happens in estuaries, coastal waters, and inland waters, which are called "optically complex" waters because multiple different substances are creating the green color seen by the sensor. Ocean water appears yellow or brown when large amounts of dissolved substances , sediments , or both types of material are present. Water can appear yellow or brown due to large amounts of dissolved substances. [ 17 ] [ 18 ] Dissolved matter or gelbstoff (meaning yellow substance) appears dark yet relatively transparent, much like tea. Dissolved substances absorb blue light more strongly than light of other colors. Colored dissolved organic matter (CDOM) often comes from decaying plant matter on land or in marshes , or in the open ocean from marine phytoplankton exuding dissolved substances from their cells. [ 19 ] In coastal areas, runoff from rivers and resuspension of sand and silt from the bottom add sediments to surface waters. More sediments can make the waters appear more green, yellow, or brown because sediment particles scatter light energy at all colors. [ 20 ] In large amounts, mineral particles like sediment cause the water to turn brownish if there is a massive sediment loading event, [ 21 ] appearing bright and opaque (not transparent), much like chocolate milk. Ocean water can appear red if there is a bloom of a specific kind of phytoplankton causing a discoloration of the sea surface. [ 22 ] These events are called " Red tides ." However, not all red tides are harmful, and they are only considered harmful algal blooms if the type of plankton involved contains hazardous toxins. [ 23 ] The red color comes from the pigments in the specific kinds of phytoplankton causing the bloom. Some examples are Karenia brevis in the Gulf of Mexico, [ 24 ] Alexandrium fundyense in the Gulf of Maine, [ 25 ] Margalefadinium polykroides and Alexandrium monilatum in the Chesapeake Bay, [ 26 ] and Mesodinium rubrum in Long Island Sound. [ 27 ] Ocean color remote sensing is also referred to as ocean color radiometry . Remote sensors on satellites, airplanes, and drones measure the spectrum of light energy coming from the water surface. The sensors used to measure light energy coming from the water are called radiometers (or spectrometers or spectroradiometers ). Some radiometers are used in the field at earth’s surface on ships or directly in the water. Other radiometers are designed specifically for airplanes or earth-orbiting satellite missions. Using radiometers, scientists measure the amount of light energy coming from the water at all colors of the electromagnetic spectrum from ultraviolet to near-infrared. 
[ 28 ] From this reflected spectrum of light energy, or the apparent "color," researchers derive other variables to understand the physics and biology of the oceans. Ocean color measurements can be used to infer important information such as phytoplankton biomass or concentrations of other living and non-living material. The patterns of algal blooms observed from satellites over time, over large regions up to the scale of the global ocean, have been instrumental in characterizing variability of marine ecosystems . Ocean color data is a key tool for research into how marine ecosystems respond to climate change and anthropogenic perturbations. [ 29 ] One of the biggest challenges for ocean color remote sensing is atmospheric correction , or removing the color signal of the atmospheric haze and clouds to focus on the color signal of the ocean water. [ 30 ] The signal from the water itself is less than 10% of the total signal of light leaving Earth's surface. [ 31 ] [ 32 ] People have written about the color of the ocean over many centuries, including the ancient Greek poet Homer's famous "wine-dark sea." Scientific measurements of the color of the ocean date back to the invention of the Secchi disk in Italy in the mid-1800s to study the transparency and clarity of the sea. [ 33 ] [ 34 ] Major advances were made in the 1960s and 1970s leading up to modern ocean color remote sensing campaigns. Nils Gunnar Jerlov 's book Optical Oceanography , published in 1968, [ 35 ] was a starting point for many researchers in the following decades. In 1970, George Clarke published the first evidence that chlorophyll concentration could be estimated based on green versus blue light coming from the water, as measured from an airplane over George's Bank . [ 36 ] In the 1970s, scientist Howard Gordon and his graduate student George Maul related imagery from the first Landsat mission to ocean color. [ 37 ] [ 38 ] Around the same time, a group of researchers, including John Arvesen, Dr. Ellen Weaver, and explorer Jacques Cousteau , began developing sensors to measure ocean productivity, beginning with an airborne sensor. [ 39 ] [ 40 ] Remote sensing of ocean color from space began in 1978 with the successful launch of NASA's Coastal Zone Color Scanner (CZCS) on the Nimbus-7 satellite. Although CZCS was an experimental mission intended to last only one year as a proof of concept, the sensor continued to generate a valuable time series of data over selected test sites until early 1986. Ten years passed before other sources of ocean color data became available with the launch of other sensors, in particular the Sea-viewing Wide Field-of-view Sensor ( SeaWiFS ) in 1997 on board the NASA SeaStar satellite. [ 41 ] Subsequent sensors have included NASA's Moderate-resolution Imaging Spectroradiometer (MODIS) on board the Aqua and Terra satellites, and ESA's MEdium Resolution Imaging Spectrometer ( MERIS ) onboard its environmental satellite Envisat . Several newer ocean-colour sensors have been launched, including the Indian Ocean Colour Monitor (OCM-2) on board ISRO 's Oceansat-2 satellite, the Korean Geostationary Ocean Color Imager (GOCI), the first ocean colour sensor launched on a geostationary satellite , and the Visible Infrared Imager Radiometer Suite ( VIIRS ) aboard NASA's Suomi NPP . More ocean colour sensors, including hyperspectral imagers, are planned over the next decade by various space agencies.
[ 42 ] Ocean Color Radiometry and its derived products are also seen as fundamental Essential Climate Variables as defined by the Global Climate Observing System . [ 43 ] Ocean color datasets provide the only global synoptic perspective of primary production in the oceans, giving insight into the role of the world's oceans in the global carbon cycle . Ocean color data helps researchers map information relevant to society, such as water quality , hazards to human health like harmful algal blooms , bathymetry , and primary production and habitat types affecting commercially important fisheries . [ 44 ] The most widely used piece of information from ocean color remote sensing is satellite-derived chlorophyll-a concentration. Researchers calculate satellite-derived chlorophyll-a concentration from space based on the central premise that the more phytoplankton there is in the water, the greener it is. [ 46 ] Phytoplankton are microscopic algae, marine primary producers that turn sunlight into chemical energy that supports the ocean food web. Like plants on land, phytoplankton create oxygen for other life on Earth. Ever since the launch of SeaWiFS in 1997, ocean color remote sensing has allowed scientists to map phytoplankton – and thus model primary production – throughout the world's oceans over many decades, [ 47 ] marking a major advance in knowledge of the Earth system. Beyond chlorophyll, a few examples of the ways that ocean color data are used include the following. Harmful algal blooms: researchers use ocean color data in conjunction with meteorological data and field sampling to forecast the development and movement of harmful algal blooms (commonly referred to as "red tides," although the two terms are not exactly the same). For example, MODIS data has been used to map Karenia brevis blooms in the Gulf of Mexico. [ 48 ] Suspended sediments: researchers use ocean color data to map the extent of river plumes and document wind-driven resuspension of sediments from the seafloor. For example, after hurricanes Katrina and Rita in the Gulf of Mexico, ocean color remote sensing was used to map the effects offshore. [ 49 ] Sensors used to measure ocean color are instruments that measure light at multiple wavelengths (multispectral) or across a continuous spectrum of colors (hyperspectral), usually spectroradiometers or optical radiometers. Ocean color sensors can either be mounted on satellites or airplanes, or used at Earth's surface. Many of these sensors are earth-orbiting satellite sensors. The same sensor can be mounted on multiple satellites to give more coverage over time (i.e. higher temporal resolution). For example, the MODIS sensor is mounted on both the Aqua and Terra satellites, and the VIIRS sensor is mounted on both the Suomi National Polar-Orbiting Partnership (Suomi-NPP or SNPP) and Joint Polar Satellite System (JPSS-1, now known as NOAA-20) satellites. Other sensors have been designed to measure ocean color from airplanes for airborne remote sensing. At Earth's surface, such as on research vessels , in the water using buoys , or on piers and towers, ocean color sensors take measurements that are then used to calibrate and validate satellite sensor data. Calibration and validation are two types of " ground-truthing " that are done independently. Calibration is the tuning of raw data from the sensor to match known values, such as the brightness of the moon or a known reflection value at Earth's surface.
Calibration, done throughout the lifetime of any sensor, is especially critical to the early part of any satellite mission when the sensor is developed, launched, and beginning its first raw data collection. Validation is the independent comparison of measurements made in situ with measurements made from a satellite or airborne sensor. [ 59 ] Satellite calibration and validation maintain the quality of ocean color satellite data. [ 60 ] [ 61 ] There are many kinds of in situ sensors, and the different types are often compared on dedicated field campaigns or lab experiments called "round robins." In situ data are archived in data libraries such as the SeaBASS data archive . Some examples of in situ sensors (or networks of many sensors) used to calibrate or validate satellite data are:
https://en.wikipedia.org/wiki/Ocean_color
From 1946 through 1993, thirteen countries used ocean disposal or ocean dumping as a method to dispose of nuclear/radioactive waste, amounting to approximately 200,000 tons and originating mainly from the medical, research and nuclear industries. [ 1 ] The waste materials included both liquids and solids housed in various containers, as well as reactor vessels, with and without spent or damaged nuclear fuel. [ 2 ] Since 1993, ocean disposal has been banned by international treaties ( London Convention (1972), Basel Convention, MARPOL 73/78 ). Only low-level radioactive waste (LLW) has been disposed of by ocean dumping, as dumping of high-level waste is strictly prohibited. Ocean floor disposal (or sub-seabed disposal)—a more deliberate method of delivering radioactive waste to the ocean floor and depositing it into the seabed—was studied by the United Kingdom and Sweden, but never implemented. [ 3 ] Data are from IAEA-TECDOC-1105, [ 2 ] pages 3–4. Data are from IAEA-TECDOC-1105. [ 2 ] Summary of pages 27–120: Disposal projects attempted to locate ideal dumping sites based on depth, stability and currents, and to treat, solidify and contain the waste. However, some dumping only involved diluting the waste with surface water, or used containers that imploded at depth. Even containers that survived the pressure could physically decay over time. The countries involved – listed in order of total contributions measured in TBq (1 TBq = 10¹² becquerels ) – were the Soviet Union, the United Kingdom, Switzerland, the United States, Belgium, France, the Netherlands, Japan, Sweden, Russia, New Zealand, Germany, Italy and South Korea. Together, they dumped a total of 85,100 TBq (85.1×10¹⁵ Bq) of radioactive waste at over 100 ocean sites, as measured in initial radioactivity at the time of dumping. For comparison: Data are from IAEA-TECDOC-1105. [ 2 ] : 6–7, 14 Data are from IAEA-TECDOC-1105. [ 2 ] : 27–120 There are three dump sites in the Pacific Ocean. Soviet dumping took place mainly off the east coast of Novaya Zemlya in the Kara Sea, with a relatively small proportion in the Barents Sea. Waste was dumped at 20 sites from 1959 to 1992, [ 11 ] a total of 222,000 m³ including reactors and spent fuel. Dumping occurred from 1948 to 1982. The UK accounts for 78% of dumping in the Atlantic (35,088 TBq), followed by Switzerland (4,419 TBq), the United States (2,924 TBq) and Belgium (2,120 TBq). Sunken Soviet nuclear submarines are not included; see List of sunken nuclear submarines. There were 137,000 tonnes dumped by eight European countries. The United States reported neither tonnage nor volume for 34,282 containers. The Soviet Union dumped 874 TBq, the US 554 TBq, Japan 606.2 tonnes, and New Zealand just over 1 TBq. A total of 751,000 m³ was dumped by Japan and the Soviet Union. The United States reported neither tonnage nor volume for 56,261 containers. Dumping of contaminated water at the 2011 Fukushima nuclear accident (estimated at 4,700–27,000 TBq) is not included. The Soviet Union dumped 749 TBq. Japan dumped 15.1 TBq south of its main island. South Korea dumped 45 tonnes (unknown radioactivity value). Data are from IAEA-TECDOC-1105. [ 2 ] : 7 Joint Russian-Norwegian expeditions (1992–94) collected samples from four dump sites. In the immediate vicinity of waste containers, elevated levels of radionuclides were found, but these had not contaminated the surrounding area. Dumping was undertaken by the UK, Switzerland, Belgium, France, the Netherlands, Sweden, Germany and Italy. The IAEA had been studying the sites since 1977.
The 1996 CRESP report suggested measurable leakage of radioactive material but concluded that the environmental impact was negligible. These sites are monitored by the United States Environmental Protection Agency and the US National Oceanic and Atmospheric Administration. So far, no excess levels of radionuclides have been found in samples (sea water, sediments) collected in the area, except for a sample taken at a location close to disposed packages, which contained elevated levels of isotopes of caesium and plutonium. The joint Japanese-Korean-Russian expedition (1994–95) concluded that contamination resulted mainly from global fallout. The USSR dumped waste in the Sea of Japan. Japan dumped waste south of its main island. The first international discussions of dumping radioactive waste into the ocean began in 1958 at the United Nations Law of the Sea Conference (UNCLOS). [ 12 ] The conference resulted in an agreement that all states should actively try to prevent radioactive waste pollution in the sea and follow any international guidelines regarding the issue. [ 12 ] The UNCLOS also instigated research into the issues radioactive waste dumping caused. [ 12 ] However, by the late 1960s to early 1970s, millions of tons of waste were still being dumped into the ocean annually. [ 13 ] By this time, governments began to realize the severe impacts of marine pollution, which led to one of the first international policies regarding ocean dumping in 1972 – the London Convention. [ 13 ] The London Convention's main goals were to effectively control sources of marine pollution and take the proper steps to prevent it, mainly by banning specific substances from being dumped in the ocean. [ 13 ] [ 14 ] The most recent version of the London Convention now bans all materials from marine dumping, except a thoroughly researched list of certain wastes. [ 13 ] [ 14 ] It also prohibits waste from being exported to other countries for disposal, as well as incinerating waste in the ocean. [ 13 ] While smaller organizations like the Nuclear Energy Agency of the European Organization for Economic Cooperation and Development have produced similar regulations, the London Convention remains the central international instrument for radioactive waste dumping policy. [ 12 ] Although there are many existing regulations that ban ocean dumping, it is still a prevalent issue. Different countries enforce the ban on radioactive waste dumping to different degrees, resulting in inconsistent implementation of the agreed-upon policies. [ 13 ] Because of these discrepancies, it is hard to judge the effectiveness of international regulations like the London Convention. [ 13 ] Ocean floor disposal is a method of sequestering radioactive waste in ocean floor sediment where it is unlikely to be disturbed either geologically or by human activity. Several methods of depositing material in the ocean floor have been proposed, including encasing it in concrete and, as the United Kingdom has previously done, dropping it in torpedoes designed to increase the depth of penetration into the ocean floor, or depositing containers in shafts drilled with techniques similar to those used in oil exploration. [ citation needed ] Ocean floor sediment is saturated with water, but since there is no water table per se and the water does not flow through it, the migration of dissolved waste is limited to the rate at which it can diffuse through dense clay.
This is slow enough that it could potentially take millions of years for waste to diffuse through several tens of meters of sediment, so that by the time it reached the open ocean it would be highly dilute and largely decayed. Large regions of the ocean floor are thought to be completely geologically inactive, and it is not expected that there will be extensive human activity there in the future. Water absorbs essentially all radiation within a few meters, provided the waste remains contained. One of the problems associated with this option is the difficulty of recovering the waste, if necessary, once it is emplaced deep in the ocean. Also, establishing an effective international structure to develop, regulate, and monitor a sub-seabed repository would be extremely difficult. Beyond technical and political considerations, the London Convention places prohibitions on disposing of radioactive materials at sea and does not make a distinction between waste dumped directly into the water and waste that is buried underneath the ocean's floor. This prohibition remained in force until 2018, after which the sub-seabed disposal option can be revisited at 25-year intervals. Depositing waste, in suitable containers, in subduction zones has also been suggested. Here, waste would be transported by plate tectonic movement into the Earth's mantle and rendered harmless through dilution and natural decay. Several objections have been raised to this method, including vulnerabilities during transport and disposal, as well as uncertainties in the actual tectonic processes. [ 15 ]
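The inventories above are reported as radioactivity at the time of dumping; because of radioactive decay, the activity remaining today is substantially lower and depends on the isotope mix, which varied between dumps. Below is a minimal sketch of the decay arithmetic, assuming a single hypothetical caesium-137 inventory and round numbers; the function name and figures are illustrative and not taken from the IAEA data.

```python
def remaining_activity(initial_tbq, half_life_years, elapsed_years):
    """Activity left after decay: A = A0 * 2 ** (-t / t_half)."""
    return initial_tbq * 2.0 ** (-elapsed_years / half_life_years)

# Hypothetical example: 1,000 TBq of caesium-137 (half-life about 30.1 years)
# dumped in 1980 and evaluated in 2020.
print(remaining_activity(1000.0, 30.1, 40.0))  # roughly 400 TBq remain
```

Short-lived isotopes decay away almost entirely on these timescales, while long-lived ones such as plutonium-239 remain essentially undiminished, which is why the isotope mix matters more than the headline TBq totals.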
https://en.wikipedia.org/wiki/Ocean_disposal_of_radioactive_waste
Ocean dynamics define and describe the flow of water within the oceans. Ocean temperature and motion fields can be separated into three distinct layers: mixed (surface) layer, upper ocean (above the thermocline ), and deep ocean. Ocean dynamics has traditionally been investigated by sampling from instruments in situ. [ 1 ] The mixed layer is nearest to the surface and can vary in thickness from 10 to 500 meters. This layer has properties such as temperature, salinity and dissolved oxygen which are uniform with depth reflecting a history of active turbulence (the atmosphere has an analogous planetary boundary layer ). Turbulence is high in the mixed layer. However, it becomes zero at the base of the mixed layer. Turbulence again increases below the base of the mixed layer due to shear instabilities. At extratropical latitudes this layer is deepest in late winter as a result of surface cooling and winter storms and quite shallow in summer. Its dynamics is governed by turbulent mixing as well as Ekman transport , exchanges with the overlying atmosphere, and horizontal advection . [ 2 ] [ unreliable source? ] The upper ocean, characterized by warm temperatures and active motion, varies in depth from 100 m or less in the tropics and eastern oceans to in excess of 800 meters in the western subtropical oceans. This layer exchanges properties such as heat and freshwater with the atmosphere on timescales of a few years. Below the mixed layer the upper ocean is generally governed by the hydrostatic and geostrophic relationships. [ 2 ] Exceptions include the deep tropics and coastal regions. The deep ocean is both cold and dark with generally weak velocities (although limited areas of the deep ocean are known to have significant recirculations). The deep ocean is supplied with water from the upper ocean in only a few limited geographical regions: the subpolar North Atlantic and several sinking regions around the Antarctic . Because of the weak supply of water to the deep ocean the average residence time of water in the deep ocean is measured in hundreds of years. In this layer as well the hydrostatic and geostrophic relationships are generally valid and mixing is generally quite weak. Ocean dynamics are governed by Newton's equations of motion expressed as the Navier-Stokes equations for a fluid element located at ( x , y , z ) on the surface of our rotating planet and moving at velocity (u,v,w) relative to that surface: Here "u" is zonal velocity, "v" is meridional velocity, "w" is vertical velocity, "p" is pressure, "ρ" is density, "T" is temperature, "S" is salinity, "g" is acceleration due to gravity, "τ" is wind stress, and "f" is the Coriolis parameter. "Q" is the heat input to the ocean, while "P-E" is the freshwater input to the ocean. Mixed layer dynamics are quite complicated; however, in some regions some simplifications are possible. The wind-driven horizontal transport in the mixed layer is approximately described by Ekman Layer dynamics in which vertical diffusion of momentum balances the Coriolis effect and wind stress. [ 3 ] This Ekman transport is superimposed on geostrophic flow associated with horizontal gradients of density. Horizontal convergences and divergences within the mixed layer due, for example, to Ekman transport convergence imposes a requirement that ocean below the mixed layer must move fluid particles vertically. But one of the implications of the geostrophic relationship is that the magnitude of horizontal motion must greatly exceed the magnitude of vertical motion. 
Thus the weak vertical velocities associated with Ekman transport convergence (measured in meters per day) cause horizontal motion with speeds of 10 centimeters per second or more. The mathematical relationship between vertical and horizontal velocities can be derived by expressing the idea of conservation of angular momentum for a fluid on a rotating sphere. This relationship (with a couple of additional approximations) is known to oceanographers as the Sverdrup relation . [ 3 ] Among its implications is the result that the horizontal convergence of Ekman transport observed to occur in the subtropical North Atlantic and Pacific forces southward flow throughout the interior of these two oceans. Western boundary currents (the Gulf Stream and Kuroshio ) exist in order to return water to higher latitude.
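The Ekman and Sverdrup relations discussed above can be illustrated with a rough numerical sketch. The wind stress curl value, the helper names, and the constants below are assumptions chosen only to show orders of magnitude; they are not drawn from any particular dataset.

```python
import math

RHO = 1025.0        # seawater density, kg/m^3
OMEGA = 7.2921e-5   # Earth's rotation rate, rad/s
R_EARTH = 6.371e6   # Earth's radius, m

def coriolis(lat_deg):
    """Coriolis parameter f = 2 * Omega * sin(latitude)."""
    return 2.0 * OMEGA * math.sin(math.radians(lat_deg))

def rossby_beta(lat_deg):
    """Meridional gradient of f: beta = 2 * Omega * cos(latitude) / R."""
    return 2.0 * OMEGA * math.cos(math.radians(lat_deg)) / R_EARTH

def ekman_vertical_velocity(curl_tau, lat_deg):
    """Vertical velocity at the base of the Ekman layer: w_E = curl(tau) / (rho * f)."""
    return curl_tau / (RHO * coriolis(lat_deg))

def sverdrup_transport(curl_tau, lat_deg):
    """Depth-integrated meridional volume transport per unit width (m^2/s):
    V = curl(tau) / (rho * beta), equivalently (f / beta) * w_E."""
    return curl_tau / (RHO * rossby_beta(lat_deg))

# Assumed subtropical wind stress curl of -1e-7 N/m^3 at 30 degrees North.
curl_tau = -1.0e-7
w_e = ekman_vertical_velocity(curl_tau, 30.0)
v = sverdrup_transport(curl_tau, 30.0)
print(f"Ekman pumping: {w_e * 86400:.2f} m/day (negative = downwelling)")
print(f"Sverdrup transport: {v:.1f} m^2/s per meter of longitude (negative = equatorward)")
```

Integrated across a basin several thousand kilometers wide, a few square meters per second per meter of longitude corresponds to interior transports of tens of sverdrups, which the narrow western boundary currents described above must return poleward.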
https://en.wikipedia.org/wiki/Ocean_dynamics
Ocean fertilization or ocean nourishment is an ocean-based technology for carbon dioxide removal , based on the purposeful introduction of plant nutrients to the upper ocean to increase marine food production and to remove carbon dioxide from the atmosphere. [ 1 ] [ 2 ] Ocean nutrient fertilization, for example iron fertilization , could stimulate photosynthesis in phytoplankton . The phytoplankton would convert the ocean's dissolved carbon dioxide into carbohydrate , some of which would sink into the deeper ocean before oxidizing. More than a dozen open-sea experiments confirmed that adding iron to the ocean increases photosynthesis in phytoplankton by up to 30 times. [ 3 ] This is one of the more well-researched carbon dioxide removal (CDR) approaches, and it is supported by climate restoration proponents. However, there is uncertainty about the duration of the effective oceanic carbon sequestration. While surface ocean acidity may decrease as a result of nutrient fertilization, when the sinking organic matter remineralizes, deep ocean acidity could increase. A 2021 report on CDR indicates that there is medium-high confidence that the technique could be efficient and scalable at low cost, with medium environmental risks. [ 4 ] The risks of nutrient fertilization can be monitored. Peter Fiekowsky and Carole Douglis write: "I consider iron fertilization an important item on our list of potential climate restoration solutions. Given the fact that iron fertilization is a natural process that has taken place on a massive scale for millions of years, it is likely that most of the side effects are familiar ones that pose no major threat." [ 5 ] A number of techniques, including fertilization by the micronutrient iron (called iron fertilization) or with nitrogen and phosphorus (both macronutrients), have been proposed. Some research in the early 2020s suggested that it could only permanently sequester a small amount of carbon. [ 6 ] More recent publications maintain that iron fertilization shows promise. A NOAA special report rated iron fertilization as having "a moderate potential for cost, scalability and how long carbon might be stored compared to other marine sequestration ideas". [ 7 ] The marine food chain is based on photosynthesis by marine phytoplankton that combine carbon with inorganic nutrients to produce organic matter. Production is limited by the availability of nutrients, most commonly nitrogen or iron . Numerous experiments [ 8 ] have demonstrated how iron fertilization can increase phytoplankton productivity. Nitrogen is a limiting nutrient over much of the ocean and can be supplied from various sources, including fixation by cyanobacteria . Carbon-to-iron ratios in phytoplankton are much larger than carbon-to-nitrogen or carbon-to-phosphorus ratios, so iron has the highest potential for sequestration per unit mass added. Oceanic carbon naturally cycles between the surface and the deep via two "pumps" of similar scale. The "solubility" pump is driven by ocean circulation and the solubility of CO₂ in seawater. The "biological" pump is driven by phytoplankton and the subsequent settling of detrital particles or dispersion of dissolved organic carbon. The former has increased as a result of rising atmospheric CO₂ concentration. This CO₂ sink is estimated to be approximately 2 GtC per year. [ 9 ] The global phytoplankton population fell about 40 percent between 1950 and 2008, or about 1 percent per year.
The most notable declines took place in polar waters and in the tropics. The decline is attributed to sea surface temperature increases. [ 10 ] A separate study found that diatoms, the largest type of phytoplankton, declined more than 1 percent per year from 1998 to 2012, particularly in the North Pacific, North Indian and Equatorial Indian oceans. The decline appears to reduce phytoplankton's ability to sequester carbon in the deep ocean. [ 11 ] Fertilization offers the prospect of both reducing the concentration of atmospheric greenhouse gases, with the aim of slowing climate change, and at the same time increasing fish stocks via increased primary production . The decline in phytoplankton also reduces the ocean's rate of carbon sequestration in the deep ocean. Each area of the ocean has a base sequestration rate on some timescale, e.g., annual. Fertilization must increase that rate, and must do so on a scale beyond the natural one; otherwise, fertilization changes the timing, but not the total amount, of carbon sequestered. However, accelerated timing may have beneficial effects for primary production separate from those for sequestration. [ 9 ] Biomass production inherently depletes all resources (save for sun and water). Either they must all be subject to fertilization, or sequestration will eventually be limited by the one most slowly replenished (after some number of cycles), unless the ultimate limiting resource is sunlight and/or surface area. Generally, phosphate is the ultimate limiting nutrient. As oceanic phosphorus is depleted (via sequestration), it would have to be included in the fertilization cocktail supplied from terrestrial sources. [ 9 ] Phytoplankton require a variety of nutrients. These include macronutrients such as nitrate and phosphate (in relatively high concentrations) and micronutrients such as iron and zinc (in much smaller quantities). Nutrient requirements vary across phylogenetic groups (e.g., diatoms require silicon) but may not individually limit total biomass production. Co-limitation (among multiple nutrients) may also mean that one nutrient can partially compensate for a shortage of another. Silicon does not affect total production, but can change the timing and community structure, with follow-on effects on remineralization times and the subsequent mesopelagic nutrient vertical distribution. [ 9 ] Low-nutrient, low-chlorophyll waters occupy the oceans' subtropical gyre systems, approximately 40 per cent of the surface, where wind-driven downwelling and a strong thermocline impede nutrient resupply from deeper water. Nitrogen fixation by cyanobacteria provides a major source of N. In effect, it ultimately prevents the ocean from losing the N required for photosynthesis. Phosphorus has no substantial supply route, making it the ultimate limiting macronutrient. The nutrient sources that fuel primary production are deep-water stocks and runoff or dust deposition. [ 9 ]
Ocean iron fertilization is an example of a geoengineering technique. [ 12 ] Iron fertilization [ 13 ] attempts to encourage phytoplankton growth , which removes carbon from the atmosphere for at least a period of time. [ 14 ] [ 15 ] This technique is controversial because there is limited understanding of its complete effects on the marine ecosystem , [ 16 ] including side effects and possibly large deviations from expected behavior. Such effects potentially include release of nitrogen oxides , [ 17 ] and disruption of the ocean's nutrient balance. [ 12 ] Controversy remains over the effectiveness of atmospheric CO₂ sequestration and ecological effects. [ 18 ] Since 1990, 13 major large-scale experiments have been carried out to evaluate the efficiency and possible consequences of iron fertilization in ocean waters. A study in 2017 considered the method unproven; the sequestering efficiency was low, sometimes no effect was seen, and the amount of iron needed to make even a small cut in carbon emissions would be in the millions of tons per year. [ 19 ] However, since 2021 interest in the potential of iron fertilization has been renewed, among other things by a white paper from NOAA, the US National Oceanic and Atmospheric Administration, which rated iron fertilization as having "moderate potential for cost, scalability and how long carbon might be stored compared to other marine sequestration ideas". In the very long term, phosphorus "is often considered to be the ultimate limiting macronutrient in marine ecosystems" [ 21 ] and has a slow natural cycle. Where phosphate is the limiting nutrient in the photic zone , addition of phosphate is expected to increase primary phytoplankton production. This technique can give 0.83 W/m² of globally averaged negative forcing, [ 22 ] which is sufficient to reverse the warming effect of about half the current levels of anthropogenic CO₂ emissions. One water-soluble fertilizer is diammonium phosphate (DAP), (NH₄)₂HPO₄, which as of 2008 had a market price of US$1,700 per tonne of phosphorus. Using that price and the C:P Redfield ratio of 106:1 produces a sequestration cost (excluding preparation and injection costs) of some $45/tonne of carbon (2008), substantially less than the trading price for carbon emissions. [ 9 ] Another technique proposes to fertilize the ocean with urea , a nitrogen-rich substance, to encourage phytoplankton growth. [ 23 ] [ 24 ] [ 25 ] Concentrations of macronutrients per area of ocean surface would be similar to large natural upwellings. Once exported from the surface, the carbon remains sequestered for a long time. [ 26 ] An Australian company, Ocean Nourishment Corporation (ONC), planned to inject hundreds of tonnes of urea into the ocean in order to boost the growth of CO₂-absorbing phytoplankton, as a way to combat climate change. In 2007, Sydney-based ONC completed an experiment involving one tonne of nitrogen in the Sulu Sea off the Philippines. [ 27 ] This project was criticized by many institutions, including the European Commission , [ 28 ] due to a lack of knowledge of side effects on the marine ecosystem. [ 29 ] Macronutrient nourishment can give 0.38 W/m² of globally averaged negative forcing, [ 22 ] which is sufficient to reverse the warming effect of around a quarter of current anthropogenic CO₂ emissions. The two dominant costs are manufacturing the nitrogen and nutrient delivery. [ 30 ]
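As a rough arithmetic check of the cost figure quoted above, the sketch below takes the US$1,700-per-tonne phosphorus price and the 106:1 molar C:P Redfield ratio from the text, uses standard molar masses, and, like the quoted figure, ignores preparation and delivery costs.

```python
# Molar masses in g/mol.
M_C, M_P = 12.011, 30.974
PRICE_PER_TONNE_P = 1700.0  # US$, 2008 figure quoted above

# With a 106:1 C:P molar ratio, one tonne of phosphorus supports the fixation
# of roughly 41 tonnes of carbon.
tonnes_c_per_tonne_p = 106 * M_C / M_P
cost_per_tonne_c = PRICE_PER_TONNE_P / tonnes_c_per_tonne_p

print(round(tonnes_c_per_tonne_p, 1))  # ~41.1
print(round(cost_per_tonne_c, 2))      # ~US$41 per tonne of carbon
```

This back-of-the-envelope value of roughly US$40 per tonne of carbon is of the same order as the ~$45 per tonne figure cited above; the small difference presumably comes from rounding or slightly different price assumptions.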
In waters with sufficient iron micronutrients but a deficit of nitrogen, urea fertilization is the better choice for algae growth. [ 31 ] Urea is the most used fertilizer in the world, due to its high nitrogen content, low cost and high reactivity towards water. [ 32 ] When exposed to ocean waters, urea is metabolized by phytoplankton via urease enzymes to produce ammonia. [ 33 ] The hydrolysis proceeds in two steps: CO(NH₂)₂ + H₂O → NH₃ + NH₂COOH (catalyzed by urease), and NH₂COOH + H₂O → NH₃ + H₂CO₃. The intermediate product, carbamate, also reacts with water, so a total of two ammonia molecules are produced. [ 34 ] Another cause of concern is the sheer amount of urea needed to capture the same amount of carbon as an equivalent iron fertilization. The nitrogen-to-iron ratio in a typical algal cell is 16:0.0001, meaning that for every iron atom added to the ocean a substantially larger amount of carbon is captured than by adding one atom of nitrogen. [ 35 ] Scientists also emphasize that adding urea to ocean waters could reduce oxygen content and result in a rise of toxic marine algae. [ 35 ] This could potentially have devastating effects on fish populations, which others argue would benefit from the urea fertilization (the argument being that fish populations would feed on healthy phytoplankton ). [ 36 ] Local wave power could be used to pump nutrient-rich water from hundred-metre-plus depths to the euphotic zone. However, deep-water concentrations of dissolved CO₂ could be returned to the atmosphere. [ 9 ] The supply of DIC in upwelled water is generally sufficient for the photosynthesis permitted by upwelled nutrients, without requiring atmospheric CO₂. Second-order effects include how the composition of upwelled water differs from that of settling particles. More nitrogen than carbon is remineralized from sinking organic material. Upwelling of this water allows more carbon to sink than is contained in the upwelled water, which would make room for at least some atmospheric CO₂ to be absorbed. The magnitude of this difference is unclear. No comprehensive studies have yet resolved this question. Preliminary calculations using upper-limit assumptions indicate a low value. 1,000 square kilometres (390 sq mi) could sequester 1 gigatonne/year. [ 9 ] Sequestration thus depends on the upward flux and the rate of lateral mixing of the surface water with the denser pumped water. [ 9 ] Volcanic ash adds nutrients to the surface ocean. This is most apparent in nutrient-limited areas. Research on the effects of anthropogenic and aeolian iron addition to the ocean surface suggests that nutrient-limited areas benefit most from a combination of nutrients provided by anthropogenic, aeolian and volcanic deposition. [ 37 ] Some oceanic areas are limited by more than one nutrient, so fertilization regimes that include all limiting nutrients are more likely to succeed. Volcanic ash supplies multiple nutrients to the system, but excess metal ions can be harmful. The positive impacts of volcanic ash deposition are potentially outweighed by its potential to do harm. [ citation needed ] Clear evidence documents that ash can make up as much as 45 percent by weight of some deep marine sediments. [ 38 ] [ 39 ] In the Pacific Ocean, estimates suggest that (on a millennial scale) the atmospheric deposition of air-fall volcanic ash was as high as the deposition of desert dust. [ 40 ]
This indicates the potential of volcanic ash as a significant iron source. In August 2008, the Kasatochi volcanic eruption in the Aleutian Islands, Alaska, deposited ash in the nutrient-limited northeast Pacific. This ash (including iron) resulted in one of the largest phytoplankton blooms observed in the subarctic. [ 41 ] [ 42 ] Fisheries scientists in Canada linked increased oceanic productivity from the volcanic iron to subsequent record returns of salmon in the Fraser River two years later. [ 43 ] The approach advocated by the Ocean Nourishment Corporation is to limit the distribution of added nutrients so that phytoplankton concentrations rise only to the values seen in upwelling regions (5–10 mg Chl/m³). Maintaining healthy phytoplankton levels is claimed to avoid harmful algal blooms and oxygen depletion. Chlorophyll concentration is an easily measured proxy for phytoplankton concentration. The company stated that values of approximately 4 mg Chl/m³ meet this requirement. [ 44 ] While manipulation of the land ecosystem in support of agriculture for the benefit of humans has long been accepted (despite its side effects), directly enhancing ocean productivity has not. Among the reasons are: According to Lisa Speer of the Natural Resources Defense Council, "There is a limited amount of money, of time, that we have to deal with this problem....The worst possible thing we could do for climate change technologies would be to invest in something that doesn't work and that has big impacts that we don't anticipate." [ 45 ] In 2009, Aaron Strong, Sallie Chisholm, Charles Miller and John Cullen opined in Nature that "...fertilizing the oceans with iron to stimulate phytoplankton blooms, absorb carbon dioxide from the atmosphere and export carbon to the deep sea – should be abandoned." [ 46 ] In Science , Warren Cornwall writes: "Tests have shown the iron does stimulate plankton growth. But key questions remain, says Dave Siegel, a marine scientist at the University of California, Santa Barbara, who served on the NASEM panel. How much of the absorbed carbon makes it to the deep ocean is uncertain", while Wil Burns, an ocean law expert at Northwestern University, declares that making iron fertilization a research priority is "barking mad", since "a recent survey of 13 past fertilization experiments found only one that increased carbon levels deep in the ocean." [ 47 ] Algal cell chemical composition is often assumed to follow a ratio of 106 carbon : 16 nitrogen : 1 phosphorus (the Redfield ratio [ 48 ] ) : 0.0001 iron. In other words, each atom of iron helps capture 1,060,000 atoms of carbon, while each nitrogen atom helps capture only 6. [ 49 ] In large areas of the ocean, such organic growth (and hence nitrogen fixation) is thought to be limited by the lack of iron rather than nitrogen, although direct measurements are difficult. [ 48 ] On the other hand, experimental iron fertilisation in HNLC regions has been supplied with excess iron which cannot be utilized before it is scavenged. Thus the organic material produced was much less than if the ratio of nutrients above had been achieved. Only a fraction of the available nitrogen (because of iron scavenging) is drawn down. In culture bottle studies of oligotrophic water, adding nitrogen and phosphorus can draw down considerably more nitrogen per dosing. Export production is only a small percentage of new primary production, and in the case of iron fertilization, iron scavenging means that regenerative production is small.
With macronutrient fertilisation, regenerative production is expected to be large and supportive of larger total export. Other losses can also reduce efficiency. [ 50 ] In addition, the efficiency of carbon sequestration through ocean fertilisation is heavily influenced by factors such as changes in stoichiometric ratios and gas exchange, which makes it difficult to accurately predict the effectiveness of ocean fertilisation projects. [ 51 ] Fertilisation also does not create a permanent carbon sink. "Ocean fertilisation options are only worthwhile if sustained on a millennial timescale and phosphorus addition may have greater long-term potential than iron or nitrogen fertilisation." [ 22 ] Beyond biological impacts, evidence suggests that plankton blooms can affect the physical properties of surface waters simply by absorbing light and heat from the sun. Watson added that if fertilization is done in shallow coastal waters, a dense layer of phytoplankton clouding the top 30 metres or so of the ocean could hinder corals, kelps or other deeper sea life from carrying out photosynthesis (Watson et al. 2008). In addition, as the bloom declines, nitrous oxide is released, potentially counteracting the effects of the carbon sequestration. [ 52 ] Toxic algal blooms are common in coastal areas. Fertilization could trigger such blooms. Chronic fertilization could risk the creation of dead zones , such as the one in the Gulf of Mexico . [ 53 ] Adding urea to the ocean can cause phytoplankton blooms that serve as a food source for zooplankton and, in turn, food for fish. This may increase fish catches. [ 54 ] However, if phytoplankton assemblages become dominated by cyanobacteria and dinoflagellates, which are considered poor-quality food for fish, the increase in fish quantity may not be large. [ 55 ] Some evidence links iron fertilization from volcanic eruptions to increased fisheries production. [ 43 ] [ 41 ] Other nutrients would be metabolized along with the added nutrient(s), reducing their presence in fertilized waters. [ 45 ] Krill populations have declined dramatically since whaling began. [ 53 ] Sperm whales transport iron from the deep ocean to the surface during prey consumption and defecation. Sperm whales have been shown to increase the levels of primary production and carbon export to the deep ocean by depositing iron-rich faeces into the surface waters of the Southern Ocean. The faeces cause phytoplankton to grow and take up carbon. The phytoplankton nourish krill. By reducing the abundance of sperm whales in the Southern Ocean, whaling has resulted in an extra 2 million tonnes of carbon remaining in the atmosphere each year. [ 56 ] Many locations, such as the Tubbataha Reef in the Sulu Sea , support high marine biodiversity . [ 57 ] Nitrogen or other nutrient loading in coral reef areas can lead to community shifts towards algal overgrowth of corals and ecosystem disruption, implying that fertilization must be restricted to areas in which vulnerable populations are not put at risk. [ 58 ] As phytoplankton descend the water column, they decay, consuming oxygen and producing the greenhouse gases methane and nitrous oxide . Plankton-rich surface waters could warm the surface layer, affecting circulation patterns. [ 45 ] Many phytoplankton species release dimethyl sulfide (DMS), which escapes into the atmosphere where it forms sulfate aerosols and encourages cloud formation, which could reduce warming.
[ 45 ] However, substantial increases in DMS could reduce global rainfall, according to global climate model simulations, while halving temperature increases as of 2100. [ 59 ] [ 60 ] In 2007 Working Group III of the United Nations Intergovernmental Panel on Climate Change examined ocean fertilization methods in its fourth assessment report and noted that the field-study estimates of the amount of carbon removed per ton of iron was probably over-estimated and that potential adverse effects had not been fully studied. [ 61 ] In June 2007 the London Dumping Convention issued a statement of concern noting 'the potential for large scale ocean iron fertilization to have negative impacts on the marine environment and human health', [ 62 ] but did not define 'large scale'. It is believed that the definition would include operations. [ citation needed ] In 2008, the London Convention/London Protocol noted in resolution LC-LP.1 that knowledge on the effectiveness and potential environmental impacts of ocean fertilization was insufficient to justify activities other than research. This non-binding resolution stated that fertilization, other than research, "should be considered as contrary to the aims of the Convention and Protocol and do not currently qualify for any exemption from the definition of dumping". [ 63 ] In May 2008, at the Convention on Biological Diversity , 191 nations called for a ban on ocean fertilization until scientists better understand the implications. [ 64 ] In August 2018, Germany banned the sale of ocean seeding as carbon sequestration system [ 65 ] while the matter was under discussion at EU and EASAC levels. [ 66 ] International law presents some dilemmas for ocean fertilization. [ citation needed ] The United Nations Framework Convention on Climate Change (UNFCCC 1992) has accepted mitigation actions. [ citation needed ] According to United Nations Convention on the Law of the Sea (LOSC 1982), all states are obliged to take all measures necessary to prevent, reduce and control pollution of the marine environment, to prohibit the transfer of damage or hazards from one area to another and to prohibit the transformation of one type pollution to another. How this relates to fertilization is undetermined. [ 67 ] Fertilization may create sulfate aerosols that reflect sunlight, modifying the Earth's albedo , creating a cooling effect that reduces some of the effects of climate change. Enhancing the natural sulfur cycle in the Southern Ocean [ 68 ] by fertilizing with iron in order to enhance dimethyl sulfide production and cloud reflectivity may achieve this. [ 69 ] [ 70 ]
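A recurring stoichiometric argument in this article is the extended Redfield-type ratio of 106 C : 16 N : 1 P : 0.0001 Fe. The sketch below simply converts that ratio into carbon captured per atom and per tonne of each added nutrient; it is an upper-bound illustration that assumes every added atom is used, whereas, as discussed above, scavenging and incomplete export make real-world sequestration far smaller.

```python
# Extended Redfield-type ratio quoted in the text: 106 C : 16 N : 1 P : 0.0001 Fe.
MOLAR_MASS = {"C": 12.011, "N": 14.007, "P": 30.974, "Fe": 55.845}  # g/mol
ATOMS_PER_106_C = {"N": 16.0, "P": 1.0, "Fe": 0.0001}

for nutrient, atoms in ATOMS_PER_106_C.items():
    c_atoms_per_atom = 106.0 / atoms
    c_tonnes_per_tonne = c_atoms_per_atom * MOLAR_MASS["C"] / MOLAR_MASS[nutrient]
    print(f"{nutrient}: {c_atoms_per_atom:,.1f} C atoms per atom added, "
          f"about {c_tonnes_per_tonne:,.0f} tonnes of C per tonne added")
```

The numbers reproduce the figures quoted earlier: each iron atom is associated with roughly a million carbon atoms and each nitrogen atom with only about six or seven, which is why iron has by far the highest sequestration potential per unit mass added.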
https://en.wikipedia.org/wiki/Ocean_fertilization
In oceanography , a gyre ( /ˈdʒaɪər/ ) is any large system of ocean surface currents moving in a circular fashion driven by wind movements. Gyres are caused by the Coriolis effect ; planetary vorticity , horizontal friction and vertical friction determine the circulatory patterns from the wind stress curl ( torque ). [ 1 ] Gyre can refer to any type of vortex in an atmosphere or a sea , [ 2 ] even one that is human-created, but it is most commonly used in terrestrial oceanography to refer to the major ocean systems. The largest ocean gyres are wind-driven, meaning that their locations and dynamics are controlled by the prevailing global wind patterns : easterlies at the tropics and westerlies at the midlatitudes. These wind patterns result in a wind stress curl that drives Ekman pumping in the subtropics (resulting in downwelling) and Ekman suction in subpolar regions (resulting in upwelling). [ 3 ] Ekman pumping results in an increased sea surface height at the center of the gyre and anticyclonic geostrophic currents in subtropical gyres. [ 3 ] Ekman suction results in a depressed sea surface height and cyclonic geostrophic currents in subpolar gyres. [ 3 ] Wind-driven ocean gyres are asymmetrical, with stronger flows on their western boundary and weaker flows throughout their interior. The weak interior flow that is typical over most of the gyre is a result of the conservation of potential vorticity . In the shallow water equations (applicable for basin-scale flow as the horizontal length scale is much greater than the vertical length scale), potential vorticity is a function of relative (local) vorticity ζ (zeta), planetary vorticity f, and the depth H, and is conserved with respect to the material derivative : [ 4 ] In the case of the subtropical ocean gyre, Ekman pumping results in water piling up in the center of the gyre, compressing water parcels. This results in a decrease in H, so by the conservation of potential vorticity the numerator ζ + f must also decrease. [ 5 ] It can be further simplified by realizing that, in basin-scale ocean gyres, the relative vorticity ζ is small, meaning that local changes in vorticity cannot account for the decrease in H. [ 5 ] Thus, the water parcel must change its planetary vorticity f accordingly. The only way to decrease the planetary vorticity is by moving the water parcel equatorward, so throughout the majority of subtropical gyres there is a weak equatorward flow. Harald Sverdrup quantified this phenomenon in his 1947 paper, "Wind Driven Currents in a Baroclinic Ocean", [ 6 ] in which the (depth-integrated) Sverdrup balance is defined as: [ 7 ] Here, V_g is the meridional mass transport (positive north), β is the Rossby parameter , ρ is the water density, and w_E is the vertical Ekman velocity due to wind stress curl (positive up). It can be clearly seen in this equation that for a negative Ekman velocity (e.g., Ekman pumping in subtropical gyres), the meridional mass transport (Sverdrup transport) is negative (south, equatorward) in the northern hemisphere (f > 0). Conversely, for a positive Ekman velocity (e.g., Ekman suction in subpolar gyres), Sverdrup transport is positive (north, poleward) in the northern hemisphere.
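The two relations referred to above (shallow-water potential vorticity conservation and the depth-integrated Sverdrup balance) can be written out explicitly. The following is a sketch in standard notation, consistent with the variables defined in the text; the exact form and sign conventions in the cited papers may differ.

```latex
% Conservation of shallow-water potential vorticity
% (zeta: relative vorticity, f: planetary vorticity, H: layer depth)
\frac{D}{Dt}\left(\frac{\zeta + f}{H}\right) = 0

% Depth-integrated Sverdrup balance: meridional mass transport V_g driven by
% the vertical Ekman velocity w_E (rho: density, beta: Rossby parameter)
V_g = \frac{\rho f}{\beta}\, w_E
```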
As the Sverdrup balance argues, subtropical ocean gyres have a weak equatorward flow and subpolar ocean gyres have a weak poleward flow over most of their area. However, there must be some return flow that goes against the Sverdrup transport in order to preserve mass balance. [ 9 ] In this respect, the Sverdrup solution is incomplete, as it has no mechanism with which to predict this return flow. [ 9 ] Contributions by both Henry Stommel and Walter Munk resolved this issue by showing that the return flow of gyres occurs through an intensified western boundary current. [ 10 ] [ 8 ] Stommel's solution relies on a frictional bottom boundary layer, which is not necessarily physical in a stratified ocean (currents do not always extend to the bottom). [ 5 ] Munk's solution instead relies on friction between the return flow and the sidewall of the basin. [ 5 ] This allows for two cases: one with the return flow on the western boundary (a western boundary current) and one with the return flow on the eastern boundary (an eastern boundary current). A qualitative argument for the presence of western boundary current solutions over eastern boundary current solutions can again be found through the conservation of potential vorticity. Considering again the case of a subtropical northern hemisphere gyre, the return flow must be northward. In order to move northward (an increase in planetary vorticity f), there must be a source of positive relative vorticity to the system. The relative vorticity in the shallow-water system is ζ = ∂v/∂x − ∂u/∂y. [ 11 ] Here v is again the meridional velocity and u is the zonal velocity. In the sense of a northward return flow, the zonal component is neglected and only the meridional velocity is important for relative vorticity. Thus, this solution requires that ∂v/∂x > 0 in order to increase the relative vorticity and have a valid northward return flow in the northern hemisphere subtropical gyre. [ 5 ] Due to friction at the boundary, the velocity of the flow must go to zero at the sidewall before reaching some maximum northward velocity within the boundary layer and decaying to the southward Sverdrup transport solution far away from the boundary. Thus, the condition that ∂v/∂x > 0 can only be satisfied through a western boundary frictional layer, as the eastern boundary frictional layer forces ∂v/∂x < 0. [ 5 ] One can make similar arguments for subtropical gyres in the southern hemisphere and for subpolar gyres in either hemisphere and see that the result remains the same: the return flow of an ocean gyre is always in the form of a western boundary current. The western boundary current must transport water of the same order as the interior Sverdrup transport in a much smaller area. This means western boundary currents are much stronger than interior currents, [ 5 ] a phenomenon called "western intensification". There are five major subtropical gyres across the world's oceans: the North Atlantic Gyre, the South Atlantic Gyre, the Indian Ocean Gyre, the North Pacific Gyre, and the South Pacific Gyre. All subtropical gyres are anticyclonic, meaning that in the northern hemisphere they rotate clockwise, while the gyres in the southern hemisphere rotate counterclockwise. This is due to the Coriolis force .
Subtropical gyres typically consist of four currents: a westward flowing equatorial current, a poleward flowing, narrow, and strong western boundary current, an eastward flowing current in the midlatitudes, and an equatorward flowing, weaker, and broader eastern boundary current. The North Atlantic Gyre is located in the northern hemisphere in the Atlantic Ocean, between the Intertropical Convergence Zone (ITCZ) in the south and Iceland in the north. The North Equatorial Current brings warm waters west towards the Caribbean and defines the southern edge of the North Atlantic Gyre. Once these waters reach the Caribbean they join the warm waters in the Gulf of Mexico and form the Gulf Stream , a western boundary current. This current then heads north and east towards Europe, forming the North Atlantic Current . The Canary Current flows south along the western coast of Europe and north Africa, completing the gyre circulation. The center of the gyre is the Sargasso Sea , which is characterized by the dense accumulation of Sargassum seaweed. [ 12 ] The South Atlantic Gyre is located in the southern hemisphere in the Atlantic Ocean, between the Intertropical Convergence Zone in the north and the Antarctic Circumpolar Current to the south. The South Equatorial Current brings water west towards South America, forming the northern boundary of the South Atlantic gyre. Here, the water moves south in the Brazil Current , the western boundary current of the South Atlantic Gyre. The Antarctic Circumpolar Current forms both the southern boundary of the gyre and the eastward component of the gyre circulation. Eventually, the water reaches the west coast of Africa, where it is brought north along the coast as a part of the eastern boundary Benguela Current , completing the gyre circulation. The Benguela Current experiences the Benguela Niño event, an Atlantic Ocean analogue to the Pacific Ocean's El Niño , and is correlated with a reduction in primary productivity in the Benguela upwelling zone. [ 13 ] The Indian Ocean Gyre , located in the Indian Ocean, is, like the South Atlantic Gyre, bordered by the Intertropical Convergence Zone in the north and the Antarctic Circumpolar Current to the south. The South Equatorial Current forms the northern boundary of the Indian Ocean Gyre as it flows west along the equator towards the east coast of Africa. At the coast of Africa, the South Equatorial Current is split by Madagascar into the Mozambique Current , flowing south through the Mozambique Channel, and the East Madagascar Current , flowing south along the east coast of Madagascar, both of which are western boundary currents. South of Madagascar the two currents join to form the Agulhas Current . [ 14 ] The Agulhas Current flows south until it joins the Antarctic Circumpolar Current, which flows east at the southern edge of the Indian Ocean Gyre. Due to the African continent not extending as far south as the Indian Ocean Gyre, some of the water in the Agulhas Current "leaks" into the Atlantic Ocean, with potentially important effects for global thermohaline circulation . [ 15 ] The gyre circulation is completed by the north flowing West Australian Current , which forms the eastern boundary of the gyre. The North Pacific Gyre , one of the largest ecosystems on Earth, [ 16 ] is bordered to the south by the Intertropical Convergence Zone and extending north to roughly 50°N. At the southern boundary of the North Pacific Gyre, the North Equatorial Current flows west along the equator towards southeast Asia. 
The Kuroshio Current is the western boundary current of the North Pacific Gyre, flowing northeast along the coast of Japan. At roughly 50°N, the flow turns east and becomes the North Pacific Current . The North Pacific Current flows east, eventually bifurcating near the west coast of North America into the northward flowing Alaska Current and the southward flowing California Current . [ 17 ] The Alaska Current is the eastern boundary current of the subpolar Alaska Gyre, [ 18 ] while the California Current is the eastern boundary current that completes the North Pacific Gyre circulation. Within the North Pacific Gyre is the Great Pacific Garbage Patch , an area of increased plastic waste concentration. [ 19 ] The South Pacific Gyre , like its northern counterpart, is one of the largest ecosystems on Earth with an area that accounts for around 10% of the global ocean surface area. [ 20 ] Within this massive area is Point Nemo , the location on Earth that is farthest away from all continental landmass (2,688 km away from the closest land). [ 21 ] The remoteness of this gyre complicates sampling, causing this gyre to be historically under sampled in oceanographic datasets. [ 22 ] [ 23 ] At the northern boundary of the South Pacific Gyre, the South Equatorial Current flows west towards southeast Asia and Australia. There, it turns south as it flows in the East Australian Current , a western boundary current. The Antarctic Circumpolar Current again returns the water to the east. The flow turns north along the western coast of South America in the Humboldt Current , the eastern boundary current that completes the South Pacific Gyre circulation. Like the North Pacific Gyre, the South Pacific Gyre has an elevated concentration of plastic waste near the center, termed the South Pacific garbage patch . Unlike the North Pacific garbage patch which was first described in 1988, [ 19 ] the South Pacific garbage patch was discovered much more recently in 2016 [ 24 ] (a testament to the extreme remoteness of the South Pacific Gyre). Subpolar gyres form at high latitudes (around 60° ). Circulation of surface wind and ocean water is cyclonic, counterclockwise in the northern hemisphere and clockwise in the southern hemisphere, around a low-pressure area , such as the persistent Aleutian Low and the Icelandic Low . The wind stress curl in this region drives the Ekman suction, which creates an upwelling of nutrient-rich water from the lower depths. [ 25 ] Subpolar circulation in the southern hemisphere is dominated by the Antarctic Circumpolar Current , due to the lack of large landmasses breaking up the Southern Ocean . There are minor gyres in the Weddell Sea and the Ross Sea , the Weddell Gyre and Ross Gyre , which circulate in a clockwise direction. The North Atlantic Subpolar Gyre, located in the North Atlantic Ocean, is characterized by a counterclockwise rotation of surface waters. It plays a crucial role in the global oceanic conveyor belt system, influencing climate and marine ecosystems. [ 26 ] The gyre is driven by the convergence of warm, salty waters from the south and cold, fresher waters from the north. As these waters meet, the warm, dense water sinks beneath the lighter, colder water, initiating a complex circulation pattern. The North Atlantic Subpolar Gyre has significant implications for climate regulation, as it helps redistribute heat and nutrients throughout the North Atlantic, influencing weather patterns and supporting diverse marine life. 
Additionally, changes in the gyre's strength and circulation can impact regional climate variability and may be influenced by broader climate change trends. [ 26 ] The Atlantic Meridional Overturning Circulation (AMOC) is a key component of the global climate system through its transport of heat and freshwater. [ 26 ] The North Atlantic Subpolar Gyre is in a region where the AMOC is actively developed and shaped through mixing and water mass transformation. It is a region where large amounts of heat transported northward by the ocean are released into the atmosphere, thereby modifying the climate of northwest Europe. [ 27 ] The North Atlantic Subpolar Gyre has a complex topography with a series of basins in which the large-scale circulation is characterized by cyclonic boundary currents and interior recirculation. The North Atlantic Current develops out of the Gulf Stream extension and turns eastward, crossing the Atlantic in a wide band between about 45°N and 55°N, creating the southern border of the North Atlantic Subpolar Gyre. There are several branches of the North Atlantic Current, and they flow into an eastern intergyral region in the Bay of Biscay , the Rockall Trough , the Iceland Basin, and the Irminger Sea . Part of the North Atlantic Current flows into the Norwegian Sea, and some recirculates within the boundary currents of the subpolar gyre. [ 26 ] The Ross Gyre is located in the Southern Ocean surrounding Antarctica , just outside of the Ross Sea. This gyre is characterized by a clockwise rotation of surface waters, driven by the combined influence of wind, the Earth's rotation, and the shape of the seafloor. The gyre plays a crucial role in the transport of heat, nutrients, and marine life in the Southern Ocean, affecting the distribution of sea ice and influencing regional climate patterns. The Ross Sea , Antarctica , is a region where the mixing of distinct water masses and complex interactions with the cryosphere lead to the production and export of dense water, with global-scale impacts. [ 28 ] The gyre circulation also controls the proximity of the warm waters of the Antarctic Circumpolar Current to the Ross Sea continental shelf, where they may drive ice shelf melting and contribute to sea level rise. [ 29 ] A deepening of sea level pressure over the Southeast Pacific/Amundsen-Bellingshausen Seas generates a cyclonic circulation cell that reduces sea surface heights north of the Ross Gyre via Ekman suction. The relative reduction of sea surface heights to the north facilitates a northeastward expansion of the outer boundary of the Ross Gyre. Further, the gyre is intensified by a westward ocean stress anomaly over its southern boundary. The ensuing southward Ekman transport anomaly raises sea surface heights over the continental shelf and accelerates the westward throughflow by increasing the cross-slope pressure gradient. The sea level pressure center may have a greater impact on the Ross Gyre transport or on the throughflow, depending on its location and strength. The gyre has significant effects on Southern Ocean interactions between waters of the Antarctic margin, the Antarctic Circumpolar Current, and the intervening gyres, which, together with a strong seasonal sea ice cover, play a major role in the climate system. [ 30 ] The Ross Sea is the southernmost sea on Earth and hosts the United States' McMurdo Station and Italy's Zucchelli Station .
Even though this gyre is located near two of the most prominent research stations in the world for Antarctic study, the Ross Gyre remains one of the least sampled gyres in the world. [ 31 ] The Weddell Gyre is located in the Southern Ocean surrounding Antarctica, just outside of the Weddell Sea. It is characterized by a clockwise rotation of surface waters, influenced by the combined effects of winds, the Earth's rotation, and the seafloor's topography. [ 32 ] Like the Ross Gyre, the Weddell Gyre plays a critical role in the movement of heat, nutrients, and marine life in the Southern Ocean. Insights into the behavior and variability of the Weddell Gyre are crucial for comprehending the interaction between ocean processes in the southern hemisphere and their implications for the global climate system. [ 32 ] This gyre is formed by interactions between the Antarctic Circumpolar Current and the Antarctic Continental Shelf . [ 33 ] The Weddell Gyre (WG) is one of the main oceanographic features of the Southern Ocean south of the Antarctic Circumpolar Current, and it plays an influential role in global ocean circulation as well as gas exchange with the atmosphere. [ 33 ] The WG is situated in the Atlantic sector of the Southern Ocean, south of 55–60°S and roughly between 60°W and 30°E (Deacon, 1979). It stretches over the Weddell abyssal plain, where the Weddell Sea is situated, and extends east into the Enderby abyssal plain. [ 33 ] The anticyclonic Beaufort Gyre is the dominant circulation of the Canada Basin and the largest freshwater reservoir in the Arctic Ocean's western and northern sectors. [ 34 ] The gyre is characterized by a large-scale, quasi-permanent, clockwise rotation of surface waters within the Beaufort Sea . This gyre functions as a critical mechanism for the transport of heat, nutrients, and sea ice within the Arctic region, thus influencing the physical and biological characteristics of the marine environment. Negative wind stress curl over the region, mediated by the sea ice pack, leads to Ekman pumping, downwelling of isopycnal surfaces, and storage of ~20,000 km³ of freshwater in the upper few hundred meters of the ocean. [ 35 ] The gyre gains energy from winds in the south and loses energy in the north over a mean annual cycle. The strong atmospheric circulation in the autumn, combined with significant areas of open water, demonstrates the direct effect that wind stress has on the surface geostrophic currents. [ 36 ] The Beaufort Gyre and the Transpolar Drift are interconnected through their roles in transporting sea ice across the Arctic Ocean. Their influence on the distribution of freshwater has broad implications for global sea level rise and climate dynamics. Depending on their location around the world, gyres can be regions of high biological productivity or low productivity. Each gyre has a unique ecological profile, but gyres can be grouped by region according to their dominant characteristics. Generally, productivity is greater for cyclonic gyres (e.g., subpolar gyres) that drive upwelling through Ekman suction and lesser for anticyclonic gyres (e.g., subtropical gyres) that drive downwelling through Ekman pumping, but this can differ between seasons and regions. [ 37 ] Subtropical gyres are sometimes described as "ocean deserts" or "biological deserts", in reference to arid land deserts where little life exists. [ 38 ]
[ 38 ] Due to their oligotrophic characteristics, warm subtropical gyres have some of the least productive waters per unit surface area in the ocean. [ 37 ] The downwelling of water that occurs in subtropical gyres carries nutrients deeper into the ocean, removing them from surface waters. Organic particles can also be removed from surface waters through gravitational sinking, where a particle is too heavy to remain suspended in the water column. [ 39 ] However, since subtropical gyres cover 60% of the ocean surface, their relatively low production per unit area is offset by the massive area of the Earth they cover. [ 40 ] This means that, despite being areas of relatively low productivity and low nutrients, they play a large role in the overall amount of ocean production. [ 41 ] [ 42 ] In contrast to subtropical gyres, subpolar gyres can have high biological activity due to Ekman suction upwelling driven by wind stress curl. [ 43 ] Subpolar gyres in the North Atlantic have a "bloom and crash" pattern following seasonal and storm patterns. The highest productivity in the North Atlantic occurs in boreal spring, when there are long days and high levels of nutrients. This differs from the subpolar North Pacific, where almost no phytoplankton bloom occurs and patterns of respiration are more consistent through time than in the North Atlantic. [ 37 ] Primary production in the ocean is heavily dependent on the presence of nutrients and the availability of sunlight. Here, nutrients refers to nitrogen (such as nitrate), phosphate, and silicate, all important nutrients in the biogeochemical processes that take place in the ocean. [ 44 ] A commonly accepted method for relating different nutrient availabilities to each other in order to describe chemical processes is the Redfield, Ketchum, and Richards (RKR) equation. This equation describes the process of photosynthesis and respiration and the ratios of the nutrients involved. [ 45 ] The RKR equation for photosynthesis (read left to right) and respiration (read right to left) is commonly written as: 106 CO2 + 16 HNO3 + H3PO4 + 122 H2O ⇌ (CH2O)106(NH3)16(H3PO4) + 138 O2. With the correct ratios of nutrients on the left side of the RKR equation and sunlight, photosynthesis takes place to produce plankton (primary production) and oxygen. Typically, the limiting nutrients to production are nitrogen and phosphorus, with nitrogen being the most limiting. [ 45 ] The lack of nutrients in the surface waters of subtropical gyres is related to the strong downwelling and sinking of particles that occurs in these areas, as mentioned earlier. However, nutrients are still present in these gyres. These nutrients can come not only from vertical transport, but also from lateral transport across gyre fronts. This lateral transport helps make up for the large loss of nutrients due to downwelling and particle sinking. [ 46 ] However, the major source of nitrate in the nitrate-limited subtropical gyres is a result of biological, not physical, factors. Nitrogen in subtropical gyres is produced primarily by nitrogen-fixing bacteria, [ 47 ] which are common throughout most of the oligotrophic waters of subtropical gyres. [ 48 ] These bacteria transform atmospheric nitrogen into bioavailable forms. The Alaskan Gyre and Western Subarctic Gyre are iron-limited environments rather than nitrogen- or phosphorus-limited ones. This region relies on dust blowing off the state of Alaska and other nearby landmasses to supply iron. [ 49 ] Because it is limited by iron instead of nitrogen or phosphorus, it is known as a high-nutrient, low-chlorophyll region.
[ 50 ] [ 51 ] Iron limitation in high-nutrient, low-chlorophyll regions results in water that is rich in other nutrients because they have not been removed by the small populations of plankton that live there. [ 52 ] The North Atlantic Subpolar Gyre is an important part of the ocean's carbon dioxide drawdown mechanism. The photosynthesis of phytoplankton communities in this area seasonally depletes surface waters of carbon dioxide, removing it through primary production. [ 53 ] This primary production occurs seasonally, with the highest amounts happening in summer. [ 54 ] Generally, spring is an important time for photosynthesis as the light limitation imposed during winter is lifted and there are high levels of nutrients available. However, in the North Atlantic Subpolar Gyre, spring productivity is low in comparison to expected levels. It is hypothesized that this low productivity is because phytoplankton are less efficiently using light than they do in the summer months. [ 54 ] Ocean gyres typically contain 5–6 trophic levels . The limiting factor for the number of trophic levels is the size of the phytoplankton , which are generally small in nutrient limited gyres. In low oxygen zones, oligotrophs are a large percentage of the phytoplankton. [ 55 ] At the intermediate level, small fishes and squid (especially ommastrephidae ) dominate the nektonic biomass. They are important for the transport of energy from low trophic levels to high trophic levels. In some gyres, ommastrephidae are a major part of many animals' diets and can support the existence of large marine life . [ 37 ] Indigenous Traditional Ecological Knowledge recognizes that Indigenous people, as the original caretakers, hold unique relationships with the land and waters. These relationships make TEK difficult to define, as Traditional Knowledge means something different to each person, each community, and each caretaker. The United Nations Declaration on the Rights of Indigenous Peoples begins by reminding readers that “respect for Indigenous knowledge, cultures and traditional practices contributes to sustainable and equitable development and proper management of the environment” [ 56 ] Attempts to collect and store this knowledge have been made over the past twenty years. Conglomerates such as The Indigenous Knowledge Social Network (SIKU) https://siku.org/ , the Igliniit project, [ 57 ] and the Wales Inupiaq Sea Ice Directory have made strides in the inclusion and documentation of indigenous people's thoughts on global climate, oceanographic, and social trends. One example involves ancient Polynesians and how they discovered and then travelled throughout the Pacific Ocean from modern day Polynesia to Hawaii and New Zealand. Known as wayfinding , navigators would use the stars, winds, and ocean currents to know where they were on the ocean and where they were headed. [ 58 ] These navigators were intimately familiar with Pacific currents that create the North Pacific gyre and this way of navigating continues today. [ 59 ] Another example involves the Māori people who came from Polynesia and are an indigenous group in New Zealand. Their way of life and culture has strong connections to the ocean. The Māori believe that the sea is the source of all life and is an energy, called Tangaroa. This energy could manifest in many different ways, like strong ocean currents, calm seas, or turbulent storms. 
[ 60 ] The Māori have a rich oral history of navigation within the Southern Ocean and Antarctic Ocean and a deep understanding of their ice and ocean patterns. A current research project is aimed at consolidating these oral histories. [ 61 ] Efforts are being made to integrate TEK with Western science in marine and ocean research in New Zealand. [ 62 ] Additional research efforts aim to collate indigenous oral histories and incorporate indigenous knowledge into climate change adaptation practices in New Zealand that will directly affect the Māori and other indigenous communities. [ 63 ] Ocean circulation redistributes heat and freshwater, and therefore helps determine regional climate. For example, the western branches of the subtropical gyres flow from lower latitudes towards higher latitudes, bringing relatively warm and moist air to the adjacent land and contributing to a mild and wet climate (e.g., East China, Japan). In contrast, the eastern boundary currents of the subtropical gyres stream from higher latitudes towards lower latitudes, corresponding to a relatively cold and dry climate (e.g., California). Currently, the cores of the subtropical gyres lie near 30° latitude in both hemispheres, but their positions have not always been there. Satellite observations of sea surface height and sea surface temperature suggest that the world's major ocean gyres have been slowly moving towards higher latitudes over the past few decades. This trend agrees with climate model predictions under anthropogenic global warming. [ 64 ] Paleoclimate reconstructions also suggest that during past cold climate intervals, i.e., ice ages, some of the western boundary currents (the western branches of the subtropical ocean gyres) were closer to the equator than their modern positions. [ 65 ] [ 66 ] This evidence implies that global warming is very likely to push the large-scale ocean gyres towards higher latitudes. [ 67 ] [ 68 ] As the ocean absorbs more carbon dioxide, it becomes more acidic. [ 69 ] This pH change poses a threat to marine organisms, especially those that build calcium carbonate shells and skeletons, including planktonic foraminifera, pteropods, and coccolithophores. [ 70 ] Subtropical gyres, or “ocean deserts”, are home to critical primary producers that form the base of the food web. Acidification impairs both the growth and reproduction of planktonic organisms, leading to reduced primary productivity. [ 69 ] Because these organisms sit at the base of the food chain, this impairment affects the many larger marine species that rely on primary producers for food. [ 70 ] Acidification also alters nutrient cycling by affecting multiple microbial processes. Nitrogen fixation is a crucial process in nutrient-poor subtropical gyres, and it may be less efficient in lower-pH waters. [ 70 ] This would further limit primary production and worsen oligotrophic conditions in these gyre regions. Overfishing is a major anthropogenic pressure on marine ecosystems associated with ocean gyres. Many large fishing fleets target the surroundings of gyres, where upwelling and nutrient convergence zones lead to higher biological productivity. Intense fishing pressure has led to population declines and collapses of certain species. This impacts not only targeted fish stocks, but the entire marine food web. [ 71 ] By removing top predators and key forage species, overfishing disrupts trophic dynamics. [ 71 ] Species usually kept in check by predators can proliferate unnaturally fast, leading to trophic cascades.
In subtropical regions, where biological productivity is already limited by reduced nutrient availability, overfishing has more drastic effects. Ecosystems can easily become dominated by jellyfish or less valuable species. [ 72 ] The emerging threat of deep-sea mining targets polymetallic nodules, cobalt-rich crusts, and massive sulfide deposits in abyssal plains lying within ocean gyres. [ 73 ] Mining activities disturb the seafloor, creating sediment plumes that can spread over hundreds of kilometers. [ 74 ] These plumes can smother organisms and disrupt ecological processes that have evolved over millennia in the stable conditions of the deep ocean. Noise, light, and chemical pollution generated by mining could have cascading effects in the water column, affecting the surface and midwater ecosystems in gyres. [ 73 ] Deep-sea ecosystems recover at extremely slow rates, if at all, meaning the long-term impacts of mining are predicted to be significant and largely irreversible. [ 74 ] A garbage patch is a gyre of marine debris particles caused by the effects of ocean currents and increasing plastic pollution by human populations. These human-caused collections of plastic and other debris are responsible for ecosystem and environmental problems that affect marine life, contaminate oceans with toxic chemicals, and contribute to greenhouse gas emissions. Once waterborne, marine debris becomes mobile. Flotsam can be blown by the wind, or follow the flow of ocean currents, often ending up in the middle of oceanic gyres where currents are weakest. Within garbage patches, the waste is not compact, and although most of it is near the surface of the ocean, it can be found down to more than 30 metres (100 ft) deep in the water. [ 75 ] Patches contain plastics and debris in a range of sizes, from microplastics and small-scale plastic pellet pollution to large objects such as fishing nets and consumer goods and appliances lost in floods or from ships. Garbage patches grow because of widespread loss of plastic from human trash collection systems. The United Nations Environment Programme estimated that "for every square mile of ocean" there are about "46,000 pieces of plastic". [ 76 ] The 10 largest emitters of oceanic plastic pollution worldwide are, from the most to the least, China, Indonesia, the Philippines, Vietnam, Sri Lanka, Thailand, Egypt, Malaysia, Nigeria, and Bangladesh, [ 77 ] largely through the rivers Yangtze, Indus, Yellow, Hai, Nile, Ganges, Pearl, Amur, Niger, and Mekong, which together account for "90 percent of all the plastic that reaches the world's oceans". [ 78 ] [ 79 ] Asia was the leading source of mismanaged plastic waste, with China alone accounting for 2.4 million metric tons. [ 80 ]
https://en.wikipedia.org/wiki/Ocean_gyre
Ocean heat content (OHC) or ocean heat uptake (OHU) is the energy absorbed and stored by oceans , and is thus an important indicator of global warming . [ 2 ] Ocean heat content is calculated by measuring ocean temperature at many different locations and depths, and integrating the areal density of a change in enthalpic energy over an ocean basin or entire ocean. [ 3 ] Between 1971 and 2018, a steady upward trend [ 4 ] in ocean heat content accounted for over 90% of Earth's excess energy from global warming. [ 5 ] [ 6 ] Scientists estimate a 1961–2022 warming trend of 0.43 ± 0.08 W/m², accelerating at about 0.15 ± 0.04 W/m² per decade. [ 7 ] By 2020, about one third of the added energy had propagated to depths below 700 meters. [ 8 ] [ 9 ] In 2024, the world's oceans were again the hottest in the historical record and exceeded the previous 2023 record maximum. [ 10 ] The five highest ocean heat observations to a depth of 2000 meters all occurred in the period 2020–2024. [ 4 ] The main driver of this increase has been human-caused greenhouse gas emissions . [ 11 ] : 1228 Ocean water can absorb a lot of solar energy because water has far greater heat capacity than atmospheric gases. [ 8 ] As a result, the top few meters of the ocean contain more energy than the entire Earth's atmosphere . [ 12 ] Since before 1960, research vessels and stations have sampled sea surface temperatures and temperatures at greater depth all over the world. Since 2000, an expanding network of nearly 4000 Argo robotic floats has measured temperature anomalies, or the change in ocean heat content. With improving observation in recent decades, the heat content of the upper ocean has been analyzed to have increased at an accelerating rate. [ 13 ] [ 14 ] [ 15 ] The net rate of change in the top 2000 meters from 2003 to 2018 was +0.58 ± 0.08 W/m 2 (or annual mean energy gain of 9.3 zettajoules ). It is difficult to measure temperatures accurately over long periods while at the same time covering enough areas and depths. This explains the uncertainty in the figures. [ 2 ] Changes in ocean temperature greatly affect ecosystems in oceans and on land. For example, there are multiple impacts on coastal ecosystems and communities relying on their ecosystem services . Direct effects include variations in sea level and sea ice , changes to the intensity of the water cycle , and the migration of marine life. [ 16 ] Ocean heat content is a term used in physical oceanography to describe a type of thermodynamic potential energy that is stored in the ocean. It is defined in coordination with the equation of state of seawater. TEOS-10 is an international standard approved in 2010 by the Intergovernmental Oceanographic Commission . [ 17 ] Calculation of ocean heat content follows that of enthalpy referenced to the ocean surface, also called potential enthalpy . OHC changes are thus made more readily comparable to seawater heat exchanges with ice, freshwater, and humid air. [ 18 ] [ 19 ] OHC is always reported as a change or as an "anomaly" relative to a baseline. Positive values then also quantify ocean heat uptake (OHU) and are useful to diagnose where most of planetary energy gains from global heating are going. To calculate the ocean heat content, measurements of ocean temperature from sample parcels of seawater gathered at many different locations and depths are required. [ 20 ] Integrating the areal density of ocean heat over an ocean basin, or entire ocean, gives the total ocean heat content. 
Thus, total ocean heat content is a volume integral of the product of temperature, density, and heat capacity over the three-dimensional region of the ocean for which data is available. [ 21 ] The bulk of measurements have been performed at depths shallower than about 2000 m (1.25 miles). [ 22 ] The areal density of ocean heat content between two depths is computed as a definite integral: [ 3 ] [ 21 ] H = c_p ∫_{h2}^{h1} ρ(z) Θ(z) dz, where c_p is the specific heat capacity of sea water, h2 is the lower depth, h1 is the upper depth, ρ(z) is the in-situ seawater density profile, and Θ(z) is the conservative temperature profile. c_p is defined at a single depth h0, usually chosen as the ocean surface. In SI units, H has units of joules per square metre (J·m −2 ). In practice, the integral can be approximated by summation using a smooth and otherwise well-behaved sequence of in-situ data, including temperature (t), pressure (p), salinity (s) and their corresponding density (ρ). Conservative temperature values Θ(z) are translated relative to the reference pressure (p0) at h0. A substitute known as potential temperature was used in earlier calculations. [ 23 ] Measurements of temperature versus ocean depth generally show an upper mixed layer (0–200 m), a thermocline (200–1500 m), and a deep ocean layer (>1500 m). These boundary depths are only rough approximations. Sunlight penetrates to a maximum depth of about 200 m; the top 80 m of this is the habitable zone for photosynthetic marine life, which covers over 70% of Earth's surface. [ 24 ] Wave action and other surface turbulence help to equalize temperatures throughout the upper layer. Unlike surface temperatures, which decrease with latitude, deep-ocean temperatures are relatively cold and uniform in most regions of the world. [ 25 ] About 50% of all ocean volume is at depths below 3000 m (1.85 miles), with the Pacific Ocean being the largest and deepest of the five oceanic divisions. The thermocline is the transition between the upper and deep layers in terms of temperature, nutrient flows, abundance of life, and other properties. It is semi-permanent in the tropics, variable in temperate regions (often deepest during the summer), and shallow to nonexistent in polar regions. [ 26 ] Ocean heat content measurements come with difficulties, especially before the deployment of the Argo profiling floats. [ 22 ] Due to poor spatial coverage and poor quality of data, it has not always been easy to distinguish between long-term global warming trends and climate variability. Examples of these complicating factors are the variations caused by the El Niño–Southern Oscillation or changes in ocean heat content caused by major volcanic eruptions. [ 2 ] Argo is an international program of robotic profiling floats deployed globally since the start of the 21st century. [ 28 ] The program's initial 3000 units had expanded to nearly 4000 units by 2020. At the start of each 10-day measurement cycle, a float descends to a depth of 1000 meters and drifts with the current there for nine days. It then descends to 2000 meters and measures temperature, salinity (conductivity), and depth (pressure) over a final day of ascent to the surface.
At the surface the float transmits the depth profile and horizontal position data through satellite relays before repeating the cycle. [ 29 ] Starting 1992, the TOPEX/Poseidon and subsequent Jason satellite series altimeters have observed vertically integrated OHC, which is a major component of sea level rise. [ 30 ] Since 2002, GRACE and GRACE-FO have remotely monitored ocean changes using gravimetry . [ 31 ] The partnership between Argo and satellite measurements has yielded ongoing improvements to estimates of OHC and other global ocean properties. [ 27 ] Ocean heat uptake accounts for over 90% of total planetary heat uptake, mainly as a consequence of human-caused changes to the composition of Earth's atmosphere. [ 12 ] [ 32 ] This high percentage is because waters at and below the ocean surface - especially the turbulent upper mixed layer - exhibit a thermal inertia much larger than the planet's exposed continental crust, ice-covered polar regions, or atmospheric components themselves. A body with large thermal inertia stores a big amount of energy because of its heat capacity , and effectively transmits energy according to its heat transfer coefficient . Most extra energy that enters the planet via the atmosphere is thereby taken up and retained by the ocean. [ 33 ] [ 34 ] [ 35 ] Planetary heat uptake or heat content accounts for the entire energy added to or removed from the climate system. [ 36 ] It can be computed as an accumulation over time of the observed differences (or imbalances ) between total incoming and outgoing radiation. Changes to the imbalance have been estimated from Earth orbit by CERES and other remote instruments, and compared against in-situ surveys of heat inventory changes in oceans, land, ice and the atmosphere. [ 5 ] [ 37 ] [ 38 ] Achieving complete and accurate results from either accounting method is challenging, but in different ways that are viewed by researchers as being mostly independent of each other. [ 37 ] Increases in planetary heat content for the well-observed 2005–2019 period are thought to exceed measurement uncertainties. [ 32 ] From the ocean perspective, the more abundant equatorial solar irradiance is directly absorbed by Earth's tropical surface waters and drives the overall poleward propagation of heat. The surface also exchanges energy that has been absorbed by the lower troposphere through wind and wave action. Over time, a sustained imbalance in Earth's energy budget enables a net flow of heat either into or out of greater ocean depth via thermal conduction , downwelling , and upwelling . [ 39 ] [ 40 ] Releases of OHC to the atmosphere occur primarily via evaporation and enable the planetary water cycle . [ 41 ] Concentrated releases in association with high sea surface temperatures help drive tropical cyclones , atmospheric rivers , atmospheric heat waves and other extreme weather events that can penetrate far inland. [ 42 ] Altogether these processes enable the ocean to be Earth's largest thermal reservoir which functions to regulate the planet's climate; acting as both a sink and a source of energy. [ 33 ] From the perspective of land and ice covered regions, their portion of heat uptake is reduced and delayed by the dominant thermal inertia of the ocean. Although the average rise in land surface temperature has exceeded the ocean surface due to the lower inertia (smaller heat-transfer coefficient) of solid land and ice, temperatures would rise more rapidly and by a greater amount without the full ocean. 
[ 33 ] Measurements of how rapidly the heat mixes into the deep ocean have also been underway to better close the ocean and planetary energy budgets. [ 43 ] Numerous independent studies in recent years have found a multi-decadal rise in OHC of upper ocean regions that has begun to penetrate to deeper regions. [ 5 ] [ 22 ] The upper ocean (0–700 m) has warmed since 1971, while it is very likely that warming has occurred at intermediate depths (700–2000 m) and likely that deep ocean (below 2000 m) temperatures have increased. [ 11 ] : 1228 The heat uptake results from a persistent warming imbalance in Earth's energy budget that is most fundamentally caused by the anthropogenic increase in atmospheric greenhouse gases . [ 44 ] : 41 There is very high confidence that increased ocean heat content in response to anthropogenic carbon dioxide emissions is essentially irreversible on human time scales. [ 11 ] : 1233 Studies based on Argo measurements indicate that ocean surface winds , especially the subtropical trade winds in the Pacific Ocean , change ocean heat vertical distribution. [ 46 ] This results in changes among ocean currents , and an increase of the subtropical overturning , which is also related to the El Niño and La Niña phenomenon. Depending on stochastic natural variability fluctuations, during La Niña years around 30% more heat from the upper ocean layer is transported into the deeper ocean. Furthermore, studies have shown that approximately one-third of the observed warming in the ocean is taking place in the 700–2000 meter ocean layer. [ 47 ] Model studies indicate that ocean currents transport more heat into deeper layers during La Niña years, following changes in wind circulation. [ 48 ] [ 49 ] Years with increased ocean heat uptake have been associated with negative phases of the interdecadal Pacific oscillation (IPO). [ 50 ] This is of particular interest to climate scientists who use the data to estimate the ocean heat uptake . The upper ocean heat content in most North Atlantic regions is dominated by heat transport convergence (a location where ocean currents meet), without large changes to temperature and salinity relation. [ 51 ] Additionally, a study from 2022 on anthropogenic warming in the ocean indicates that 62% of the warming from the years between 1850 and 2018 in the North Atlantic along 25°N is kept in the water below 700 m, where a major percentage of the ocean's surplus heat is stored. [ 52 ] A study in 2015 concluded that ocean heat content increases by the Pacific Ocean were compensated by an abrupt distribution of OHC into the Indian Ocean. [ 53 ] Although the upper 2000 m of the oceans have experienced warming on average since the 1970s, the rate of ocean warming varies regionally with the subpolar North Atlantic warming more slowly and the Southern Ocean taking up a disproportionate large amount of heat due to anthropogenic greenhouse gas emissions. [ 11 ] : 1230 Deep-ocean warming below 2000 m has been largest in the Southern Ocean compared to other ocean basins. [ 11 ] : 1230 A large-ensemble reanalysis of ocean warming published in 2024 estimated a 1961–2022 warming trend of 0.43 ± 0.08 W/m², along with a statistically significant acceleration rate of 0.15 ± 0.04 W/m² per decade. [ 7 ] Warming oceans are one reason for coral bleaching [ 54 ] and contribute to the migration of marine species . [ 55 ] Marine heat waves are regions of life-threatening and persistently elevated water temperatures. 
[ 56 ] Redistribution of the planet's internal energy by atmospheric circulation and ocean currents produces internal climate variability, often in the form of irregular oscillations, [ 57 ] and helps to sustain the global thermohaline circulation. [ 58 ] [ 59 ] The increase in OHC accounts for 30–40% of global sea-level rise from 1900 to 2020 because of thermal expansion. [ 60 ] [ 61 ] It is also an accelerator of sea ice, iceberg, and tidewater glacier melting. The ice loss reduces polar albedo, amplifying both the regional and global energy imbalances. [ 62 ] The resulting ice retreat has been rapid and widespread for Arctic sea ice, [ 63 ] and within northern fjords such as those of Greenland and Canada. [ 64 ] Impacts on Antarctic sea ice and the vast Antarctic ice shelves that terminate in the Southern Ocean have varied by region and are also increasing due to warming waters. [ 65 ] [ 66 ] Breakup of the Thwaites Ice Shelf and its West Antarctic neighbors contributed about 10% of sea-level rise in 2020. [ 67 ] [ 68 ] The ocean also functions as a sink and source of carbon, with a role comparable to that of land regions in Earth's carbon cycle. [ 69 ] [ 70 ] In accordance with the temperature dependence of Henry's law, warming surface waters are less able to absorb atmospheric gases, including oxygen and the growing emissions of carbon dioxide and other greenhouse gases from human activity. [ 71 ] [ 72 ] Nevertheless, the rate at which the ocean absorbs anthropogenic carbon dioxide approximately tripled from the early 1960s to the late 2010s, a scaling proportional to the increase in atmospheric carbon dioxide. [ 73 ] The increase in CO2 levels causes ocean acidification, in which the pH of the ocean decreases due to the uptake of CO2. This affects various species: it reduces growth and calcification rates in calcifiers, lowers the capacity for acid-base regulation in bivalves, and harms metabolic pathways, which can lower the amount of energy these organisms are able to produce. [ 74 ] Warming of the deep ocean has the further potential to melt and release some of the vast store of frozen methane hydrate deposits that have naturally accumulated there. [ 75 ]
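As a rough, self-contained illustration of the areal-density integral defined earlier (H = c_p ∫ ρ(z) Θ(z) dz), the sketch below approximates it by trapezoidal summation over a discrete, Argo-like profile, and cross-checks the quoted global figures by converting a planetary-mean heat flux into zettajoules per year. The profile values and the constant heat capacity are hypothetical placeholders; operational estimates use the TEOS-10 equation of state and report OHC as an anomaly relative to a baseline.

```python
# Minimal sketch: approximating the areal density of ocean heat content,
#   H = c_p * integral over depth of rho(z) * Theta(z) dz,
# by trapezoidal summation over a discrete, Argo-like profile (hypothetical values).
import numpy as np

c_p = 3990.0    # approximate specific heat capacity of seawater, J/(kg K)

# Hypothetical profile: depth (m, positive downward), conservative temperature (deg C),
# and in-situ density (kg/m^3) at the same levels.
depth = np.array([0.0, 10.0, 50.0, 100.0, 200.0, 500.0, 1000.0, 1500.0, 2000.0])
theta = np.array([18.0, 17.8, 16.5, 14.0, 11.0, 7.0, 4.5, 3.5, 2.8])
rho   = np.array([1025.0, 1025.2, 1026.0, 1026.8, 1027.5, 1029.5, 1031.8, 1034.0, 1036.2])

# Trapezoidal approximation of the 0-2000 m integral, giving joules per square metre.
vals = rho * theta
H = c_p * np.sum(0.5 * (vals[:-1] + vals[1:]) * np.diff(depth))
print(f"OHC areal density, 0-2000 m: {H:.3e} J/m^2")

# Cross-check of the quoted global figures: a planetary-mean imbalance of 0.58 W/m^2
# applied over Earth's surface (~5.1e14 m^2) for one year (~3.156e7 s) is ~9.3 ZJ.
zj_per_year = 0.58 * 5.1e14 * 3.156e7 / 1.0e21
print(f"0.58 W/m^2 over one year ~ {zj_per_year:.1f} ZJ")
```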
https://en.wikipedia.org/wiki/Ocean_heat_content
Ocean optics is the study of how light interacts with water and the materials in water. Although research often focuses on the sea, the field broadly includes rivers, lakes, inland waters, coastal waters, and large ocean basins. How light acts in water is critical to how ecosystems function underwater. Knowledge of ocean optics is needed in aquatic remote sensing research in order to understand what information can be extracted from the color of the water as it appears from satellite sensors in space. The color of the water as seen by satellites is known as ocean color. While ocean color is a key theme of ocean optics, optics is a broader term that also includes the development of underwater sensors using optical methods to study much more than just color, including ocean chemistry, particle size, imaging of microscopic plants and animals, and more. Where waters are “optically deep,” the bottom does not reflect incoming sunlight, and the seafloor cannot be seen by humans or satellites. [ 1 ] The vast majority of the world's oceans by area are optically deep. Optically deep water can still be relatively shallow in terms of total physical depth, as long as the water is very turbid, such as in estuaries. Where waters are “optically shallow,” the bottom reflects light and often can be seen by humans and satellites. [ 2 ] Here, ocean optics can also be used to study what is under the water. Based on what color they appear to sensors, researchers can map habitat types, including macroalgae, corals, seagrass beds, and more. Mapping shallow-water environments requires knowledge of ocean optics because the color of the water must be accounted for when looking at the color of the seabed environment below. Inherent optical properties (IOPs) depend on what is in the water. These properties stay the same no matter what the incoming light is doing (daytime or nighttime, low sun angle or high sun angle). [ 3 ] Water with large amounts of dissolved substances, such as lakes with large amounts of colored dissolved organic matter ( CDOM ), experiences high light absorption. Phytoplankton and other particles also absorb light. [ 4 ] Areas with sea ice, estuaries with large amounts of suspended sediments, and lakes with large amounts of glacial flour are examples of water bodies with high light scattering. All particles scatter light to some extent, including plankton, minerals, and detritus. Particle size affects how much scattering happens at different colors; for example, very small particles scatter blue light much more strongly than longer wavelengths, which is why the ocean and the sky are generally blue (called Rayleigh scattering ). Without scattering, light would not “go” anywhere (outside of a direct beam from the sun or other source) and we would not be able to see the world around us. [ 5 ] Attenuation in water, also called beam attenuation or the beam attenuation coefficient, is the sum of all absorption and scattering. Attenuation of a light beam in one specific direction can be measured with an instrument called a transmissometer. [ 6 ] Apparent optical properties (AOPs) depend on what is in the water (IOPs) and what is going on with the incoming light from the Sun. AOPs depend most strongly on IOPs and only somewhat on the incoming light, also known as the “light field.” Characteristics of the light field that can affect AOP measurements include the angle at which light hits the water surface (high in the sky vs.
low in the sky, and from which compass direction) and the weather and sky conditions (clouds, atmospheric haze, fog, or sea state, i.e., the roughness of the water surface). [ 7 ] Remote sensing reflectance (Rrs) is a measure of light radiating out from beneath the ocean surface at all colors, normalized by incoming sunlight at all colors. Because Rrs is a ratio, it is slightly less sensitive to what is going on with the light field (such as the angle of the sun or atmospheric haziness). [ 8 ] Rrs is measured using two paired spectroradiometers that simultaneously measure light coming in from the sky and light coming up from the water below at many wavelengths. Since it is a measurement of a light-to-light ratio, the energy units cancel out, and Rrs has units of per steradian (sr −1 ) due to the angular nature of the measurement (upwelling light is measured at a specific angle, and incoming light is measured on a flat plane from a half-hemispherical area above the water surface). [ 9 ] K d is the diffuse (or downwelling) coefficient of light attenuation, also called simply light attenuation, the vertical extinction coefficient, or the extinction coefficient. [ 10 ] K d describes the rate of decrease of light with depth in water, in units of per meter (m −1 ). The “d” stands for downwelling light, which is light coming from above the sensor in a half-hemispherical shape (like half of a basketball). Scientists sometimes use K d to describe the decrease in the total visible light available for plants in terms of photosynthetically active radiation (PAR) – called “K d (PAR).” In other cases, K d can describe the decrease in light with depth over a spectrum of colors or wavelengths, usually written as “K d (λ).” At one color (one wavelength), K d can describe the decrease in light with depth of that single color, such as the decrease in blue light at the wavelength 490 nm, written as “K d (490).” In general, K d is calculated using Beer's Law and a series of light measurements collected from just under the water surface down through the water at many depths. [ 11 ] [ 12 ] “Closure” refers to how optical oceanographers measure the consistency of models and measurements. Models refer to anything that is not explicitly measured in the water, including satellite-derived variables that are estimated using empirical relationships (for example, satellite-derived chlorophyll-a concentration is estimated from the ratios between green and blue remote sensing reflectance using an empirical relationship). Closure includes measurement closure, model closure, model-data closure, and scale closure. Where model-data closure experiments show misalignment between data and models, the misalignment may be due to measurement error, issues with the model, both, or some other external factor. [ 13 ] [ 14 ] Ocean optics has been applied to study topics like primary production, phytoplankton, zooplankton, [ 15 ] [ 16 ] shallow-water habitats like seagrass beds and coral reefs, [ 17 ] [ 18 ] marine biogeochemistry, [ 19 ] heating of the upper ocean, [ 20 ] and carbon export to deep waters by way of the ocean biological pump. [ 21 ] The portion of the electromagnetic spectrum usually involved in ocean optics is ultraviolet through infrared, about 300 nm to less than 2000 nm in wavelength. [ 22 ] The most widely used optical oceanographic sensors are PAR sensors, chlorophyll-a fluorescence sensors ( fluorometers ), and transmissometers.
These three instruments are frequently mounted on CTD(conductivity-temperature-depth)-rosette samplers . These instruments have been used for many years on CTD-rosettes in global repeat oceanographic surveys like the CLIVAR GO-SHIP campaign. [ 23 ] [ 24 ] Optical instruments are often used to measure the size spectrum of particles in the ocean. For example, phytoplankton organisms can range in size from a few microns (micrometers, μm) to hundreds of microns. The size of particles is often measured to estimate how quickly particles will sink, and therefore how efficiently plants can sequester carbon in the ocean's biological pump . Scientists study individual tiny objects such as plankton and detritus particles using flow cytometry and in situ camera systems. Flow cytometers measure sizes and take photographs of individual particles flowing through a tube system; one such instrument is the Imaging FlowCytoBot (IFCB). [ 26 ] In situ camera systems are deployed over the side of a research vessel, alone or attached to other equipment, and they capture photographs of the water itself to image the particles present in the water; one such instrument is the Underwater Vision Profiler (UVP). [ 27 ] Other imaging technologies used in the ocean include holography [ 28 ] and particle imaging velocimetry (PIV), which uses 3D video footage to track the movement of underwater particles. [ 29 ] Ocean optics research done “ in situ ” (from research vessels , small boats, or on docks and piers) supports research that uses satellite data. In situ optical measurements provide a way to: 1) calibrate satellite sensors when they are just beginning to collect data, 2) develop algorithms to derive products or variables like chlorophyll-a concentration, and 3) validate data products derived from satellites. Using satellite data, researchers estimate things like particle size, carbon, water quality , water clarity , and bottom type based on the color profile as seen by satellite; all of these estimations (aka models) must be validated by comparing them to optical measurements made in situ. [ 30 ] In situ data are often available from publicly accessible data libraries like the SeaBASS data archive. Oceanographers, physicists, and other scientists who have made major contributions to the field of ocean optics include (incomplete list): David Antoine, Marcel Babin, Paula Bontempi , Emmanuel Boss, Annick Bricaud, Kendall Carder, Ivona Cetinic, Edward Fry, Heidi Dierssen, David Doxaran, Gene Carl Feldman, Howard Gordon, Chuanmin Hu, Nils Gunnar Jerlov , George Kattawar, John Kirk, ZhongPing Lee, Hubert Loisel, Stephane Maritorena, Michael Mishchenko, Curtis Mobley, Bruce Monger, Andre Morel, Michael Morris , Norm Nelson, Mary Jane Perry , Rudolph Preisendorfer, Louis Prieur, Chandrasekhara Raman, Collin Roesler , Rüdiger Röttgers, David Siegel, Raymond Smith, Heidi Sosik , Dariusz Stramski, Michael Twardowski, Talbot Waterman, Jeremy Werdell, Ken Voss, Charles Yentsch, and Ronald Zaneveld. While ocean optics is an interdisciplinary field of study applies to a wide range of topics, it is not often taught as a course in graduate programs for marine science and oceanography. Two summer-term courses have been developed for graduate students from many different institutions. First, there is a summer lecture series operated by the International Ocean Colour Coordinating Group (IOCCG) which usually takes place in France. 
[ 31 ] Second, there is an ongoing course in the United States called the “Optical Oceanography Class” or “Ocean Optics Class” in Washington State and later in Maine, which has been running continuously since 1985. [ 32 ] For independent learning, Curt Mobley, Collin Roesler, and Emmanuel Boss wrote the Ocean Optics Web Book as an open-access online guide.
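As a minimal, self-contained sketch of the diffuse attenuation coefficient K d described above, the example below fabricates a noisy downwelling-irradiance profile and recovers K d (490) by fitting a straight line to the logarithm of irradiance versus depth, following Beer's Law. All numbers are hypothetical, and the z90 rule of thumb at the end is an approximation rather than part of any cited method.

```python
# Minimal sketch: estimating the diffuse attenuation coefficient K_d(490) from a
# downwelling-irradiance profile using Beer's Law, E(z) = E(0-) * exp(-K_d * z).
# Taking logs gives ln E(z) = ln E(0-) - K_d * z, so K_d is minus the slope of a
# straight-line fit of ln(E) against depth.  All values below are hypothetical.
import numpy as np

depth = np.array([1.0, 2.0, 5.0, 10.0, 15.0, 20.0, 30.0])   # m below the surface
true_kd = 0.12                                               # m^-1, only used to fabricate data
e0 = 1.5                                                     # irradiance just below the surface
rng = np.random.default_rng(0)
ed = e0 * np.exp(-true_kd * depth) * rng.normal(1.0, 0.02, depth.size)  # noisy "measurements"

slope, intercept = np.polyfit(depth, np.log(ed), 1)          # least-squares fit of ln(E) vs z
kd_490 = -slope
print(f"estimated K_d(490): {kd_490:.3f} m^-1")

# Related rule of thumb (an approximation, not part of any cited method): roughly 90% of
# the water-leaving signal seen from above originates shallower than z90 ~ 1 / K_d.
print(f"first attenuation depth z90 ~ {1.0 / kd_490:.1f} m")
```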
https://en.wikipedia.org/wiki/Ocean_optics
Ocean turbidity is a measure of the amount of cloudiness or haziness in sea water caused by individual particles that are too small to be seen without magnification. Highly turbid ocean waters are those with many scattering particulates in them. In both highly absorbing and highly scattering waters , visibility into the water is reduced. Highly scattering (turbid) water still reflects much light, while highly absorbing water, such as a blackwater river or lake, is very dark. The scattering particles that cause the water to be turbid can be composed of many things, including sediments and phytoplankton . There are a number of ways to measure ocean turbidity, including autonomous remote vehicles, shipcasts and satellites. From a satellite , a proxy measurement of the water turbidity can be made by examining the amount of reflectance in the visible region of the electromagnetic spectrum . For the Advanced Very High Resolution Radiometer (AVHRR), the logical choice is band 1, covering wavelengths 580 to 680 nanometres , the orange and red. In order to make derived products that are comparable over time and space, an atmospheric correction is required. To do this, the effects of Rayleigh scattering are calculated based on the satellite viewing angle and the solar zenith angle and then subtracted from the band 1 radiance . For an aerosol correction, band 2 in the near infrared is used. It is first corrected for Rayleigh scattering and then subtracted from the Rayleigh corrected band 1. The Rayleigh corrected band 2 is assumed to be aerosol radiance because no return signal from water in the near infrared is expected since water is highly absorbing at those wavelengths. Because bands 1 and 2 are relatively close on the electromagnetic spectrum, we can reasonably assume their aerosol radiances are the same. In these images the turbidity is quantified as the percent reflected light emerging from the water column in a range of 0 to 8 percent. The reflectance percentage can be correlated to attenuation , Secchi disk depth or total suspended solids although the exact relationship will vary regionally and depends on the optical properties of the water. For example, in Florida Bay , 10% reflectance corresponds to a sediment concentration of 30 milligram/litre and a Secchi depth of 0.5 metre. These relationships are approximately linear so that 5% reflectance would correspond to a sediment concentration of approximately 15 milligram/litre and a Secchi depth of 1 metre. In the Mississippi River plume regions these same reflectance values would represent sediment concentrations that are about ten times or more higher. As one would expect, the majority of these images reveal large increases in turbidity in the regions where a hurricane has made landfall. The increases are primarily due to sediments that have been resuspended from the shallow bottom regions. In areas near shore some of the signal may also be due to sediments eroded from beaches as well as from sediment laden river plumes. In some cases a post-hurricane phytoplankton bloom due to increased nutrient availability may perhaps be detectable. The examination of the turbidity after the passing of a hurricane can have potentially many uses for coastal resource management including: With regard to these uses, determining the regions of high turbidity will allow managers to best decide on response strategies as well as help ensure that post-hurricane resources are most effectively utilized. 
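The correction sequence and the Florida Bay reflectance relationships described above can be summarized in a short sketch. This is an illustration only: the band values and Rayleigh terms below are placeholders (a real correction computes the Rayleigh contribution from the satellite viewing angle and solar zenith angle), and the conversion functions simply pass through the two Florida Bay values quoted above, so they do not apply to other regions such as the Mississippi River plume.

```python
# Minimal sketch of the AVHRR turbidity-proxy steps described above:
#   1) subtract a modeled Rayleigh term from band 1 (580-680 nm),
#   2) treat Rayleigh-corrected band 2 (near infrared) as the aerosol signal,
#   3) subtract that aerosol estimate from the Rayleigh-corrected band 1,
#   4) convert the resulting percent reflectance with the region-specific Florida Bay
#      relationship quoted above (10% ~ 30 mg/L sediment and 0.5 m Secchi depth).
# All input values are hypothetical placeholders expressed in percent-reflectance units.

def percent_reflectance(band1, band2, rayleigh1, rayleigh2):
    """Water-leaving percent reflectance after Rayleigh and aerosol correction."""
    aerosol = band2 - rayleigh2               # no water signal is expected in the near infrared
    corrected = band1 - rayleigh1 - aerosol
    return max(corrected, 0.0)

def florida_bay_estimates(reflectance_pct):
    """Approximate Florida Bay conversions passing through the two quoted values."""
    sediment_mg_per_l = 3.0 * reflectance_pct                     # 30 mg/L at 10%, 15 mg/L at 5%
    secchi_m = 5.0 / reflectance_pct if reflectance_pct > 0 else float("inf")  # 0.5 m at 10%, 1 m at 5%
    return sediment_mg_per_l, secchi_m

pct = percent_reflectance(band1=9.5, band2=2.0, rayleigh1=3.0, rayleigh2=1.5)
sediment, secchi = florida_bay_estimates(pct)
print(f"reflectance {pct:.1f}%  ->  ~{sediment:.0f} mg/L sediment, Secchi ~{secchi:.1f} m")
```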
Only a small fraction of the light incident on the ocean will be reflected and received by the satellite. The probability for a photon to reflect and exit the ocean decreases exponentially with length of its path through the water because the ocean is an absorbing medium. The more ocean a photon must travel through, the greater its chances of being absorbed by something. After absorption, it will eventually become part of the ocean's heat reservoir. The absorption and scattering characteristics of a water body determine the rate of vertical light attenuation and set a limit to the depths contributing to a satellite signal. A reasonable rule of thumb is that 90 percent of the signal coming from the water that is seen by the satellite is from the first attenuation length. How deep this is depends on the absorption and scattering properties of both the water itself and other constituents in the water. For wavelengths in the near infrared and longer, the penetration depth varies from a metre to a few micrometres . For band 1, the penetration depth will usually be between 1 and 10 metres. If the water has a large turbidity spike below 10 metres, the spike is unlikely to be seen by a satellite. For very shallow clear water there is a good chance the bottom may be seen. For example, in the Bahamas , the water is quite clear and only a few metres deep, resulting in an apparent high turbidity because the bottom reflects much band 1 light. For areas with consistently high turbidity signals, particularly areas with relatively clear water, part of the signal may be due to bottom reflection. Normally this will not be a problem with a post-hurricane turbidity image since the storm easily resuspends enough sediment such that bottom reflection is negligible. Clouds are also problematic for the interpretation of satellite derived turbidity. Cloud removal algorithms perform a satisfactory job for pixels that are fully cloudy. Partially cloudy pixels are much harder to identify and typically result in false high turbidity estimates. High turbidity values near clouds are suspect. Note : The information in this page has been incorporated from NOAA , allowable under United States fair use laws. Original source of the information is at https://web.archive.org/web/20040902231404/http://www.csc.noaa.gov/crs/cohab/hurricane/turbid.htm
https://en.wikipedia.org/wiki/Ocean_turbidity
Oceana, inc. is a 501(c)(3) nonprofit ocean conservation organization focused on influencing specific policy decisions on the national level to preserve and restore the world's oceans. It is headquartered in Washington, D.C. , with offices in Juneau , Monterey , Fort Lauderdale , New York , Portland , Toronto , Mexico City , Madrid , Brussels , Copenhagen , Geneva , London , Manila , Belmopan , Brasília , Santiago , and Lima , [ 2 ] [ 3 ] [ 4 ] and it is the largest international advocacy group dedicated entirely to ocean conservation. [ 5 ] Currently, Oceana has a staff of about 200 and 6,000 volunteers, and it has almost 50 million dollars of revenue (as of 2017). [ 3 ] Oceana takes a multi-faceted approach to ocean conservation; It conducts its own scientific research in addition to making policy recommendations, lobbying for specific legislation, and filing and litigating lawsuits. [ 6 ] Oceana was established in 2001 by an international group of leading foundations including the Rockefeller Brothers Fund , Sandler Foundation , and The Pew Charitable Trusts . This followed a 1999 study they commissioned, which found that less than 0.5% of all resources spent by U.S. environmental nonprofit groups were used for ocean conservation. [ 7 ] In 2001, Oceana absorbed The Ocean Law Project, which was also created by The Pew Charitable Trusts, for Oceana's legal branch. In 2002, American Oceans Campaign, founded by actor and environmentalist Ted Danson , merged with Oceana to further their common goals of ocean conservation. [ 7 ] On April 19, 2024, Oceana, Inc. announced the appointment of James Simon as the new chief executive officer. Simon, previously the President of Oceana, succeeded Andrew Sharpless following an eight-month international search. [ 8 ] In 2015, Oceana Canada was established as a legally distinct non-profit organization. It works in collaboration with Oceana, inc. and is considered part of the larger charity. [ 9 ] [ 10 ] [ 11 ] Except under very specific circumstances, Canadian charity law does not grant either legal charity status or the ability to issue tax exempt receipts to Canadian offices of non-Canadian nonprofits, making it beneficial to create an independent, Canadian charity. [ 12 ] Concerned about declining fish catches since 1980, Oceana is committed to combating overfishing and restoring the world's fisheries. It mainly focuses on legislation for scientific based catch limits , which have led to dramatic recoveries of depleted fisheries in the recent past. It also opposes fishing subsidies, which it argues are (in their current form) contributing to overfishing. [ 13 ] [ 14 ] Oceana also focuses on reducing bycatch , especially of protected or endangered species. [ 15 ] [ 14 ] Oceana's main focus with sustainable fishing is providing clean, plentiful food. They often cite the lack of emissions or resources, like land or fresh water, that wild fish require, and that this lack of pollution or resources would be necessary to feed the world's growing population. [ 14 ] This campaign is called "Save the Oceans, Feed the World". [ 16 ] Oceana focuses on curbing or eliminating the use of plastics, especially single use plastics due to their harmful impact on marine ecosystem and on human consumers. The organization generally opposes focusing on recycling or cleanup, and it says this is due to inefficiencies of recycling the large amounts of plastics in the ocean. 
[ 17 ] [ 18 ] [ 19 ] [ non-primary source needed ] Oceana has put a major focus on exposing and advocating against seafood fraud . Its opposition comes from the widespread nature of this problem, the negative health impact mislabeled fish can have (especially to people with certain seafood allergies) and their impact on overfishing by obscuring its impact. [ 20 ] [ 21 ] Various environmental news outlets have published op-eds criticizing Oceana's reports on seafood fraud , and similar criticism was included in a New York Times article. Criticism focuses on Oceana's assumption that all mislabeled seafood is intentionally fraudulent, even for species that are easily confused or have different names in different countries. The methodology of Oceana's studies has also been questioned, mainly due to its selection of historically mislabeled fish for testing instead of a more representative sample. Additionally, they criticized policy recommendations that Oceana recommended in their reports for being infeasible and bureaucratic. [ 22 ] [ 23 ] [ 24 ] Oceana is dedicated to combating the numerous threats to the world's oceans that climate change imposes. Its main focus has been the acidification of the ocean , which threatens marine life, especially shellfish and coral that are necessary to many marine ecosystems, and, consequently, sources of seafood. [ 25 ] They also focus on promoting offshore wind farms [ 26 ] and combating the use of offshore drilling [ 27 ] and seismic airgun blasting. [ 28 ] [ 29 ] Oceana launches expeditions to gather scientific data, which is used by Oceana, other nonprofit groups, local communities, and governmental agencies to create or influence policy. [ 30 ] [ 31 ] Recent examples of these expeditions' success can be seen in Malta , where an expedition led to the Maltese government expanding marine protected areas , [ 32 ] [ 30 ] or in the Philippines , where an expedition led to the government creating a new marine protected area in the Benham Bank . [ 33 ] Oceana focuses on influencing specific legislation, lawsuits, or other policies, which fit under its broader goals. It calls these "victories" when successful. [ 3 ] [ 34 ] Recent successes have included protecting dusky sharks , [ 35 ] banning industrial activity in Canada's marine protected areas, [ 36 ] increasing transparency through digital tracking in Chile's fishing industry, [ 37 ] and creating the second-largest marine national park in Spain's Mediterranean coast. [ 38 ] Over the course of its existence, Oceana has protected 4.5 million square miles of the ocean by influencing legislation and policy related to banning bottom trawling, restricting fishing, and establishing Marine Protected Areas . [ 39 ] [ 40 ] [ 41 ] [ 42 ] Oceana considers an area "protected" once it has achieved a policy victory related to protecting it. [ 43 ] Andy Sharpless, the CEO of Oceana, and author Suzannah Evans wrote The Perfect Protein in 2013. While it mentions some of Oceana's achievements, it focuses on its main goal: to make fishing a sustainable and abundant food supply. The main recommendations and goals of the book are science based catch limits , eating fish lower on the food chain (like sardines), focusing less on more glamorous sea creatures (like whales and dolphins), protecting habitats, and reducing bycatch . [ 44 ] Actor and Oceana Vice Chair Ted Danson , along with Michael D'Orso , wrote the book Oceana: Our Endangered Oceans and What We Can Do to Save Them in 2011. 
It describes Danson's early involvement with the environmental movement while also explaining the problems that face our oceans today, such as offshore drilling , pollution , ocean acidification , and overfishing . The book is scientifically grounded and was called engaging by the Los Angeles Times because it is filled with asides, charts, and photographs. [ 45 ] The California Wetfish Producers Association (CWPA), a small nonprofit organization dedicated to preserving California's wetfish industry, [ 46 ] has repeatedly criticized Oceana's attempts to temporarily halt the Pacific sardine fishery. CWPA criticized Oceana's citation of a National Oceanic and Atmospheric Administration (NOAA) study that reported 95% of the sardine stock had been depleted since 2006 (and the study itself). CWPA claims that these numbers are inflated and that the actual (smaller) decline in fish stock has not been caused by overfishing, but rather by environmental factors. The CWPA has specifically called Oceana's claims about overfishing "fake news." [ 47 ] [ 48 ] Although the NOAA has not fully responded to the CWPA's calls for a new study, it has not declared sardines overfished, but it has also banned commercial fishing of sardines. [ 49 ] In 2021, a Netflix documentary Seaspiracy criticized Oceana for appearing to be unable to provide a definition for "sustainable fishing". Oceana responded by saying it was misrepresented in the film, and argued that abstaining from eating fish as the film recommends is not a realistic choice for people who depend on coastal fisheries . [ 50 ]
https://en.wikipedia.org/wiki/Oceana_(conservation_organization)
Oceaneering International, Inc. is a subsea engineering and applied technology company based in Houston , Texas , U.S. that provides engineered services and hardware to customers who operate in marine , space , and other environments. Oceaneering's business offerings include remotely operated vehicle (ROV) services, specialty oilfield subsea hardware, deepwater intervention and crewed diving services, non-destructive testing and inspections, engineering and project management, and surveying and mapping services. Its services and products are marketed worldwide to oil and gas companies, government agencies, and firms in the aerospace , marine engineering and mobile robotics and construction industries. Oceaneering was founded in 1964 with the incorporation of World Wide Divers, Inc., one of three companies who merged in 1969 to operate under the name Oceaneering International, Inc. The merged companies were World Wide Divers, Inc. (Morgan City, LA), California Divers, Inc. (Santa Barbara, CA), and Can-Dive Services Ltd (North Vancouver, BC). [ 3 ] World Wide Divers, Inc. was owned by Mike Hughes and Johnny Johnson. California Divers, Inc. was owned by Lad Handelman, Gene Handelman, Kevin Lengyel, and Bob Ratcliffe. Can-Dive Services Ltd was owned by Phil Nuytten and partners. Mike Hughes served as Chairman of the Board and Lad Handelman served as President of the merged companies. In the early 1970s, Oceaneering supported considerable research into ways to increase safety of their divers and general diving efficiency, including their collaboration with Duke University Medical Center to explore the use of trimix breathing gas to reduce the incidence of high-pressure nervous syndrome . [ 4 ] Oceaneering purchased the rights to the JIM suit in 1975. By 1979, a team from Oceaneering assisted Dr. Sylvia Earle in testing Atmospheric diving suits for scientific diving operations by diving a JIM suit to 1,250 fsw . [ 5 ] Oceaneering also used WASP atmospheric diving suits. [ 6 ] A dive team from Oceaneering salvaged three of the four propellers from the RMS Lusitania in 1982. [ 7 ] From 1984 to 1988, Michael L. Gernhardt served as Oceaneering's Manager and then Vice President of Special Projects. He led the development of a telerobotic system for subsea platform cleaning and inspection, and of a variety of new diver and robot tools. [ 8 ] In 1988, he founded Oceaneering Space Systems, to transfer subsea technology and operational experience to the ISS program . [ 8 ] After the 1986 Space Shuttle Challenger disaster , Oceaneering teams recovered the Solid Rocket Booster that contained the faulty O-ring responsible for launch's failure. [ 9 ] Oceaneering was a NASDAQ listed company until 1991, when they moved to the New York Stock Exchange . Oceaneering ROVs were used to determine what happened to the cargo ship Lucona in the 1991 murder and fraud investigation that claimed uranium mining equipment was lost when the vessel went down. [ 9 ] Recovery of the airplane cockpit voice recorder in the loss of ValuJet Flight 592 was a priority in early 1996. [ 9 ] In the days following the loss of TWA Flight 800 later that same year, Oceaneering was contacted to provide ROV support to the US Navy lead search and recovery effort. [ 9 ] Boeing and Fugro teamed up with Oceaneering in 2001 to begin integration of their advanced technology into deep sea exploration. [ 10 ] Oceaneering helped recover the Confederate submarine H. L. Hunley , which sank in 1864. 
[ 11 ] [ 12 ] Several recovery plans were evaluated; the final recovery included a truss structure with foam to surround the body of the submarine. [ 13 ] On August 8, 2000, at 8:37 a.m., the sub broke the surface for the first time in 136 years. On August 2, 2006, NASA announced it would issue a Request for Proposal (RFP) for the design, development, certification, production and sustaining engineering of the Constellation Space Suit to meet the needs of the Constellation Program . [ 14 ] On June 11, 2008, NASA awarded a US$745 million contract to Oceaneering for the creation and manufacture of this new space suit. [ 15 ] In 2006, NAVSEA awarded Oceaneering a maintenance contract for the Dry Deck Shelter program. [ 16 ] Dry Deck Shelters are used to transport equipment such as the Advanced SEAL Delivery System and Combat Rubber Raiding Craft aboard a submarine. [ 17 ] [ 18 ] In 2009, Oceaneering installed a demonstrator crane aboard the SS Flickertail State to evaluate its performance in transferring containers between two moving ships, in an operational environment using commercial and oil industry at-sea mooring techniques in the Gulf of Mexico. [ 19 ] Developed in conjunction with the Sea Warfare and Weapons Department in the Office of Naval Research , the crane has sensors and cameras as well as motion-sensing algorithms that automatically compensate for the rolling and pitching of the sea, making it much easier for operators to center the crane over cargo and transfer it. [ 20 ] [ 21 ] Oceaneering teamed up with the Canadian company GRI Simulations to design and produce the ROV simulators it uses for training, development of procedures, and equipment staging. [ 22 ] After a dispute over theft of trade secrets and copyright infringement that lasted several years, Oceaneering now licenses the VROV simulator system from GRI Simulations. [ 22 ] [ 23 ] A 2009 collaboration with Royal Dutch Shell saw the installation of a wireline at a record 2,673 feet (815 m) of water for repairing a safety valve. [ 24 ] On April 22, 2010, three Oceaneering ROV crews aboard the Oceaneering vessel Ocean Intervention III , the DOF ASA Skandi Neptune and the Boa International Boa Sub C began to map the seabed and assess the wreckage from the Deepwater Horizon oil spill . The crews reported "large amounts of oil that flowed out." [ 25 ] Oceaneering ROV Technician Tyrone Benton was later called as a witness to provide information on the leaks associated with the BOP stack investigation, but gave no reason why he later failed to appear in court. [ 26 ] [ 27 ] Petrobras , the biggest deepwater oilfield company in the world, placed the largest umbilical order in company history in 2012. [ 28 ] As of 2012, eighty percent of Oceaneering's income was derived from deepwater work. [ 29 ] It is also the world's largest operator of ROVs. [ 29 ] [ 30 ] BAE Systems was contracted in October 2013 to build a Jones Act -compliant multi-service vessel to serve Oceaneering's "subsea intervention services in the ultra-deep waters of the Gulf of Mexico", [ 31 ] which was delivered in 2019. [ 32 ] Oceaneering was sanctioned by the Chinese government on December 27, 2024, due to arms sales to Taiwan. [ 33 ] The Oceaneering Entertainment Systems (OES) division [ 34 ] is an active developer of educational and entertainment technology, such as the Shuttle Launch Experience at the Kennedy Space Center Visitor Complex in Florida. [ 35 ] It is based in Orlando, Florida , with an additional site in Hanover, Maryland . 
OES was formed in 1992 when Oceaneering International purchased Eastport International, Inc. , which specialized in underwater remotely operated vehicles (ROVs) and had recently been contracted by Universal Studios Florida to redesign and build the animatronic sharks for its Jaws attraction. The original animatronics, ride system and control system had malfunctioned, causing the attraction to close soon after its grand opening. After Eastport's acquisition by Oceaneering, the themed attraction work was moved to the new OES division, which completed the Jaws contract. [ 36 ] OES has since developed motion-based dark ride vehicles for Transformers: The Ride at Universal Studios Florida, Justice League: Battle for Metropolis at Six Flags parks, Antarctica: Empire of the Penguin at SeaWorld, and Speed of Magic at Ferrari World Abu Dhabi, among others. [ 37 ] [ 38 ] [ 39 ] It has also developed animatronics for Universal Studios ' Jurassic Park and Jaws rides. [ 40 ] [ 41 ] It has provided custom show-action equipment for various entertainment projects, including Revenge of the Mummy at Universal Studios Orlando, and Curse of DarKastle at Busch Gardens Williamsburg . In 2014, the Themed Entertainment Association presented their THEA Award to OES for their Revolution Tru-Trackless ride system. [ 42 ] In 2013, OES won the THEA for Transformers The Ride 3-D at Universal Studios Hollywood and Singapore , for Ride & Show Systems. In 2008 they won the THEA for Shuttle Launch Experience . [ 43 ] Oceaneering donated a hyperbaric chamber to assist with the treatment on the Miskito Indian population in 1986. [ 44 ] They donated a compressor in 1997 that, along with funding from the Divers Alert Network , supported continued medical support of the Miskito population. [ 45 ] In November 2009, Oceaneering donated an ROV to Stavanger Offshore Tekniske Skole, a Norwegian technical college, to facilitate their students' qualification exams. [ 46 ] They donated an ROV to South Central Louisiana Technical College in 2011 to support its unique ROV maintenance curriculum. [ 47 ]
https://en.wikipedia.org/wiki/Oceaneering_International
The oceanic carbon cycle (or marine carbon cycle ) is composed of processes that exchange carbon between various pools within the ocean as well as between the atmosphere, Earth interior, and the seafloor . The carbon cycle is a result of many interacting forces across multiple time and space scales that circulates carbon around the planet, ensuring that carbon is available globally. The Oceanic carbon cycle is a central process to the global carbon cycle and contains both inorganic carbon (carbon not associated with a living thing, such as carbon dioxide) and organic carbon (carbon that is, or has been, incorporated into a living thing). Part of the marine carbon cycle transforms carbon between non-living and living matter. Three main processes (or pumps) that make up the marine carbon cycle bring atmospheric carbon dioxide (CO 2 ) into the ocean interior and distribute it through the oceans. These three pumps are: (1) the solubility pump, (2) the carbonate pump, and (3) the biological pump. The total active pool of carbon at the Earth's surface for durations of less than 10,000 years is roughly 40,000 gigatons C (Gt C, a gigaton is one billion tons, or the weight of approximately 6 million blue whales ), and about 95% (~38,000 Gt C) is stored in the ocean, mostly as dissolved inorganic carbon . [ 1 ] [ 2 ] The speciation (the different forms of an element or compound) of dissolved inorganic carbon in the marine carbon cycle is a primary controller of acid-base chemistry in the oceans. Earth's plants and algae ( primary producers ) are responsible for the largest annual carbon fluxes. Although the amount of carbon stored in marine biota (~3 Gt C) is very small compared with terrestrial vegetation (~610 GtC), the amount of carbon exchanged (the flux) by these groups is nearly equal – about 50 GtC each. [ 1 ] Marine organisms link the carbon and oxygen cycles through processes such as photosynthesis . [ 1 ] The marine carbon cycle is also biologically tied to the nitrogen and phosphorus cycles by a near-constant stoichiometric ratio C:N:P of 106:16:1, also known as the Redfield Ketchum Richards (RKR) ratio , [ 3 ] which states that organisms tend to take up nitrogen and phosphorus incorporating new organic carbon. Likewise, organic matter decomposed by bacteria releases phosphorus and nitrogen. Based on the publications of NASA , World Meteorological Association, IPCC , and International Council for the Exploration of the Sea , as well as scientists from NOAA , Woods Hole Oceanographic Institution , Scripps Institution of Oceanography , CSIRO , and Oak Ridge National Laboratory , the human impacts on the marine carbon cycle are significant. [ 4 ] [ 5 ] [ 6 ] [ 7 ] Before the Industrial Revolution, the ocean was a net source of CO 2 to the atmosphere whereas now the majority of the carbon that enters the ocean comes from atmospheric carbon dioxide (CO 2 ). [ 8 ] In recent decades, the ocean has acted as a sink for anthropogenic CO 2 , absorbing around a quarter of the CO 2 produced by humans through the burning of fossil fuels and land use changes. [ 9 ] By doing so, the ocean has acted as a buffer, somewhat slowing the rise in atmospheric CO 2 levels. However, this absorption of anthropogenic CO 2 has also caused acidification of the oceans . [ 8 ] [ 10 ] Climate change , a result of this excess CO 2 in the atmosphere, has increased the temperature of the ocean and atmosphere. 
[ 11 ] The slowed rate of global warming occurring from 2000–2010 [ 12 ] may be attributed to an observed increase in upper ocean heat content . [ 13 ] [ 14 ] Carbon compounds can be distinguished as either organic or inorganic, and dissolved or particulate, depending on their composition. Organic carbon forms the backbone of key component of organic compounds such as – proteins , lipids , carbohydrates , and nucleic acids . Inorganic carbon is found primarily in simple compounds such as carbon dioxide, carbonic acid, bicarbonate, and carbonate (CO 2 , H 2 CO 3 , HCO 3 − , CO 3 2− respectively). Marine carbon is further separated into particulate and dissolved phases. These pools are operationally defined by physical separation – dissolved carbon passes through a 0.2 μm filter, and particulate carbon does not. There are two main types of inorganic carbon that are found in the oceans. Dissolved inorganic carbon (DIC) is made up of bicarbonate (HCO 3 − ), carbonate (CO 3 2− ) and carbon dioxide (including both dissolved CO 2 and carbonic acid H 2 CO 3 ). DIC can be converted to particulate inorganic carbon (PIC) through precipitation of CaCO 3 (biologically or abiotically). DIC can also be converted to particulate organic carbon (POC) through photosynthesis and chemoautotrophy (i.e. primary production). DIC increases with depth as organic carbon particles sink and are respired. Free oxygen decreases as DIC increases because oxygen is consumed during aerobic respiration. Particulate inorganic carbon (PIC) is the other form of inorganic carbon found in the ocean. Most PIC is the CaCO 3 that makes up shells of various marine organisms, but can also form in whiting events . Marine fish also excrete calcium carbonate during osmoregulation . [ 15 ] Some of the inorganic carbon species in the ocean, such as bicarbonate and carbonate , are major contributors to alkalinity , a natural ocean buffer that prevents drastic changes in acidity (or pH ). The marine carbon cycle also affects the reaction and dissolution rates of some chemical compounds, regulates the amount of carbon dioxide in the atmosphere and Earth's temperature. [ 16 ] Like inorganic carbon, there are two main forms of organic carbon found in the ocean (dissolved and particulate). Dissolved organic carbon (DOC) is defined operationally as any organic molecule that can pass through a 0.2 μm filter. DOC can be converted into particulate organic carbon through heterotrophy and it can also be converted back to dissolved inorganic carbon (DIC) through respiration. Those organic carbon molecules being captured on a filter are defined as particulate organic carbon (POC). POC is composed of organisms (dead or alive), their fecal matter, and detritus . POC can be converted to DOC through disaggregation of molecules and by exudation by phytoplankton , for example. POC is generally converted to DIC through heterotrophy and respiration. Full article: Solubility pump The oceans store the largest pool of reactive carbon on the planet as DIC, which is introduced as a result of the dissolution of atmospheric carbon dioxide into seawater – the solubility pump. [ 16 ] Aqueous CO 2 , carbonic acid , bicarbonate ion, and carbonate ion concentrations comprise dissolved inorganic carbon (DIC). DIC circulates throughout the whole ocean by Thermohaline circulation , which facilitates the tremendous DIC storage capacity of the ocean. 
[ 17 ] The chemical equations below show the reactions that CO 2 undergoes after it enters the ocean and transforms into its aqueous form. Carbonic acid rapidly dissociates into free hydrogen ion (technically, hydronium ) and bicarbonate. The free hydrogen ion meets carbonate, already present in the water from the dissolution of CaCO 3 , and reacts to form more bicarbonate ion. The dissolved species in the equations above, mostly bicarbonate, make up the carbonate alkalinity system, the dominant contributor to seawater alkalinity. [ 10 ] The carbonate pump, sometimes called the carbonate counter pump, starts with marine organisms at the ocean's surface producing particulate inorganic carbon (PIC) in the form of calcium carbonate ( calcite or aragonite , CaCO 3 ). This CaCO 3 is what forms hard body parts like shells . [ 16 ] The formation of these shells increases atmospheric CO 2 due to the production of CaCO 3 [ 10 ] in the following reaction with simplified stoichiometry: [ 18 ] Coccolithophores , a nearly ubiquitous group of phytoplankton that produce shells of calcium carbonate, are the dominant contributors to the carbonate pump. [ 16 ] Due to their abundance, coccolithophores have significant implications on carbonate chemistry, in the surface waters they inhabit and in the ocean below: they provide a large mechanism for the downward transport of CaCO 3 . [ 20 ] The air-sea CO 2 flux induced by a marine biological community can be determined by the rain ratio - the proportion of carbon from calcium carbonate compared to that from organic carbon in particulate matter sinking to the ocean floor, (PIC/POC). [ 19 ] The carbonate pump acts as a negative feedback on CO 2 taken into the ocean by the solubility pump. It occurs with lesser magnitude than the solubility pump. Particulate organic carbon, created through biological production, can be exported from the upper ocean in a flux commonly termed the biological pump, or respired (equation 6) back into inorganic carbon. In the former, dissolved inorganic carbon is biologically converted into organic matter by photosynthesis (equation 5) and other forms of autotrophy [ 16 ] that then sinks and is, in part or whole, digested by heterotrophs. [ 21 ] Particulate organic carbon can be classified, based on how easily organisms can break them down for food, as labile , semilabile, or refractory. Photosynthesis by phytoplankton is the primary source for labile and semilabile molecules, and is the indirect source for most refractory molecules. [ 22 ] [ 23 ] Labile molecules are present at low concentrations outside of cells (in the picomolar range) and have half-lives of only minutes when free in the ocean. [ 24 ] They are consumed by microbes within hours or days of production and reside in the surface oceans, [ 23 ] where they contribute a majority of the labile carbon flux. [ 25 ] Semilabile molecules, much more difficult to consume, are able to reach depths of hundreds of meters below the surface before being metabolized. [ 26 ] Refractory DOM largely comprises highly conjugated molecules like Polycyclic aromatic hydrocarbons or lignin . [ 22 ] Refractory DOM can reach depths greater than 1000 m and circulates through the oceans over thousands of years. [ 27 ] [ 23 ] [ 28 ] Over the course of a year, approximately 20 gigatons of photosynthetically-fixed labile and semilabile carbon is taken up by heterotrophs , whereas fewer than 0.2 gigatons of refractory carbon is consumed. 
[ 23 ] Marine dissolved organic matter (DOM) can store as much carbon as the current atmospheric CO 2 supply, [ 28 ] but industrial processes are altering the balance of this cycle. [ 29 ] Inputs to the marine carbon cycle are numerous, but the primary contributions, on a net basis, come from the atmosphere and rivers. [ 1 ] Hydrothermal vents generally supply carbon equal to the amount they consume. [ 16 ] Before the Industrial Revolution , the ocean was a source of CO 2 to the atmosphere [ 8 ] balancing the impact of rock weathering and terrestrial particulate organic carbon; now it has become a sink for the excess atmospheric CO 2 . [ 31 ] Carbon dioxide is absorbed from the atmosphere at the ocean's surface at an exchange rate that varies locally and over time, [ 32 ] but on average, the oceans have a net absorption of around 2.9 Pg (equivalent to 2.9 billion metric tonnes) of carbon from atmospheric CO 2 per year. [ 33 ] Because the solubility of carbon dioxide increases when temperature decreases, cold areas can contain more CO 2 and still be in equilibrium with the atmosphere; in contrast, rising sea surface temperatures decrease the capacity of the oceans to take in carbon dioxide. [ 34 ] [ 10 ] The North Atlantic and Nordic oceans have the highest carbon uptake per unit area in the world, [ 35 ] and in the North Atlantic deep convection transports approximately 197 Tg per year of non-refractory carbon to depth. [ 36 ] The rate of CO 2 absorption by the ocean has been increasing with time as atmospheric CO 2 concentrations have increased due to anthropogenic emissions. However, the ocean carbon sink may be more sensitive to climate change than previously thought, and ocean warming and circulation changes due to climate change could result in the ocean absorbing less CO 2 from the atmosphere in the future than expected. [ 37 ] Ocean-atmosphere exchange rates of CO 2 depend on the concentration of carbon dioxide already present in both the atmosphere and the ocean, temperature, salinity, and wind speed. [ 38 ] This exchange rate can be approximated by Henry's law and can be calculated as S = kP, where the solubility (S) of carbon dioxide in seawater is proportional to its partial pressure (P) in the atmosphere, with k the solubility constant. [ 1 ] Since the oceanic intake of carbon dioxide is limited, CO 2 influx can also be described by the Revelle factor . [ 34 ] [ 10 ] The Revelle factor is the ratio of the fractional change in carbon dioxide to the fractional change in dissolved inorganic carbon, which serves as an indicator of carbon dioxide dissolution in the mixed layer considering the solubility pump. The Revelle factor is an expression that characterizes the thermodynamic efficiency of the DIC pool in absorbing CO 2 into bicarbonate. The lower the Revelle factor, the higher the capacity of ocean water to take in carbon dioxide. While Revelle calculated a factor of around 10 in his day, data in a 2004 study showed a Revelle factor ranging from approximately 9 in low-latitude tropical regions to 15 in the Southern Ocean near Antarctica. [ 39 ] Rivers can also transport organic carbon to the ocean through weathering or erosion of aluminosilicate (equation 7) and carbonate rocks (equation 8) on land, or by the decomposition of life (equation 5, e.g. plant and soil material). [ 1 ] Rivers contribute roughly equal amounts (~0.4 GtC/yr) of DIC and DOC to the oceans. [ 1 ] It is estimated that approximately 0.8 GtC (DIC + DOC) is transported annually from the rivers to the ocean. 
[ 1 ] The rivers that flow into Chesapeake Bay ( Susquehanna , Potomac , and James rivers) input approximately 0.004 Gt (6.5 × 10^10 moles) of DIC per year. [ 40 ] The total carbon transport of rivers represents approximately 0.02% of the total carbon in the atmosphere. [ 41 ] Though it seems small, over long time scales (1000 to 10,000 years) the carbon that enters rivers (and therefore does not enter the atmosphere) serves as a stabilizing feedback for greenhouse warming. [ 42 ] The key outputs of the marine carbon system are the preservation of particulate organic carbon (POC) and calcium carbonate (PIC), as well as reverse weathering . [ 1 ] While there are regions with local loss of CO 2 to the atmosphere and hydrothermal processes, a net loss in the cycle does not occur. [ 16 ] Sedimentation is a long-term sink for carbon in the ocean, as well as the largest loss of carbon from the oceanic system. [ 43 ] Deep marine sediments and geologic formations are important since they provide a thorough record of life on Earth and an important source of fossil fuel. [ 43 ] Oceanic carbon can exit the system in the form of detritus that sinks and is buried in the seafloor without being fully decomposed or dissolved. Ocean floor surface sediments account for 1.75 × 10^15 kg of carbon in the global carbon cycle. [ 44 ] At most, 4% of the particulate organic carbon from the euphotic zone in the Pacific Ocean, where light-powered primary production occurs, is buried in marine sediments. [ 43 ] This implies that, since more organic matter enters the ocean than is buried, a large portion of it is consumed or remineralized within the ocean. Historically, sediments with the highest organic carbon contents were frequently found in areas with high surface water productivity or those with low bottom-water oxygen concentrations. [ 45 ] 90% of organic carbon burial occurs in deposits of deltas, continental shelves, and upper slopes; [ 46 ] this is due partly to the short exposure time, because of the shorter distance to the seafloor, and to the composition of the organic matter already deposited in those environments. [ 47 ] Organic carbon burial is also sensitive to climate patterns: the accumulation rate of organic carbon was 50% larger during the glacial maximum compared to interglacials . [ 48 ] POC is decomposed by a series of microbe-driven processes, such as methanogenesis and sulfate reduction, before burial in the seafloor. [ 49 ] [ 50 ] Degradation of POC also results in microbial methane production; methane is the main gas forming gas hydrates on the continental margins. [ 51 ] Lignin and pollen are inherently resistant to degradation , and some studies show that inorganic matrices may also protect organic matter. [ 52 ] Preservation rates of organic matter depend on other interdependent variables that vary nonlinearly in time and space. [ 53 ] Although organic matter breakdown occurs rapidly in the presence of oxygen, microbes utilizing a variety of chemical species (via redox gradients) can degrade organic matter in anoxic sediments. [ 53 ] The burial depth at which degradation halts depends upon the sedimentation rate, the relative abundance of organic matter in the sediment, the type of organic matter being buried, and innumerable other variables. [ 53 ] While decomposition of organic matter can occur in anoxic sediments when bacteria use oxidants other than oxygen ( nitrate , sulfate , Fe 3+ ), decomposition tends to end short of complete mineralization . 
[ 54 ] This occurs because of preferential decomposition of labile molecules over refractory molecules. [ 54 ] Organic carbon burial is an input of energy for subsurface biological environments and can regulate oxygen in the atmosphere at long time-scales (> 10,000 years). [ 48 ] Burial can only take place if organic carbon arrives at the sea floor, making continental shelves and coastal margins the main storage of organic carbon from terrestrial and oceanic primary production. Fjords , deep inlets carved by glacial erosion, have also been identified as areas of significant carbon burial, with rates one hundred times greater than the ocean average. [ 55 ] Particulate organic carbon is buried in oceanic sediments, creating a pathway from a rapidly available carbon pool in the ocean to its storage on geological timescales. Once carbon is sequestered in the seafloor, it is considered blue carbon . Burial rates can be calculated as the difference between the rate at which organic matter sinks and the rate at which it decomposes. The precipitation of calcium carbonate is important as it results in a loss of alkalinity as well as a release of CO 2 (Equation 4), and therefore a change in the rate of preservation of calcium carbonate can alter the partial pressure of CO 2 in Earth's atmosphere. [ 16 ] CaCO 3 is supersaturated in the great majority of ocean surface waters and undersaturated at depth, [ 10 ] meaning shells are more likely to dissolve as they sink to ocean depths. CaCO 3 can also be dissolved through metabolic dissolution (i.e. it can be used as food and excreted), and thus deep ocean sediments have very little calcium carbonate. [ 16 ] The precipitation and burial of calcium carbonate in the ocean removes particulate inorganic carbon from the ocean and ultimately forms limestone . [ 16 ] On time scales greater than 500,000 years Earth's climate is moderated by the flux of carbon in and out of the lithosphere . [ 56 ] Rocks formed on the ocean seafloor are recycled by plate tectonics: they are returned to the surface and weathered, or subducted into the mantle , with the carbon later outgassed by volcanoes . [ 1 ] Oceans take up around 25 – 31% of anthropogenic CO 2 . [ 57 ] [ 58 ] Because the Revelle factor increases with increasing CO 2 , a smaller fraction of the anthropogenic flux will be taken up by the ocean in the future. [ 59 ] The current annual increase in atmospheric CO 2 is approximately 4–5 gigatons of carbon, [ 60 ] about 2–3 ppm CO 2 per year. [ 61 ] [ 62 ] This induces climate change that drives carbon-concentration and carbon-climate feedback processes that modify ocean circulation and the physical and chemical properties of seawater , which alters CO 2 uptake. [ 63 ] [ 64 ] Overfishing and the plastic pollution of the oceans contribute to the degraded state of the world's biggest carbon sink. [ 65 ] [ 66 ] Full article: Ocean acidification The pH of the oceans is declining due to uptake of atmospheric CO 2 . [ 67 ] The rise in dissolved carbon dioxide reduces the availability of the carbonate ion, reducing the CaCO 3 saturation state and thus making it thermodynamically harder to make CaCO 3 shells. [ 68 ] Carbonate ions preferentially bind with hydrogen ions to form bicarbonate, [ 10 ] so a reduction in carbonate ion availability leaves more hydrogen ions unbound and less bicarbonate is formed by this buffering reaction (Equations 1–3). pH is a measurement of hydrogen ion concentration, where a low pH means there are more unbound hydrogen ions; the sketch below illustrates how this speciation shifts with pH. 
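The relationship between pH and the carbonate species described above can be made concrete with a small calculation. The following is a minimal, illustrative Python sketch, not taken from the cited sources: the two dissociation constants (pK1 ≈ 5.86, pK2 ≈ 8.92) are assumed, rounded surface-seawater values, and real carbonate-system calculations depend on temperature, salinity, pressure and the pH scale used.

```python
# Minimal sketch of dissolved inorganic carbon (DIC) speciation versus pH.
# Equilibria, as described in the text: CO2(aq) + H2O <-> H2CO3 <-> H+ + HCO3-,
# and HCO3- <-> H+ + CO3^2-. The constants below are assumed, illustrative values.

def dic_speciation(pH, pK1=5.86, pK2=8.92):
    """Return the fractions of CO2(aq)+H2CO3, HCO3- and CO3^2- in total DIC."""
    h = 10.0 ** (-pH)                      # hydrogen ion concentration
    k1, k2 = 10.0 ** (-pK1), 10.0 ** (-pK2)
    denom = h * h + k1 * h + k1 * k2
    f_co2 = h * h / denom                  # dissolved CO2 plus carbonic acid
    f_hco3 = k1 * h / denom                # bicarbonate
    f_co3 = k1 * k2 / denom                # carbonate
    return f_co2, f_hco3, f_co3

for pH in (8.2, 8.1, 7.7):                 # pre-industrial, present-day, projected
    co2, hco3, co3 = dic_speciation(pH)
    print(f"pH {pH}: CO2 {co2:.1%}, HCO3- {hco3:.1%}, CO3 2- {co3:.1%}")
```

With these assumed constants, bicarbonate dominates DIC at all three pH values while the carbonate fraction shrinks as pH falls, which is the shift underlying the loss of CaCO 3 saturation discussed above.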
pH is therefore an indicator of carbonate speciation (the format of carbon present) in the oceans and can be used to assess how healthy the ocean is. [ 68 ] The list of organisms that may struggle due to ocean acidification include coccolithophores and foraminifera (the base of the marine food chain in many areas), human food sources such as oysters and mussels , [ 69 ] and perhaps the most conspicuous, a structure built by organisms – the coral reefs. [ 68 ] Most surface water will remain supersaturated with respect to CaCO 3 (both calcite and aragonite) for some time on current emissions trajectories, [ 68 ] but the organisms that require carbonate will likely be replaced in many areas. [ 68 ] Coral reefs are under pressure from overfishing, nitrate pollution, and warming waters; ocean acidification will add additional stress on these important structures. [ 68 ] Full article: Iron Fertilization Iron fertilization is a facet of geoengineering , which purposefully manipulates the Earth's climate system, typically in aspects of the carbon cycle or radiative forcing. Of current geoengineering interest is the possibility of accelerating the biological pump to increase export of carbon from the surface ocean. This increased export could theoretically remove excess carbon dioxide from the atmosphere for storage in the deep ocean. Ongoing investigations regarding artificial fertilization exist. [ 70 ] Due to the scale of the ocean and the fast response times of heterotrophic communities to increases in primary production, it is difficult to determine whether limiting-nutrient fertilization results in an increase in carbon export. [ 70 ] However, the majority of the community does not believe this is a reasonable or viable approach. [ 71 ] There are over 16 million dams in the world [ 72 ] that alter carbon transport from rivers to oceans. [ 73 ] Using data from the Global Reservoirs and Dams database, which contains approximately 7000 reservoirs that hold 77% of the total volume of water held back by dams (8000 km 3 ), it is estimated that the delivery of carbon to the ocean has decreased by 13% since 1970 and is projected to reach 19% by 2030. [ 74 ] The excess carbon contained in the reservoirs may emit an additional ~0.184 Gt of carbon to the atmosphere per year [ 75 ] and an additional ~0.2 GtC will be buried in sediment. [ 74 ] Prior to 2000, the Mississippi , the Niger , and the Ganges River basins account for 25 – 31% of all reservoir carbon burial. [ 74 ] After 2000, the Paraná (home to 70 dams) and the Zambezi (home to the largest reservoir) River basins exceeded the burial by the Mississippi. [ 74 ] Other large contributors to carbon burial caused by damming occur on the Danube , the Amazon , the Yangtze , the Mekong , the Yenisei , and the Tocantins Rivers. [ 74 ]
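As a rough illustration of the Henry's-law approximation (S = kP) and the Revelle factor introduced earlier in this article, the sketch below is a minimal Python example, not taken from the cited sources; the solubility constant, the Revelle factor of 10, and the surface DIC concentration are all assumed, rounded values.

```python
# Minimal sketch of the Henry's-law and Revelle-factor relations described above.
# K0, R and DIC are assumed, illustrative values; in reality they vary with
# temperature, salinity and the state of the carbonate system.

K0 = 0.034     # mol / (kg * atm): rough CO2 solubility in warm surface seawater
R = 10.0       # Revelle factor: (d pCO2 / pCO2) / (d DIC / DIC)
DIC = 2.0e-3   # mol/kg: typical surface-ocean dissolved inorganic carbon

def dissolved_co2(pco2_uatm, k0=K0):
    """Henry's law, S = k * P: equilibrium CO2(aq) for a given pCO2 in micro-atm."""
    return k0 * pco2_uatm * 1e-6           # mol/kg

def dic_change_fraction(pco2_old, pco2_new, revelle=R):
    """Fractional DIC increase needed to equilibrate with a given pCO2 increase."""
    return (pco2_new / pco2_old - 1.0) / revelle

print(f"CO2(aq) at 280 uatm: {dissolved_co2(280):.2e} mol/kg")
print(f"CO2(aq) at 420 uatm: {dissolved_co2(420):.2e} mol/kg")

# With R = 10, a 50% rise in pCO2 corresponds to only about a 5% rise in DIC,
# which is why the ocean's uptake capacity shrinks as the Revelle factor grows.
frac = dic_change_fraction(280, 420)
print(f"Approximate DIC increase: {frac:.1%} (~{frac * DIC * 1e6:.0f} umol/kg)")
```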
https://en.wikipedia.org/wiki/Oceanic_carbon_cycle
Oceanic deserts are regions of the oceans characterized by low annual precipitation, comparable to that of continental deserts . [ 1 ] These areas typically overlap with subtropical gyres - large systems of circular ocean currents formed by the global wind patterns . [ 2 ] These gyres are characterized by semi-permanent high-pressure systems , which inhibit the formation of deep precipitating clouds. [ 3 ] [ 4 ] [ 5 ] Unlike continental deserts, oceanic deserts maintain a relatively high cloud fraction throughout the year. [ 6 ] Despite the pronounced cloud cover, the low-level shallow clouds over these areas produce very little precipitation, distinguishing these areas as oceanic deserts. [ 7 ] The term "desert" in this context refers not only to the low precipitation but also to the low biodiversity found in these regions. The oceanic circulation in these regions significantly impacts marine life, leading to lower productivity and biodiversity compared to other parts of the ocean. [ 8 ] Oceanic deserts are primarily found in the eastern subtropical oceans. [ 1 ] The corresponding subtropical gyres are: the North Atlantic Gyre, which extends from the eastern coast of North America to the western coast of Europe and Africa; the South Atlantic Gyre, located off the coast of South America and stretching towards Africa; the North Pacific Gyre, which spans from the western coast of North America to the eastern coast of Asia; the South Pacific Gyre, found off the coast of South America and extending to the western Pacific; and the Indian Ocean Gyre, positioned between the eastern coast of Southern Africa and the western coast of Australia. [ 9 ] Oceanic deserts are influenced by several key atmospheric and climatic factors. Persistent high-pressure systems , known as subtropical highs, dominate these regions, leading to stable atmospheric conditions. [ 10 ] The stable high-pressure zones prevent the development of deep convective clouds, which are necessary for rainfall. As a result, annual precipitation in oceanic deserts is minimal, comparable to the aridity observed in terrestrial deserts. Despite the low precipitation, oceanic deserts maintain a relatively high cloud fraction throughout the year (see figure 2). Coastal regions typically see stratus clouds , while offshore areas are characterized by stratocumulus clouds . [ 6 ] Over the relatively warm ocean to the west, trade-wind cumuli are common. [ 11 ] The cloud cover in these regions is largely non-precipitating, contributing to the persistent dry conditions. [ 7 ] This is in contrast to continental deserts, which generally have clear skies with occasional, but often intense, rainfall events. [ 12 ] [ 13 ] A key feature of the atmospheric profile in oceanic deserts is the presence of the trade wind inversion . [ 11 ] The vertical profiles of meteorological parameters such as temperature, humidity, and wind speed in oceanic deserts reveal sharp gradients at the inversion layer. This inversion layer, found at an altitude of about 1 to 2 kilometers, acts as a cap that limits vertical mixing and the development of deep convective clouds, and thus contributes to the suppression of precipitation. [ 14 ] Oceanic deserts are depicted in dark blue on maps produced by NASA (see figures 1 and 3), indicating areas with low precipitation and low chlorophyll concentrations, highlighting their status as nutrient-starved oceanic regions. 
[ 15 ] [ 16 ] However, the coastal zones in these oceanic deserts have a relatively high productivity and biodiversity due to the deposition of nutrient-rich continental sediment by surface runoff, as well as upwelling of cold ocean waters, induced by prevailing winds and rising sea floors, which brings iron and other essential nutrients to the surface. The same applies to the equatorial zone. [ 17 ] [ 18 ] The physical characteristics of oceanic deserts significantly influence their ecological dynamics. Strong stratification in these regions prevents the mixing of nutrient-rich deep waters with surface waters, maintaining nutrient-poor conditions at the surface. This stable stratification is a result of the warm, saline surface waters overlaying cooler and denser deep waters, which inhibits vertical mixing. [ 19 ] Nutrient depletion in subtropical gyres is primarily due to strong downwelling and particle sinking. [ 20 ] In contrast, regions with the highest chlorophyll concentrations are found in cold waters, where nutrient-rich upwelling occurs, allowing phytoplankton to thrive.
https://en.wikipedia.org/wiki/Oceanic_deserts
Oceanic dispersal is a type of biological dispersal that occurs when terrestrial organisms transfer from one land mass to another by way of a sea crossing. Island hopping is the crossing of an ocean by a series of shorter journeys between islands, as opposed to a single journey directly to the destination. Often this occurs via large rafts of floating vegetation such as are sometimes seen floating down major rivers in the tropics and washing out to sea, occasionally with animals trapped on them. [ 1 ] Dispersal via such a raft is sometimes referred to as a rafting event . [ 2 ] Colonization of land masses by plants can also occur via long-distance oceanic dispersal of floating seeds. [ 3 ] Rafting has played an important role in the colonization of isolated land masses by mammals. Prominent examples include Madagascar , which has been isolated for ~120 million years ( Ma ), and South America , which was isolated for much of the Cenozoic . Both land masses, for example, appear to have received their primates by this mechanism. According to genetic evidence, the common ancestor of the lemurs of Madagascar appears to have crossed the Mozambique Channel by rafting between 50 and 60 Ma ago. [ 4 ] [ 5 ] [ 6 ] Likewise, the New World monkeys are thought to have originated in Africa and rafted to South America by the Oligocene , when the continents were much closer than they are today. [ 5 ] Madagascar also appears to have received its tenrecs (25–42 Ma ago), nesomyid rodents (20–24 Ma ago) and euplerid carnivorans (19–26 Ma ago) by this route [ 6 ] and South America its caviomorph rodents (over 30 Ma ago). [ 7 ] [ 8 ] Simian primates (ancestral to monkeys) and hystricognath rodents (ancestral to caviomorphs) are believed to have previously rafted from Asia to Africa about 40 Ma ago. [ 9 ] Among reptiles, several iguanid species in the South Pacific have been hypothesized to be descended from iguanas that rafted 10,000 kilometres (6,200 mi) from Central or South America [ 10 ] (an alternative theory involves dispersal of a putative now-extinct iguana lineage from Australia or Asia [ 11 ] ). Similarly, a number of clades of American geckos seem to have rafted over from Africa during both the Paleogene and Neogene. [ 12 ] Skinks of the related genera Mabuya and Trachylepis also apparently both floated across the Atlantic from Africa to South America and Fernando de Noronha , respectively, during the last 9 Ma. [ 13 ] Skinks from the same group have also rafted from Africa to Cape Verde , Madagascar, the Seychelles , the Comoros and Socotra . [ 13 ] (Among lizards, skinks and geckos seem especially capable of surviving long transoceanic journeys. [ 13 ] ) Surprisingly, even burrowing amphisbaenians [ 14 ] and blind snakes [ 15 ] appear to have rafted from Africa to South America. An example of a bird that is thought to have reached its present location by rafting is the weak-flying South American hoatzin , whose ancestors apparently floated over from Africa. [ 16 ] Colonization of groups of islands can occur by an iterative rafting process sometimes called island hopping. Such a process appears to have played a role, for example, in the colonization of the Caribbean by mammals of South American origin (i.e., caviomorphs, monkeys and sloths ). [ 17 ] A remarkable example of iterative rafting has been proposed for spiders of the genus Amaurobioides . [ 18 ] [ 19 ] Members of this genus inhabit coastal sites and build silken cells which they seal at high tide; however, they do not balloon . 
DNA sequence analysis suggests that ancestors of the genus dispersed from southern South America to South Africa about 10 Ma ago, where the most basal clade is found; subsequent rafting events then took the genus eastward with the Antarctic Circumpolar Current to Australia, then to New Zealand and finally to Chile by about 2 Ma ago. [ 19 ] Another example among spiders is the species Moggridgea rainbowi , the only Australian member of a genus otherwise endemic to Africa, with a divergence date of 2 to 16 Ma ago. [ 20 ] However, oceanic dispersal of terrestrial species may not always take the form of rafting; in some cases, swimming or simply floating may suffice. Tortoises of the genus Chelonoidis arrived in South America from Africa in the Oligocene; [ 21 ] they were probably aided by their ability to float with their heads up, and to survive up to six months without food or fresh water. [ 21 ] South American tortoises then went on to colonize the West Indies and Galápagos Islands . The dispersal of semiaquatic species is likely to occur similarly. The dispersal of anthracotheres from Asia to Africa about 40 Ma ago, [ 9 ] and the much more recent dispersal of hippos (relatives and possible descendants of anthracotheres) from Africa to Madagascar may have occurred by floating or swimming. [ 6 ] Ancestors of the Nile crocodile are thought to have reached the Americas from Africa 5 to 6 Ma ago. [ 22 ] [ 23 ] The first documented example of colonization of a land mass by rafting occurred in the aftermath of hurricanes Luis and Marilyn in the Caribbean in 1995. A raft of uprooted trees carrying fifteen or more green iguanas was observed by fishermen landing on the east side of Anguilla – an island where they had never before been recorded. [ 24 ] The iguanas had apparently been caught on the trees and rafted 200 mi (320 km) across the ocean from Guadeloupe , where they are indigenous. [ 25 ] [ 26 ] Examination of the weather patterns and ocean currents indicated that they had probably spent three weeks at sea before landfall. [ 26 ] This colony began breeding on the new island within two years of its arrival. [ 26 ] The advent of human civilization has created opportunities for organisms to raft on floating artifacts, which may be more durable than natural floating objects. This phenomenon was noted following the 2011 Tōhoku tsunami in Japan, with about 300 species found to have been carried on debris by the North Pacific Current to the west coast of North America (although no colonizations have been detected thus far). [ 27 ] [ 28 ]
https://en.wikipedia.org/wiki/Oceanic_dispersal
Oceanography (from Ancient Greek ὠκεανός ( ōkeanós ) ' ocean ' and γραφή ( graphḗ ) ' writing ' ), also known as oceanology , sea science , ocean science , and marine science , is the scientific study of the ocean , including its physics , chemistry , biology , and geology . It is an Earth science , which covers a wide range of topics, including ocean currents , waves , and geophysical fluid dynamics ; fluxes of various chemical substances and physical properties within the ocean and across its boundaries; ecosystem dynamics; and plate tectonics and seabed geology. Oceanographers draw upon a wide range of disciplines to deepen their understanding of the world’s oceans, incorporating insights from astronomy , biology , chemistry , geography , geology , hydrology , meteorology and physics . Humans first acquired knowledge of the waves and currents of the seas and oceans in pre-historic times. Observations on tides were recorded by Aristotle (384–322 BC) and later by Strabo . [ 1 ] Early exploration of the oceans was primarily for cartography and mainly limited to its surface and to the animals that fishermen brought up in nets, though depth soundings by lead line were taken. The Portuguese campaign of Atlantic navigation is the earliest example of a large, systematic scientific project, sustained over many decades, studying the currents and winds of the Atlantic. The work of Pedro Nunes (1502–1578) is remembered in the navigation context for the determination of the loxodromic curve: the line of constant course between two points on the surface of a sphere, represented on a two-dimensional map. [ 2 ] [ 3 ] When he published his "Treatise of the Sphere" (1537), mostly a commentated translation of earlier work by others, he included a treatise on geometrical and astronomic methods of navigation. There he states clearly that Portuguese navigations were not an adventurous endeavour: "nam se fezeram indo a acertar: mas partiam os nossos mareantes muy ensinados e prouidos de estromentos e regras de astrologia e geometria que sam as cousas que os cosmographos ham dadar apercebidas (...) e leuaua cartas muy particularmente rumadas e na ja as de que os antigos vsauam" (were not done by chance: but our seafarers departed well taught and provided with instruments and rules of astrology (astronomy) and geometry, which were matters the cosmographers would provide (...) and they took charts with exact routes and no longer those used by the ancients). [ 4 ] His credibility rests on being personally involved in the instruction of pilots and senior seafarers from 1527 onwards by Royal appointment, along with his recognized competence as mathematician and astronomer. [ 2 ] The main problem in navigating back from the south of the Canary Islands (or south of Boujdour ) by sail alone is the change in the regime of winds and currents: the North Atlantic gyre and the Equatorial counter current [ 5 ] will push south along the northwest bulge of Africa, while the uncertain winds where the Northeast trades meet the Southeast trades (the doldrums) [ 6 ] leave a sailing ship to the mercy of the currents. Together, prevalent current and wind make northwards progress very difficult or impossible. It was to overcome this problem and clear the passage to India around Africa as a viable maritime trade route that a systematic plan of exploration was devised by the Portuguese. The return route from regions south of the Canaries became the ' volta do largo' or 'volta do mar '. 
The 'rediscovery' of the Azores islands in 1427 is merely a reflection of the heightened strategic importance of the islands, now sitting on the return route from the western coast of Africa (sequentially called 'volta de Guiné' and 'volta da Mina'); and the references to the Sargasso Sea (also called at the time 'Mar da Baga'), to the west of the Azores , in 1436, reveal the western extent of the return route. [ 7 ] This westward swing is necessary, under sail, to make use of the southeasterly and northeasterly winds away from the western coast of Africa, up to the northern latitudes where the westerly winds will bring the seafarers towards the western coasts of Europe. [ 8 ] The secrecy involving the Portuguese navigations, with the death penalty for the leaking of maps and routes, concentrated all sensitive records in the Royal Archives, completely destroyed by the Lisbon earthquake of 1755 . However, the systematic nature of the Portuguese campaign, mapping the currents and winds of the Atlantic, is demonstrated by the understanding of the seasonal variations, with expeditions setting sail at different times of the year taking different routes to take account of seasonally predominant winds. This happened from as early as the late 15th century and early 16th: Bartolomeu Dias followed the African coast on his way south in August 1487, while Vasco da Gama would take an open sea route from the latitude of Sierra Leone , spending three months in the open sea of the South Atlantic to profit from the southwards deflection of the southwesterly on the Brazilian side (and the Brazilian current going southward - Gama departed in July 1497); and Pedro Álvares Cabral (departing March 1500) took an even larger arc to the west, from the latitude of Cape Verde, thus avoiding the summer monsoon (which would have blocked the route taken by Gama at the time he set sail). [ 9 ] Furthermore, there were systematic expeditions pushing into the western Northern Atlantic (Teive, 1454; Vogado, 1462; Teles, 1474; Ulmo, 1486). [ 10 ] The documents relating to the supplying of ships, and the ordering of sun declination tables for the southern Atlantic for as early as 1493–1496, [ 11 ] all suggest a well-planned and systematic activity happening during the decade-long period between Bartolomeu Dias finding the southern tip of Africa and Gama's departure; additionally, there are indications of further travels by Bartolomeu Dias in the area. [ 7 ] The most significant consequence of this systematic knowledge was the negotiation of the Treaty of Tordesillas in 1494, moving the line of demarcation 270 leagues to the west (from 100 to 370 leagues west of the Azores), bringing what is now Brazil into the Portuguese area of domination. The knowledge gathered from open sea exploration allowed for the well-documented extended periods of sail without sight of land, not by accident but as a pre-determined, planned route; for example, the 30 days Bartolomeu Dias sailed culminating at Mossel Bay , the three months Gama spent in the South Atlantic to use the Brazil current (southward), or the 29 days Cabral took from Cape Verde up to landing at Monte Pascoal , Brazil. The Danish expedition to Arabia (1761–67) can be said to be the world's first oceanographic expedition, as the ship Grønland had on board a group of scientists, including naturalist Peter Forsskål , who was assigned an explicit task by the king, Frederik V , to study and describe the marine life in the open sea, including finding the cause of mareel , or milky seas. 
For this purpose, the expedition was equipped with nets and scrapers, specifically designed to collect samples from the open waters and the bottom at great depth. [ 12 ] Although Juan Ponce de León in 1513 first identified the Gulf Stream , and the current was well known to mariners, Benjamin Franklin made the first scientific study of it and gave it its name. Franklin measured water temperatures during several Atlantic crossings and correctly explained the Gulf Stream's cause. Franklin and Timothy Folger printed the first map of the Gulf Stream in 1769–1770. [ 13 ] [ 14 ] Information on the currents of the Pacific Ocean was gathered by explorers of the late 18th century, including James Cook and Louis Antoine de Bougainville . James Rennell wrote the first scientific textbooks on oceanography, detailing the current flows of the Atlantic and Indian oceans. During a voyage around the Cape of Good Hope in 1777, he mapped "the banks and currents at the Lagullas " . He was also the first to understand the nature of the intermittent current near the Isles of Scilly , (now known as Rennell's Current). [ 15 ] The tides and currents of the ocean are distinct. Tides are the rise and fall of sea levels created by the combination of the gravitational forces of the Moon along with the Sun (the Sun just in a much lesser extent) and are also caused by the Earth and Moon orbiting each other. An ocean current is a continuous, directed movement of seawater generated by a number of forces acting upon the water, including wind, the Coriolis effect , breaking waves , cabbeling , and temperature and salinity differences . [ 16 ] Sir James Clark Ross took the first modern sounding in deep sea in 1840, and Charles Darwin published a paper on reefs and the formation of atolls as a result of the second voyage of HMS Beagle in 1831–1836. Robert FitzRoy published a four-volume report of Beagle ' s three voyages. In 1841–1842 Edward Forbes undertook dredging in the Aegean Sea that founded marine ecology. The first superintendent of the United States Naval Observatory (1842–1861), Matthew Fontaine Maury devoted his time to the study of marine meteorology, navigation , and charting prevailing winds and currents. His 1855 textbook Physical Geography of the Sea was one of the first comprehensive oceanography studies. Many nations sent oceanographic observations to Maury at the Naval Observatory, where he and his colleagues evaluated the information and distributed the results worldwide. [ 17 ] Knowledge of the oceans remained confined to the topmost few fathoms of the water and a small amount of the bottom, mainly in shallow areas. Almost nothing was known of the ocean depths. The British Royal Navy 's efforts to chart all of the world's coastlines in the mid-19th century reinforced the vague idea that most of the ocean was very deep, although little more was known. As exploration ignited both popular and scientific interest in the polar regions and Africa , so too did the mysteries of the unexplored oceans. The seminal event in the founding of the modern science of oceanography was the 1872–1876 Challenger expedition . As the first true oceanographic cruise, this expedition laid the groundwork for an entire academic and research discipline. [ 18 ] In response to a recommendation from the Royal Society , the British Government announced in 1871 an expedition to explore world's oceans and conduct appropriate scientific investigation. Charles Wyville Thomson and Sir John Murray launched the Challenger expedition . 
Challenger , leased from the Royal Navy, was modified for scientific work and equipped with separate laboratories for natural history and chemistry . [ 19 ] Under the scientific supervision of Thomson, Challenger travelled nearly 70,000 nautical miles (130,000 km) surveying and exploring. On her journey circumnavigating the globe, [ 19 ] 492 deep sea soundings, 133 bottom dredges, 151 open water trawls and 263 serial water temperature observations were taken. [ 20 ] Around 4,700 new species of marine life were discovered. The result was the Report Of The Scientific Results of the Exploring Voyage of H.M.S. Challenger during the years 1873–76 . Murray, who supervised the publication, described the report as "the greatest advance in the knowledge of our planet since the celebrated discoveries of the fifteenth and sixteenth centuries". He went on to found the academic discipline of oceanography at the University of Edinburgh , which remained the centre for oceanographic research well into the 20th century. [ 21 ] Murray was the first to study marine trenches and in particular the Mid-Atlantic Ridge , and map the sedimentary deposits in the oceans. He tried to map out the world's ocean currents based on salinity and temperature observations, and was the first to correctly understand the nature of coral reef development. In the late 19th century, other Western nations also sent out scientific expeditions (as did private individuals and institutions). The first purpose-built oceanographic ship, Albatros , was built in 1882. In 1893, Fridtjof Nansen allowed his ship, Fram , to be frozen in the Arctic ice. This enabled him to obtain oceanographic, meteorological and astronomical data at a stationary spot over an extended period. In 1881 the geographer John Francon Williams published a seminal book, Geography of the Oceans . [ 22 ] [ 23 ] [ 24 ] Between 1907 and 1911 Otto Krümmel published the Handbuch der Ozeanographie , which became influential in awakening public interest in oceanography. [ 25 ] The four-month 1910 North Atlantic expedition headed by John Murray and Johan Hjort was the most ambitious research oceanographic and marine zoological project ever mounted until then, and led to the classic 1912 book The Depths of the Ocean . The first acoustic measurement of sea depth was made in 1914. Between 1925 and 1927 the "Meteor" expedition gathered 70,000 ocean depth measurements using an echo sounder, surveying the Mid-Atlantic Ridge. In 1934, Easter Ellen Cupp , the first woman to have earned a PhD (at Scripps) in the United States, completed a major work on diatoms [ 26 ] that remained the standard taxonomy in the field until well after her death in 1999. In 1940, Cupp was let go from her position at Scripps. Sverdrup specifically commended Cupp as a conscientious and industrious worker and commented that his decision was no reflection on her ability as a scientist. Sverdrup used the instructor billet vacated by Cupp to employ Marston Sargent, a biologist studying marine algae, which was not a new research program at Scripps. Financial pressures did not prevent Sverdrup from retaining the services of two other young post-doctoral students, Walter Munk and Roger Revelle . Cupp's partner, Dorothy Rosenbury, found her a position teaching high school, where she remained for the rest of her career. (Russell, 2000) Sverdrup, Johnson and Fleming published The Oceans in 1942, [ 27 ] which was a major landmark. 
The Sea (in three volumes, covering physical oceanography, seawater and geology) edited by M.N. Hill was published in 1962, while Rhodes Fairbridge 's Encyclopedia of Oceanography was published in 1966. The Great Global Rift, running along the Mid Atlantic Ridge, was discovered by Maurice Ewing and Bruce Heezen in 1953 and mapped by Heezen and Marie Tharp using bathymetric data; in 1954 a mountain range under the Arctic Ocean was found by the Arctic Institute of the USSR. The theory of seafloor spreading was developed in 1960 by Harry Hammond Hess . The Ocean Drilling Program started in 1966. Deep-sea vents were discovered in 1977 by Jack Corliss and Robert Ballard in the submersible DSV Alvin . [ 28 ] In the 1950s, Auguste Piccard invented the bathyscaphe and used the bathyscaphe Trieste to investigate the ocean's depths. The United States nuclear submarine Nautilus made the first journey under the ice to the North Pole in 1958. In 1962 the FLIP (Floating Instrument Platform), a 355-foot (108 m) spar buoy, was first deployed. In 1968, Tanya Atwater led the first all-woman oceanographic expedition. Until that time, gender policies restricted women oceanographers from participating in voyages to a significant extent. From the 1970s, there has been much emphasis on the application of large scale computers to oceanography to allow numerical predictions of ocean conditions and as a part of overall environmental change prediction. Early techniques included analog computers (such as the Ishiguro Storm Surge Computer ) generally now replaced by numerical methods (e.g. SLOSH .) An oceanographic buoy array was established in the Pacific to allow prediction of El Niño events. 1990 saw the start of the World Ocean Circulation Experiment (WOCE) which continued until 2002. Geosat seafloor mapping data became available in 1995. Study of the oceans is critical to understanding shifts in Earth's energy balance along with related global and regional changes in climate , the biosphere and biogeochemistry . The atmosphere and ocean are linked because of evaporation and precipitation as well as thermal flux (and solar insolation ). Recent studies have advanced knowledge on ocean acidification , ocean heat content , ocean currents , sea level rise , the oceanic carbon cycle , the water cycle , Arctic sea ice decline , coral bleaching , marine heatwaves , extreme weather , coastal erosion and many other phenomena in regards to ongoing climate change and climate feedbacks . In general, understanding the world ocean through further scientific study enables better stewardship and sustainable utilization of Earth's resources. [ 29 ] The Intergovernmental Oceanographic Commission reports that 1.7% of the total national research expenditure of its members is focused on ocean science. [ 30 ] The study of oceanography is divided into these five branches: Biological oceanography investigates the ecology and biology of marine organisms in the context of the physical, chemical and geological characteristics of their ocean environment. Chemical oceanography is the study of the chemistry of the ocean. Whereas chemical oceanography is primarily occupied with the study and understanding of seawater properties and its changes, ocean chemistry focuses primarily on the geochemical cycles . The following is a central topic investigated by chemical oceanography. Ocean acidification describes the decrease in ocean pH that is caused by anthropogenic carbon dioxide (CO 2 ) emissions into the atmosphere . 
[ 31 ] Seawater is slightly alkaline and had a preindustrial pH of about 8.2. More recently, anthropogenic activities have steadily increased the carbon dioxide content of the atmosphere; about 30–40% of the added CO 2 is absorbed by the oceans, forming carbonic acid and lowering the pH (now below 8.1 [ 32 ] ) through ocean acidification. [ 33 ] [ 34 ] [ 35 ] The pH is expected to reach 7.7 by the year 2100. [ 36 ] An important element for the skeletons of marine animals is calcium , but calcium carbonate becomes more soluble with pressure, so carbonate shells and skeletons dissolve below the carbonate compensation depth . [ 37 ] Calcium carbonate also becomes more soluble at lower pH, so ocean acidification is likely to affect marine organisms with calcareous shells, such as oysters, clams, sea urchins and corals, [ 38 ] [ 39 ] and the carbonate compensation depth will rise closer to the sea surface. Affected planktonic organisms will include pteropods , coccolithophorids and foraminifera , all important in the food chain . In tropical regions, corals are likely to be severely affected as they become less able to build their calcium carbonate skeletons, [ 40 ] in turn adversely impacting other reef dwellers. [ 36 ] The current rate of ocean chemistry change seems to be unprecedented in Earth's geological history, making it unclear how well marine ecosystems will adapt to the shifting conditions of the near future. [ 41 ] Of particular concern is the manner in which the combination of acidification with the expected additional stressors of higher ocean temperatures and lower oxygen levels will impact the seas. [ 42 ] Geological oceanography is the study of the geology of the ocean floor, including plate tectonics and paleoceanography . Physical oceanography studies the ocean's physical attributes, including temperature-salinity structure, mixing, surface waves , internal waves, surface tides , internal tides , and currents . The following are central topics investigated by physical oceanography. Since the earliest ocean expeditions, a major interest of oceanography has been the study of ocean currents and temperature measurements. The tides , the Coriolis effect , changes in the direction and strength of wind , salinity, and temperature are the main factors determining ocean currents. The thermohaline circulation (THC) ( thermo- referring to temperature and -haline referring to salt content ) connects the ocean basins and is primarily dependent on the density of sea water . It is becoming more common to refer to this system as the 'meridional overturning circulation' because it more accurately accounts for other driving factors beyond temperature and salinity. Oceanic heat content (OHC) refers to the extra heat stored in the ocean from changes in Earth's energy balance . The increase in ocean heat plays an important role in sea level rise because of thermal expansion . Ocean warming accounts for 90% of the energy accumulation associated with global warming since 1971. [ 43 ] [ 44 ] Paleoceanography is the study of the history of the oceans in the geologic past with regard to circulation, chemistry, biology, geology and patterns of sedimentation and biological productivity. Paleoceanographic studies using environmental models and different proxies enable the scientific community to assess the role of oceanic processes in the global climate through the reconstruction of past climate at various intervals. Paleoceanographic research is also intimately tied to palaeoclimatology.
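The acidification figures quoted above can be put in concentration terms, since pH is the negative base-10 logarithm of the hydrogen ion concentration. The short sketch below simply applies that definition to the cited values (pH 8.2 preindustrial, 8.1 today, 7.7 projected for 2100); it is an illustrative calculation, not part of the cited studies.

```python
# pH is -log10 of [H+], so the ratio of [H+] between two pH values
# is 10 ** (pH_old - pH_new).
def h_ion_increase(ph_old, ph_new):
    return 10 ** (ph_old - ph_new)

print(round(h_ion_increase(8.2, 8.1), 2))  # ~1.26: about 26% more H+ than preindustrial
print(round(h_ion_increase(8.2, 7.7), 2))  # ~3.16: roughly a tripling if pH reaches 7.7
```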
The earliest international organizations of oceanography were founded at the turn of the 20th century, starting with the International Council for the Exploration of the Sea created in 1902, followed in 1919 by the Mediterranean Science Commission . Marine research institutes were already in existence, starting with the Stazione Zoologica Anton Dohrn in Naples, Italy (1872), the Biological Station of Roscoff, France (1876), the Arago Laboratory in Banyuls-sur-mer, France (1882), the Laboratory of the Marine Biological Association in Plymouth, UK (1884), the Norwegian Institute for Marine Research in Bergen, Norway (1900), the Laboratory für internationale Meeresforschung, Kiel, Germany (1902). On the other side of the Atlantic, the Scripps Institution of Oceanography was founded in 1903, followed by the Woods Hole Oceanographic Institution in 1930, the Virginia Institute of Marine Science in 1938, the Lamont–Doherty Earth Observatory at Columbia University in 1949, and later the School of Oceanography at University of Washington . In Australia , the Australian Institute of Marine Science (AIMS), established in 1972 soon became a key player in marine tropical research. In 1921 the International Hydrographic Bureau , called since 1970 the International Hydrographic Organization , was established to develop hydrographic and nautical charting standards.
https://en.wikipedia.org/wiki/Oceanography
Oceanography and Marine Biology: An Annual Review is an annual review of oceanography and marine biology that has been published since 1963. It was originally edited by Harold Barnes. It was originally published by Aberdeen University Press and Allen & Unwin [ 1 ] but is now published by CRC Press , part of Taylor & Francis. [ 2 ] The 55th volume was published in 2017. [ 3 ]
https://en.wikipedia.org/wiki/Oceanography_and_Marine_Biology:_An_Annual_Review
In mathematics, an Ockham algebra is a bounded distributive lattice L with a dual endomorphism , that is, an operation ∼ : L → L satisfying ∼(x ∧ y) = ∼x ∨ ∼y, ∼(x ∨ y) = ∼x ∧ ∼y, ∼0 = 1 and ∼1 = 0. They were introduced by Berman (1977) , and were named after William of Ockham by Urquhart (1979) . Ockham algebras form a variety . Examples of Ockham algebras include Boolean algebras , De Morgan algebras , Kleene algebras , and Stone algebras .
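As a concrete illustration, the two-element Boolean algebra {0, 1} with meet = min, join = max and ∼x = 1 − x satisfies the dual-endomorphism laws stated above. The following minimal sketch, an illustration rather than anything drawn from the cited papers, checks those identities by brute force:

```python
from itertools import product

# Two-element bounded distributive lattice {0, 1} with meet = min,
# join = max, and ~x = 1 - x (Boolean negation): a minimal example
# of an Ockham algebra, checked against the dual-endomorphism laws.
elements = [0, 1]
meet, join = min, max
bot, top = 0, 1
neg = lambda x: 1 - x

laws = all(
    neg(meet(x, y)) == join(neg(x), neg(y)) and
    neg(join(x, y)) == meet(neg(x), neg(y))
    for x, y in product(elements, repeat=2)
) and neg(bot) == top and neg(top) == bot

print(laws)  # True
```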
https://en.wikipedia.org/wiki/Ockham_algebra
Oct-2 (octamer-binding protein 2) also known as POU domain, class 2, transcription factor 2 is a protein that in humans is encoded by the POU2F2 gene . Oct-2 is an octamer transcription factor which is a member of the POU family . [ 1 ]
https://en.wikipedia.org/wiki/Oct-2
OctaDist is a computer program for crystallography and inorganic chemistry. It is mainly used for computing distortion parameters of coordination complexes such as spin crossover complexes (SCO), magnetic metal complexes and metal–organic frameworks (MOF). The program is developed and maintained in an international collaboration between members of the Computational Chemistry Research Unit at Thammasat University , [ 1 ] the Functional Materials & Nanotechnology CoE at Walailak University [ 2 ] and the Switchable Molecules and Materials group at the University of Bordeaux . [ 3 ] OctaDist is written entirely in Python and uses the Tkinter graphical user interface toolkit. It is available for Windows , macOS , and Linux . It is free and open-source software distributed under the GNU General Public License (GPL) 3.0. The following are the main features [ 4 ] of the latest version of OctaDist:
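Independently of that feature list, the kind of quantity such a program computes can be illustrated with a minimal sketch. The snippet below evaluates one widely used octahedral distortion measure, the bond-length distortion parameter Δ = (1/6) Σ ((dᵢ − d̄)/d̄)², from six metal–ligand bond lengths. The choice of formula, the function name and the numbers are illustrative assumptions and do not reproduce OctaDist's actual interface or feature set.

```python
from statistics import mean

def bond_length_distortion(distances):
    """Bond-length distortion parameter for a six-coordinate complex:
    delta = (1/6) * sum(((d_i - d_mean) / d_mean) ** 2), computed from
    the six metal-ligand bond lengths (in any consistent unit)."""
    d_mean = mean(distances)
    return sum(((d - d_mean) / d_mean) ** 2 for d in distances) / 6

# Illustrative Fe-N distances (in angstroms) for a slightly distorted octahedron
print(bond_length_distortion([1.96, 1.97, 1.98, 2.00, 2.01, 2.02]))
```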
https://en.wikipedia.org/wiki/OctaDist
Octaazacubane is a hypothetical explosive allotrope of nitrogen with formula N 8 , whose molecules have eight atoms arranged into a cube . (By comparison, nitrogen usually occurs as the diatomic molecule N 2 .) It can be regarded as a cubane-type cluster , where all eight corners are nitrogen atoms bonded along the edges. [ 2 ] It is predicted to be a metastable molecule : despite the thermodynamic instability caused by bond strain and the high energy of the N–N single bonds , the molecule is expected to remain kinetically stable for reasons of orbital symmetry . [ 3 ] Octaazacubane is predicted to have an energy density (assuming decomposition into N 2 ) of 22.9 MJ / kg , [ 4 ] which is over 5 times that of TNT . It has therefore been proposed (along with other exotic nitrogen allotropes) as an explosive , and as a component of high performance rocket fuel . Its velocity of detonation is predicted to be 15,000 m/s, 48.5% higher than that of octanitrocubane , the fastest known nonnuclear explosive. [ 1 ] One prediction for the energy density of cubic gauche nitrogen is 33 MJ / kg , exceeding that of octaazacubane by 44%, [ 5 ] though a more recent estimate of 10.22 MJ/kg would make it less than half that of octaazacubane. [ 6 ]
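The comparisons quoted above can be sanity-checked with simple ratios. In the sketch below, the TNT reference value (4.184 MJ/kg) and an octanitrocubane detonation velocity of roughly 10,100 m/s are assumed literature figures introduced only for illustration; the 22.9, 33 and 10.22 MJ/kg energy densities and the 15,000 m/s velocity come from the text.

```python
# Back-of-the-envelope check of the comparisons quoted above.
TNT_MJ_PER_KG = 4.184        # assumed conventional TNT reference value
ONC_VOD_M_PER_S = 10100      # assumed octanitrocubane detonation velocity

print(22.9 / TNT_MJ_PER_KG)      # ~5.5  -> "over 5 times" TNT
print(15000 / ONC_VOD_M_PER_S)   # ~1.49 -> ~48.5% faster than octanitrocubane
print(33 / 22.9)                 # ~1.44 -> older cgN estimate ~44% higher
print(10.22 / 22.9)              # ~0.45 -> newer cgN estimate is under half
```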
https://en.wikipedia.org/wiki/Octaazacubane
Octachlorotetraphosphazene is an inorganic compound with the formula (NPCl 2 ) 4 . The molecule has a cyclic, unsaturated backbone consisting of alternating phosphorus and nitrogen centers, and can be viewed as a tetramer of the hypothetical compound N≡PCl 2 . The compound has not been studied as much as the related species hexachlorotriphosphazene , in samples of which octachlorotetraphosphazene is usually found as an unwanted contaminant. [ 1 ] Octachlorotetraphosphazene has a P 4 N 4 core with eight equivalent P–N bonds. [ 2 ] Some spiro-, ansa-, and spiro-ansa-cyclic derivatives have been prepared via nucleophilic substitution of octachlorotetraphosphazene with alkoxides. [ 3 ]
https://en.wikipedia.org/wiki/Octachlorotetraphosphazene
The octadecanoid pathway is a biosynthetic pathway for the production of the phytohormone jasmonic acid (JA), an important hormone for induction of defense genes. JA is synthesized from alpha-linolenic acid , which can be released from the plasma membrane by certain lipase enzymes. For example, in the wound defense response, phospholipase C will cause the release of alpha-linolenic acid for JA synthesis. In the first step, alpha-linolenic acid is oxidized by the enzyme lipoxygenase . This forms 13-hydroperoxylinolenic acid, which is then modified by a dehydrase and undergoes cyclization by allene oxide cyclase to form 12-oxo-phytodienoic acid. This undergoes reduction and three rounds of beta oxidation to form jasmonic acid. [ 1 ]
https://en.wikipedia.org/wiki/Octadecanoid_pathway
Octadecyltrichlorosilane ( ODTS or n -octadecyltrichlorosilane ) is an organosilicon compound with the formula CH 3 (CH 2 ) 17 SiCl 3 . A colorless liquid, it is used as a silanization agent to prepare hydrophobic stationary phases for reversed-phase chromatography . [ 2 ] It is also evaluated for forming self-assembled monolayers on silicon dioxide substrates. It is flammable and hydrolyzes readily with release of hydrogen chloride . Dodecyltrichlorosilane, an ODTS analogue with a shorter alkyl chain, is used for the same purpose. ODTS- PVP films are used in organic-substrate LCD displays. [ 3 ]
https://en.wikipedia.org/wiki/Octadecyltrichlorosilane
In astronomy , an octaeteris ( Greek : ὀκταετηρίς , plural: octaeterides ) is the period of eight solar years after which the moon phase occurs on the same day of the year plus one or two days. This period is also in very good synchrony with five Venusian visibility cycles (the Venusian synodic period ) and thirteen Venusian revolutions around the Sun (the Venusian sidereal period ). This means that if Venus is visible beside the Moon , after eight years the two will again be close together near the same date of the calendar . The octaeteris, also known as oktaeteris , was noted by Cleostratus in ancient Greece as a 2,923½-day cycle. The octaeteris is the calendar used for the Olympic games ; if one Olympiad was 50 lunar months long, the next would be 49 lunar months long. This octaeteris calendar is used for the Olympic dial of the Antikythera mechanism , to determine the time of the Olympic games and other Greek festivities. The 8-year short lunisolar cycle was probably known to many ancient cultures. The mathematical proportions of the octaeteris cycles were noted in Classic Vernal rock art in northeastern Utah by J.Q. Jacobs in 1990. [ citation needed ] The Three Kings panel also contains more accurate ratios, ratios related to other planets, and apparent astronomical symbolism [ clarification needed ] .
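The calendrical coincidences described above can be checked with commonly quoted mean periods. The constants in the sketch below are assumed round values introduced for illustration, not figures taken from the article; the octaeteris is conventionally reckoned as 99 lunations.

```python
# Rough check of the octaeteris relations, using commonly quoted mean periods (days).
TROPICAL_YEAR = 365.2422
SYNODIC_MONTH = 29.53059    # new moon to new moon
VENUS_SYNODIC = 583.92      # Venus visibility cycle
VENUS_SIDEREAL = 224.701    # Venus orbital period

print(8 * TROPICAL_YEAR)    # ~2921.9 days: eight solar years
print(99 * SYNODIC_MONTH)   # ~2923.5 days: 99 lunations (the 2,923.5-day cycle)
print(5 * VENUS_SYNODIC)    # ~2919.6 days: five Venus synodic periods
print(13 * VENUS_SIDEREAL)  # ~2921.1 days: thirteen Venus sidereal periods
```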
https://en.wikipedia.org/wiki/Octaeteris
Octafluorocubane or perfluorocubane is an organofluorine compound with the formula C 8 F 8 , consisting of eight carbon atoms joined into a cube , with a fluorine bonded to each carbon corner. It is a colorless, sublimable solid at room temperature. It has been of longstanding theoretical interest, but was not synthesised until 2022, when it was prepared in several steps from a cubane carboxylic ester, beginning with its heptafluorination . According to X-ray crystallography , the C–C distances (1.570 Å) in octafluorocubane are essentially identical in length to those in the parent cubane (1.572 Å). [ 1 ] Octafluorocubane has attracted interest from theorists because of its unusual electronic structure , [ 2 ] which is indicated by its susceptibility to undergo reduction to a detectable anion C 8 F − 8 , with the free electron trapped inside the cube. [ 3 ] The compound was voted "favorite molecule of 2022" by readers of Chemical & Engineering News . [ 4 ]
https://en.wikipedia.org/wiki/Octafluorocubane
Octafluoropropane (C 3 F 8 ) is the perfluorocarbon counterpart to the hydrocarbon propane . This non-flammable and non-toxic synthetic substance has applications in semiconductor production and medicine. It is also an extremely potent greenhouse gas . Octafluoropropane can be produced either by electrochemical fluorination or by the Fowler process using cobalt fluoride . [ 2 ] In the electronics industry, octafluoropropane is mixed with oxygen and used as a plasma etching material for SiO 2 layers in semiconductor applications, as oxides are selectively etched versus their metal substrates. [ 3 ] In medicine, octafluoropropane may compose the gas cores of microbubble contrast agents used in contrast-enhanced ultrasound . Octafluoropropane microbubbles reflect sound waves well and are used to improve the ultrasound signal backscatter. It is used in eye surgery, such as pars plana vitrectomy procedures where a retina hole or tear is repaired. The gas provides a long-term tamponade, or plug, of a retinal hole or tear and allows re-attachment of the retina to occur over the several days following the procedure. Under the name R-218 , octafluoropropane is used in other industries as a component of refrigeration mixtures. It has been featured in some plans for terraforming Mars . With a greenhouse gas effect 24,000 times greater than carbon dioxide (CO 2 ), octafluoropropane could dramatically reduce the time and resources it takes to terraform Mars. [ 4 ] It is the active liquid in PICO-2L dark matter bubble detector (joined PICASSO and COUPP collaborations).
https://en.wikipedia.org/wiki/Octafluoropropane
This page provides supplementary chemical data on octafluoropropane . The handling of this chemical may require notable safety precautions. It is highly recommended that you seek the Material Safety Data Sheet ( MSDS ) for this chemical from a reliable source such as SIRI , and follow its directions.
https://en.wikipedia.org/wiki/Octafluoropropane_(data_page)
Octagon Systems Corporation is an industrial computer design and manufacturing company originally based in Westminster, Colorado . Octagon Systems designs, manufactures, sells, repairs and supports its line of industrial, mobile and rugged computer systems for industries including mining, military, transportation and others. [ 1 ] [ 2 ] The company has international representatives in Africa, Asia, Europe, North America and South America. Octagon Systems was founded in 1981, and introduced an embedded computer with a high-level language, a software development system and operating systems on a solid state disk . Octagon’s services and systems grew with industrial computer systems, including the STD Bus market and the development of single-board computers . Octagon Systems has been ISO certified since 1993. [ 3 ] Octagon Systems’ XMB Mobile Servers were mentioned by the trade press in 2006. [ 4 ] Octagon Systems was a founding member of the Small Form Factor Special Interest Group in 2007. Octagon co-authored the EPIC embedded computing specification. Octagon’s products were used in public transportation systems and in rugged computing systems for mining operations, among others. [ 5 ] [ 6 ] Octagon Systems products expanded into new markets, continuing to sell industrial, transportation and rugged computer systems. The U.S. Navy chose Octagon’s products for a contract to support amphibious warfare computing, and Octagon products were deployed in mines. [ 7 ] [ 8 ] In 2018, J-Squared Technologies acquired the TRAX family of Octagon products. [ 9 ] The Westminster manufacturing facility was closed.
https://en.wikipedia.org/wiki/Octagon_Systems
Octahedral clusters are inorganic or organometallic cluster compounds composed of six metals in an octahedral array. [ 1 ] Many types of compounds are known, but all are synthetic. These compounds are bound together by metal-metal bonding as well as two kinds of ligands. Ligands that span the faces or edges of the M 6 core are labeled L i , for inner (innen in the original German description), and those ligands attached only to one metal are labeled outer, or L a for ausser . [ 2 ] Typically, the outer ligands can be exchanged whereas the bridging ligands are more inert toward substitution. The premier example of the class is Mo 6 Cl 14 2− . This dianion is available as a variety of salts by treating the polymer molybdenum(II) chloride with sources of chloride, even hydrochloric acid . A related example is the W 6 Cl 14 2− anion, which is obtained by extraction of tungsten(II) chloride . A related class of octahedral clusters is of the type M 6 X 8 L 6 where M is a metal usually of group 6 or group 7, X is a ligand, more specifically an inner ligand of the chalcohalide group such as chloride or sulfide, and L is an "outer ligand". The metal atoms define the vertices of an octahedron . The overall point group symmetry is O h . Each face of the octahedron is capped with a chalcohalide and eight such atoms are at the corners of a cube . For this reason this geometry is called a face-capped octahedral cluster. An example of this type of cluster is the Re 6 S 8 Cl 6 4− anion. A well-studied class of solid-state compounds related to the chalcohalides are molybdenum clusters of the type A x Mo 6 X 8 with X sulfur or selenium and A x an interstitial atom such as Pb. These materials, called Chevrel phases or Chevrel clusters, have been actively studied because they are type II superconductors with relatively high critical fields. [ 3 ] Such materials are prepared by high temperature (1100 °C) reactions of the chalcogen and Mo metal. Structurally related, soluble analogues have been prepared, e.g., Mo 6 S 8 (PEt 3 ) 6 . [ 4 ] With metals in group 4 or 5, so-called edge-capped octahedral clusters are more common. Twelve halides are located along the edges of the octahedron and six are terminal. Examples of this structure type are tungsten(III) chloride , Ta 6 Cl 14 (H 2 O) 4 , [ 5 ] [ 6 ] Nb 6 F 15 , and Nb 6 F 18 2− . [ 1 ] Many of the early metal clusters can only be prepared when they incorporate interstitial atoms. One example is Zr 6 CCl 12 . [ 2 ] Octahedral clusters of tin(II) have been observed in several solid state compounds. The reaction of tin(II) salts with an aqueous base leads to the formation of tin(II) oxyhydroxide (Sn 6 O 4 (OH) 4 ), the structure of which comprises discrete Sn 6 O 4 (OH) 4 clusters. In Sn 6 O 4 (OH) 4 clusters, the six tin atoms form an octahedral array with alternate faces of the octahedron occupied by an oxide or hydroxide moiety, each bonded in a μ 3 -binding mode to three tin atoms. [ 8 ] Crystal structures have been reported for compounds with the formula Sn 6 O 4 (OR) 4 , where R is an alkoxide such as a methyl or ethyl group. [ 9 ] [ 10 ] Recently, it was demonstrated that anionic tin(II) clusters [Sn 6 O 8 ] 4- may form close-packed arrays, as in the case of α-Sn 6 SiO 8 , which adopts the zinc blende structure, comprising a face-centred-cubic array of [Sn 6 O 8 ] 4- clusters with Si 4+ occupying half of the tetrahedral holes.
[ 11 ] A polymorph, β-Sn 6 SiO 8 , has been identified as a product of pewter corrosion in aqueous conditions, and is a structural analogue of wurtzite . [ 12 ] The species Mo 6 Cl 14 2− features Mo(II) ( d 4 ) centers. Six Mo(II) centers give rise to a total of 24 valence electrons, or 2e per Mo–Mo vector. More electron-deficient derivatives such as Ta 6 Cl 18 4− have fewer d -electrons. For example, the naked cluster Ta 6 14+ , the core of Ta 6 Cl 18 4− , would have 5(6) - 14 = 16 valence electrons. Fewer d-electrons result in weakened M–M bonding, and the extended Ta---Ta distances accommodate doubly bridging halides. In the area of metal carbonyl clusters , a prototypical octahedral cluster is [Fe 6 C(CO) 16 ] 2− , which is obtained by heating iron pentacarbonyl with sodium. Some of the CO ligands are bridging and many are terminal. A carbide ligand resides at the center of the cluster. A variety of analogous compounds have been reported in which some or all of the Fe centres are replaced by Ru, Mn and other metals. Outside of carbonyl clusters, gold also forms octahedral clusters.
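The electron bookkeeping used above, group electrons per metal minus the positive charge left on the bare M 6 core, can be written out explicitly. The sketch below only restates that counting convention for the two clusters discussed; the function name is arbitrary.

```python
def cluster_d_electrons(group, n_metals, core_charge):
    """Metal-based valence electrons left on an M6 core:
    group electrons per metal, minus the positive charge
    carried by the bare metal core."""
    return group * n_metals - core_charge

# Mo6Cl14^2-: 14 Cl- ligands on an overall 2- ion leave a Mo6^12+ core
n = cluster_d_electrons(6, 6, 12)
print(n, n / 12)            # 24 electrons, 2 per Mo-Mo edge of the octahedron

# Ta6Cl18^4-: 18 Cl- ligands and an overall 4- charge leave a Ta6^14+ core
n = cluster_d_electrons(5, 6, 14)
print(n, round(n / 12, 2))  # 16 electrons, ~1.33 per Ta-Ta edge
```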
https://en.wikipedia.org/wiki/Octahedral_cluster
In 4-dimensional geometry , the octahedral cupola is a 4-polytope bounded by one octahedron and a parallel rhombicuboctahedron , connected by 20 triangular prisms , and 6 square pyramids . [ 1 ] The octahedral cupola can be sliced off from a runcinated 24-cell , on a hyperplane parallel to an octahedral cell. The cupola can be seen in a B 2 and B 3 Coxeter plane orthogonal projection of the runcinated 24-cell.
https://en.wikipedia.org/wiki/Octahedral_cupola
In chemistry , octahedral molecular geometry , also called square bipyramidal , [ 1 ] describes the shape of compounds with six atoms or groups of atoms or ligands symmetrically arranged around a central atom, defining the vertices of an octahedron . The octahedron has eight faces, hence the prefix octa . The octahedron is one of the Platonic solids , although octahedral molecules typically have an atom in their centre and no bonds between the ligand atoms. A perfect octahedron belongs to the point group O h . Examples of octahedral compounds are sulfur hexafluoride SF 6 and molybdenum hexacarbonyl Mo(CO) 6 . The term "octahedral" is used somewhat loosely by chemists, focusing on the geometry of the bonds to the central atom and not considering differences among the ligands themselves. For example, [Co(NH 3 ) 6 ] 3+ , which is not octahedral in the mathematical sense due to the orientation of the N−H bonds, is referred to as octahedral. [ 2 ] The concept of octahedral coordination geometry was developed by Alfred Werner to explain the stoichiometries and isomerism in coordination compounds . His insight allowed chemists to rationalize the number of isomers of coordination compounds. Octahedral transition-metal complexes containing amines and simple anions are often referred to as Werner-type complexes . When two or more types of ligands (L a , L b , ...) are coordinated to an octahedral metal centre (M), the complex can exist as isomers. The naming system for these isomers depends upon the number and arrangement of different ligands. For ML a 4 L b 2 , two isomers exist. These isomers of ML a 4 L b 2 are cis , if the L b ligands are mutually adjacent, and trans , if the L b groups are situated 180° to each other. It was the analysis of such complexes that led Alfred Werner to the 1913 Nobel Prize–winning postulation of octahedral complexes. For ML a 3 L b 3 , two isomers are possible - a facial isomer ( fac ) in which each set of three identical ligands occupies one face of the octahedron surrounding the metal atom, so that any two of these three ligands are mutually cis, and a meridional isomer ( mer ) in which each set of three identical ligands occupies a plane passing through the metal atom. Complexes with three bidentate ligands or two cis bidentate ligands can exist as enantiomeric pairs. Examples are shown below. For ML a 2 L b 2 L c 2 , a total of five geometric isomers and six stereoisomers are possible. [ 3 ] The number of possible isomers can reach 30 for an octahedral complex with six different ligands (in contrast, only two stereoisomers are possible for a tetrahedral complex with four different ligands). The following table lists all possible combinations for monodentate ligands: Thus, all 15 diastereomers of ML a L b L c L d L e L f are chiral, whereas for ML a 2 L b L c L d L e , six diastereomers are chiral and three are not (the ones where L a are trans ). One can see that octahedral coordination allows much greater complexity than the tetrahedron that dominates organic chemistry . The tetrahedron ML a L b L c L d exists as a single enantiomeric pair. To generate two diastereomers in an organic compound, at least two carbon centers are required. The term can also refer to octahedral influenced by the Jahn–Teller effect , which is a common phenomenon encountered in coordination chemistry . This reduces the symmetry of the molecule from O h to D 4h and is known as a tetragonal distortion. 
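The isomer counts quoted above (cis and trans for ML a 4 L b 2 , fac and mer for ML a 3 L b 3 , six stereoisomers for ML a 2 L b 2 L c 2 , and up to 30 for six different ligands) can be reproduced by brute-force enumeration over the 24 proper rotations of the octahedron. The following sketch is an illustrative check of that combinatorics, not a method taken from the cited literature:

```python
from itertools import permutations

# Octahedron vertices indexed 0..5: +x, -x, +y, -y, +z, -z
rz = (2, 3, 1, 0, 4, 5)   # 90 degree rotation about z (vertex i moves to rz[i])
rx = (0, 1, 4, 5, 3, 2)   # 90 degree rotation about x

def compose(p, q):
    return tuple(p[q[i]] for i in range(6))

# Generate the 24 proper rotations of the octahedron by closure
group = {tuple(range(6))}
frontier = {rz, rx}
while frontier:
    group |= frontier
    frontier = {compose(g, h) for g in group for h in group} - group
assert len(group) == 24

def count_isomers(ligands):
    """Distinct arrangements of six ligand labels on the octahedron
    vertices, counted up to proper rotation (enantiomers counted separately)."""
    seen = set()
    for arrangement in set(permutations(ligands)):
        canonical = min(tuple(arrangement[g[i]] for i in range(6)) for g in group)
        seen.add(canonical)
    return len(seen)

print(count_isomers("AAAABB"))  # 2  (cis and trans MA4B2)
print(count_isomers("AAABBB"))  # 2  (fac and mer MA3B3)
print(count_isomers("AABBCC"))  # 6  (stereoisomers of MA2B2C2)
print(count_isomers("ABCDEF"))  # 30 (15 enantiomeric pairs)
```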
Some molecules, such as XeF 6 or IF − 6 , have a lone pair that distorts the symmetry of the molecule from O h to C 3v . [ 4 ] [ 5 ] The specific geometry is known as a monocapped octahedron , since it is derived from the octahedron by placing the lone pair over the centre of one triangular face of the octahedron as a "cap" (and shifting the positions of the other six atoms to accommodate it). [ 6 ] These both represent a divergence from the geometry predicted by VSEPR, which for AX 6 E 1 predicts a pentagonal pyramidal shape. Pairs of octahedra can be fused in a way that preserves the octahedral coordination geometry by replacing terminal ligands with bridging ligands . Two motifs for fusing octahedra are common: edge-sharing and face-sharing. Edge- and face-shared bioctahedra have the formulas M 2 L 8 (μ-L) 2 and M 2 L 6 (μ-L) 3 , respectively. Polymeric versions of the same linking pattern give the stoichiometries [ML 2 (μ-L) 2 ] ∞ and [M(μ-L) 3 ] ∞ , respectively. The sharing of an edge or a face of an octahedron gives a structure called bioctahedral. Many metal pentahalide and pentaalkoxide compounds exist in solution and in the solid state with bioctahedral structures. One example is niobium pentachloride . Metal tetrahalides often exist as polymers with edge-sharing octahedra. Zirconium tetrachloride is an example. [ 7 ] Compounds with face-sharing octahedral chains include MoBr 3 , RuBr 3 , and TlBr 3 . For compounds with the formula MX 6 , the chief alternative to octahedral geometry is a trigonal prismatic geometry, which has symmetry D 3h . In this geometry, the six ligands are also equivalent. There are also distorted trigonal prisms, with C 3v symmetry; a prominent example is W(CH 3 ) 6 . The interconversion of Δ - and Λ -complexes, which is usually slow, is proposed to proceed via a trigonal prismatic intermediate, a process called the " Bailar twist ". An alternative pathway for the racemization of these same complexes is the Ray–Dutt twist . For a free ion, e.g. gaseous Ni 2+ or Mo 0 , the d-orbitals are equal in energy; that is, they are "degenerate". In an octahedral complex, this degeneracy is lifted. The d z 2 and d x 2 − y 2 orbitals, the so-called e g set, which are aimed directly at the ligands, are destabilized. On the other hand, the d xz , d xy , and d yz orbitals, the so-called t 2g set, are stabilized. The labels t 2g and e g refer to irreducible representations , which describe the symmetry properties of these orbitals. The energy gap separating these two sets is the basis of crystal field theory and the more comprehensive ligand field theory . The loss of degeneracy upon the formation of an octahedral complex from a free ion is called crystal field splitting or ligand field splitting . The energy gap is labeled Δ o , which varies according to the number and nature of the ligands. If the symmetry of the complex is lower than octahedral, the e g and t 2g levels can split further. For example, the t 2g and e g sets split further in trans -ML a 4 L b 2 . Ligand strength for these electron donors follows the spectrochemical series ; so-called "weak field ligands" give rise to small Δ o and absorb light at longer wavelengths . Given that a virtually uncountable variety of octahedral complexes exist, it is not surprising that a wide variety of reactions have been described. These reactions can be classified as follows: Many reactions of octahedral transition metal complexes occur in water.
When an anionic ligand replaces a coordinated water molecule, the reaction is called an anation . The reverse reaction, in which water replaces an anionic ligand, is called aquation . For example, [CoCl(NH 3 ) 5 ] 2+ slowly yields [Co(NH 3 ) 5 (H 2 O)] 3+ in water, especially in the presence of acid or base. Addition of concentrated HCl converts the aquo complex back to the chloride via an anation process.
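Returning to the crystal field splitting described above, the occupation of the t 2g and e g sets for a given d-electron count, and the resulting stabilization in units of Δ o , can be written as a short rule. The sketch below uses the standard −0.4 Δ o / +0.6 Δ o weighting and ignores pairing energies; it is an illustrative aid, not a statement about any specific complex.

```python
def octahedral_filling(d_electrons, low_spin=False):
    """Distribute d electrons over t2g and eg for an octahedral complex
    and return (n_t2g, n_eg, stabilization in units of the splitting).
    High spin: fill all five orbitals singly before pairing.
    Low spin: fill t2g completely (up to 6) before eg."""
    if low_spin:
        n_t2g = min(d_electrons, 6)
        n_eg = d_electrons - n_t2g
    else:
        singles = min(d_electrons, 5)          # one electron per orbital first
        n_t2g = min(singles, 3)
        n_eg = singles - n_t2g
        remaining = d_electrons - singles      # then pairing, t2g before eg
        extra_t2g = min(remaining, 3)
        n_t2g += extra_t2g
        n_eg += remaining - extra_t2g
    stabilization = -0.4 * n_t2g + 0.6 * n_eg
    return n_t2g, n_eg, stabilization

# d6 (e.g. a generic M(II) ion): high spin t2g^4 eg^2, low spin t2g^6 eg^0
print(octahedral_filling(6))                 # (4, 2, -0.4)
print(octahedral_filling(6, low_spin=True))  # (6, 0, -2.4)
```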
https://en.wikipedia.org/wiki/Octahedral_molecular_geometry
In 4-dimensional geometry , the octahedral pyramid is bounded by one octahedron on the base and 8 triangular pyramid cells which meet at the apex. Since an octahedron has a circumradius divided by edge length less than one, [ 1 ] the triangular pyramids can be made with regular faces (as regular tetrahedra ) by computing the appropriate height. Having all regular cells, it is a Blind polytope . Two copies can be augmented to make an octahedral bipyramid , which is also a Blind polytope. The regular 16-cell has octahedral pyramids around every vertex, with the octahedron passing through the center of the 16-cell. Therefore, placing two regular octahedral pyramids base to base constructs a 16-cell. The 16-cell tessellates 4-dimensional space as the 16-cell honeycomb . Exactly 24 regular octahedral pyramids will fit together around a vertex in four-dimensional space (the apex of each pyramid). This construction yields a 24-cell with octahedral bounding cells, surrounding a central vertex with 24 edge-length-long radii. The 4-dimensional content of a unit-edge-length 24-cell is 2, so the content of the regular octahedral pyramid is 1/12. The 24-cell tessellates 4-dimensional space as the 24-cell honeycomb . The octahedral pyramid is the vertex figure for a truncated 5-orthoplex . The graph of the octahedral pyramid is the only possible minimal counterexample to Negami's conjecture , that the connected graphs with planar covers are themselves projective-planar. [ 2 ] Example 4-dimensional coordinates, with 6 points in the first 3 coordinates for the octahedron and the 4th dimension for the apex: (±1, 0, 0; 0), (0, ±1, 0; 0), (0, 0, ±1; 0), (0, 0, 0; 1). The dual to the octahedral pyramid is a cubic pyramid , seen as a cubic base and 6 square pyramids meeting at an apex . Example 4-dimensional coordinates, with 8 points in the first 3 coordinates for the cube and the 4th dimension for the apex: (±1, ±1, ±1; 0), (0, 0, 0; 1). The square-pyramidal pyramid , ( ) ∨ [( ) ∨ {4}], is a bisected octahedral pyramid. It has a square pyramid base, and 4 tetrahedra along with one more square pyramid meeting at the apex. It can also be seen in an edge-centered projection as a square bipyramid with four tetrahedra wrapped around the common edge. If the heights of the two apexes are the same, it can be given a higher symmetry name [( ) ∨ ( )] ∨ {4} = { } ∨ {4}, joining an edge to a perpendicular square. [ 3 ] The square-pyramidal pyramid can be distorted into a rectangular-pyramidal pyramid , { } ∨ [{ } × { }], or a rhombic-pyramidal pyramid , { } ∨ [{ } + { }], or other lower symmetry forms. The square-pyramidal pyramid exists as a vertex figure in uniform polytopes, including the bitruncated 5-orthoplex and the bitruncated tesseractic honeycomb . Example 4-dimensional coordinates, with 2 coordinates for the square and axial points for the pyramidal points: (±1, ±1; 0; 0), (0, 0; 1; 0), (0, 0; 0; 1).
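Using the first set of coordinates above, one can check numerically that placing the apex one unit along the fourth axis makes every lateral cell a regular tetrahedron: all octahedron edges and all apex-to-base edges come out to the same length √2. This is only an illustrative verification of the remark about computing the appropriate height.

```python
from itertools import combinations
from math import dist, isclose, sqrt

# Octahedron base in the first three coordinates, apex on the fourth axis
base = [(1, 0, 0, 0), (-1, 0, 0, 0), (0, 1, 0, 0), (0, -1, 0, 0), (0, 0, 1, 0), (0, 0, -1, 0)]
apex = (0, 0, 0, 1)

# Octahedron edges are the vertex pairs that are not antipodal (distance 2)
base_edges = [dist(p, q) for p, q in combinations(base, 2) if dist(p, q) < 2]
apex_edges = [dist(apex, p) for p in base]

print(all(isclose(d, sqrt(2)) for d in base_edges + apex_edges))  # True
```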
https://en.wikipedia.org/wiki/Octahedral_pyramid
In organic chemistry , a Platonic hydrocarbon is a hydrocarbon whose structure matches one of the five Platonic solids , with carbon atoms replacing its vertices, carbon–carbon bonds replacing its edges, and hydrogen atoms as needed. [ 1 ] [ page needed ] Not all Platonic solids have molecular hydrocarbon counterparts; those that do are the tetrahedron ( tetrahedrane ), the cube ( cubane ), and the dodecahedron ( dodecahedrane ). The possibility and existence of each platonic hydrocarbon is affected by the number of bonds to each carbon vertex and the angle strain between the bonds at each vertex. Tetrahedrane (C 4 H 4 ) is a hypothetical compound . It has not yet been synthesized without substituents , but it is predicted to be kinetically stable in spite of its angle strain. Some stable derivatives , including tetra( tert -butyl )tetrahedrane and tetra( trimethylsilyl )tetrahedrane, have been produced. Cubane (C 8 H 8 ) has been synthesized. Although it has high angle strain, cubane is kinetically stable , due to a lack of readily available decomposition paths. Angle strain would make an octahedron highly unstable due to inverted tetrahedral geometry at each vertex. There would also be no hydrogen atoms because four edges meet at each corner; thus, the hypothetical octahedrane molecule, with a molecular formula of C 6 , would be an allotrope of elemental carbon rather than a hydrocarbon. The existence of octahedrane cannot be ruled out completely, although calculations have shown that it is unlikely. [ 2 ] Dodecahedrane (C 20 H 20 ) was first synthesized in 1982, and has minimal angle strain; the tetrahedral angle is 109.5° and the dodecahedral angle is 108°, only a slight discrepancy. [ 3 ] The tetravalency (4-connectedness) of carbon excludes an icosahedron because 5 edges meet at each vertex. True pentavalent carbon is unlikely; methanium , nominally CH + 5 , usually exists as CH 3 (H 2 ) + . The hypothetical icosahedral C 12+ 12 lacks hydrogen so it is not a hydrocarbon; it is also an ion. Both icosahedral and octahedral structures have been observed in boron compounds [ 2 ] such as the dodecaborate ion and some of the carbon-containing carboranes . Increasing the number of atoms that comprise the carbon skeleton leads to a geometry that increasingly approximates a sphere, and the space enclosed in the carbon "cage" increases. This trend continues with buckyballs or spherical fullerene (C 60 ). Although not a Platonic hydrocarbon, buckminsterfullerene has the shape of a truncated icosahedron , an Archimedean solid . The concept can also be extended to regular Euclidean tilings, with the hexagonal tiling producing graphane . A square tiling (which would resemble an infinitely large fenestrane ) would suffer from the same problem as octahedrane, and the triangular tiling icosahedrane. No generalisations to hyperbolic tilings seem to be known. The regular convex 4-polytopes may also have hydrocarbon analogues; hypercubane has been proposed.
https://en.wikipedia.org/wiki/Octahedrane
Octamethylenediamine ( OMDA ) is an organic chemical compound from the substance group of aliphatic diamines. It is used as a versatile reaction intermediate in the manufacture of pesticides , especially fungicides . The industrial production of octamethylenediamine is carried out by the catalytic hydrogenation of suberonitrile at temperatures of 150 to 180 °C and a pressure of 50 to 180 bar in the presence of ammonia over unsupported heterogeneous cobalt catalysts: NC(CH 2 ) 6 CN + 4 H 2 → H 2 N(CH 2 ) 8 NH 2 . The reaction takes place in the liquid phase and is carried out continuously or batchwise. The catalyst is arranged as a fixed bed in a shaft, tube, or tube bundle reactor. Octamethylenediamine is combustible but difficult to ignite. It is a solid that is easily soluble in water. The aqueous solutions are strongly alkaline ( pH value of 12.1 at a concentration of 10 g/L). [ 4 ] Octamethylenediamine is used as a versatile intermediate in manufacturing pesticides , especially fungicides . [ 5 ] While octamethylenediamine is combustible, it is difficult to ignite because it is solid at moderate temperatures. It has a lower explosive limit (LEL) of 1.1% by volume and an upper explosive limit (UEL) of 6.8% by volume. The ignition temperature is 280 °C; the substance therefore falls into temperature class T3. With a flash point of 113 °C, the liquid is considered difficult to ignite. [ 4 ]
https://en.wikipedia.org/wiki/Octamethylenediamine
An octane rating , or octane number , is a standard measure of a fuel 's ability to withstand compression in an internal combustion engine without causing engine knocking . The higher the octane number, the more compression the fuel can withstand before detonating. Octane rating does not relate directly to the power output or the energy content of the fuel per unit mass or volume, but simply indicates the resistance to detonating under pressure without a spark. Whether a higher octane fuel improves or impairs an engine's performance depends on the design of the engine. In broad terms, fuels with a higher octane rating are used in higher-compression gasoline engines , which may yield higher power for these engines. The added power in such cases comes from the way the engine is designed to compress the air/fuel mixture, and not directly from the rating of the gasoline. [ 1 ] In contrast, fuels with lower octane (but higher cetane numbers ) are ideal for diesel engines because diesel engines (also called compression-ignition engines) do not compress the fuel, but rather compress only air, and then inject fuel into the air that was heated by compression. Gasoline engines rely on ignition of compressed air and fuel mixture, which is ignited only near the end of the compression stroke by electric spark plugs . Therefore, being able to compress the air/fuel mixture without causing detonation is important mainly for gasoline engines. Using gasoline with lower octane than an engine is built for may cause engine knocking and/or pre-ignition . [ 2 ] The octane rating of aviation gasoline was extremely important in determining aero engine performance in the aircraft of World War II . [ 3 ] The octane rating affected not only the performance of the gasoline, but also its versatility; the higher octane fuel allowed a wider range of lean to rich operating conditions. [ 3 ] In spark ignition internal combustion engines , knocking (also knock , detonation , spark knock , pinging , or pinking ) occurs when combustion of some of the air/fuel mixture in the cylinder does not result from propagation of the flame front ignited by the spark plug , but when one or more pockets of air/fuel mixture explode outside the envelope of the normal combustion front. The fuel-air charge is meant to be ignited by the spark plug only, and at a precise point in the piston's stroke. Knock occurs when the peak of the combustion process no longer occurs at the optimum moment for the four-stroke cycle . In a simple explanation, the forward moving wave of combustion that burns the hydrocarbon + oxygen mixture inside the cylinder like a wave that a surfer would wish to surf upon is violently disrupted by a secondary wave that has started elsewhere. The shock wave of these two separate waves creates the characteristic metallic "pinging" sound, and cylinder pressure increases dramatically. Effects of engine knocking range from inconsequential (incremental heating plus power loss) to completely destructive (detonation while one of the valves is still open). Knocking should not be confused with pre-ignition – they are two separate events with pre-ignition occurring before the combustion event. However, pre-ignition is highly correlated with knock because knock will cause rapid heat increase within the cylinder eventually leading to destructive pre-detonation. 
[ 4 ] Most engine management systems commonly found in automobiles today, typically electronic fuel injection (EFI), have a knock sensor that monitors if knock is being produced by the fuel being used. In modern computer-controlled engines, the ignition timing will be automatically altered by the engine management system to reduce the knock to an acceptable level. Octanes are a family of hydrocarbons that are typical components of gasoline. They are colorless liquids that boil around 125 °C (260 °F). One member of the octane family, 2,2,4-Trimethylpentane (iso-octane), is used as a reference standard to benchmark the tendency of gasoline or LPG fuels to resist self-ignition. The octane rating of gasoline is measured in a test engine and is defined by comparison with the mixture of 2,2,4-trimethylpentane (iso-octane) and normal heptane that would have the same anti-knocking capability as the fuel under test. The percentage, by volume, of 2,2,4-trimethylpentane in that mixture is the octane number of the fuel. For example, gasoline with the same knocking characteristics as a mixture of 90% iso-octane and 10% heptane would have an octane rating of 90. [ 5 ] A rating of 90 does not mean that the gasoline contains just iso-octane and heptane in these proportions, but that it has the same detonation resistance properties (generally, gasoline sold for common use never consists solely of iso-octane and heptane; it is a mixture of many hydrocarbons and often other additives). Octane ratings are not indicators of the energy content of fuels. (See Effects below and Heat of combustion ). They are only a measure of the fuel's tendency to burn in a controlled manner, rather than exploding in an uncontrolled manner. [ 6 ] Where the octane number is raised by blending in ethanol, energy content per volume is reduced. Ethanol energy density can be compared with gasoline in heat-of-combustion tables. It is possible for a fuel to have a Research Octane Number (RON) more than 100, because iso-octane is not the most knock-resistant substance available today. Racing fuels, avgas , LPG and alcohol fuels such as methanol may have octane ratings of 110 or significantly higher. Typical "octane booster" gasoline additives include MTBE , ETBE , toluene and iso-octane itself. Lead in the form of tetraethyllead was once a common additive, but concerns about its toxicity have led to its use for fuels for road vehicles being progressively phased out worldwide beginning in the 1970s. [ 7 ] The most common type of octane rating worldwide is the Research Octane Number ( RON ). RON is determined by running the fuel in a test engine at 600 rpm with a variable compression ratio under controlled conditions, and comparing the results with those for mixtures of iso-octane and n-heptane. [ 8 ] The compression ratio is varied during the test to challenge the fuel's antiknocking tendency, as an increase in the compression ratio will increase the chances of knocking. Another type of octane rating, called Motor Octane Number ( MON ), is determined at 900 rpm engine speed instead of the 600 rpm for RON. [ 2 ] MON testing uses a similar test engine to that used in RON testing, but with a preheated fuel mixture, higher engine speed, and variable ignition timing to further stress the fuel's knock resistance. Depending on the composition of the fuel, the MON of a modern pump gasoline will be about 8 to 12 lower than the RON, [ citation needed ] but there is no direct link between RON and MON. See the table below. 
In Canada, The United States, and Mexico, the advertised octane rating is the average of the RON and the MON, called the Anti-Knock Index ( AKI ). It is often written on pumps as (R+M)/2 . AKI is also sometimes called PON (Pump Octane Number). Because of the 8 to 12 octane number difference between RON and MON noted above, the AKI shown in Canada and the United States is 4 to 6 octane numbers lower than elsewhere in the world for the same fuel. This difference between RON and MON is known as the fuel's sensitivity, [ 9 ] and is not typically published for those countries that use the Anti-Knock Index labelling system. See the table in the following section for a comparison. Another type of octane rating, called Observed Road Octane Number ( RdON ), is derived from testing the gasoline in ordinary multi-cylinder engines (rather than in a purpose-built test engine), normally at wide open throttle. This type of test was developed in the 1920s and is still reliable today. The original RdON tests were done in cars on the road, but as technology developed the testing was moved to chassis dynamometers with environmental controls to improve consistency. [ 10 ] The evaluation of the octane number by either of the two laboratory methods requires a special engine built to match the tests' rigid standards, and the procedure can be both expensive and time-consuming. The standard engine required for the test may not always be available, especially in out-of-the-way places or in small or mobile laboratories. These and other considerations led to the search for a rapid method for the evaluation of the anti-knock quality of gasoline. Such substitute methods include FTIR, near infrared on-line analyzers, and others. Deriving an equation that can be used to calculate ratings accurately enough would also serve the same purpose, with added advantages. The term Octane Index is often used to refer to the use of an equation to determine a theoretical rating, in contradistinction to the direct measurements required for research or motor octane numbers. An octane index can be of great service in the blending of gasoline. Motor gasoline, as marketed, is usually a blend of several types of refinery grades that are derived from different processes such as straight-run gasoline, reformate, cracked gasoline etc. These different grades are blended in amounts that will meet final product specifications. Most refiners produce and market more than one grade of motor gasoline, differing principally in their anti-knock quality. Being able to make sufficiently accurate estimates of the octane rating that will result from blending different refinery products is essential, something for which the calculated octane index is specially suited. [ 11 ] Aviation gasolines used in piston aircraft engines common in general aviation have a slightly different method of measuring the octane of the fuel. Similar to an AKI, it has two different ratings, although it is usually referred to only by the lower of the two. One is referred to as the "aviation lean" rating, which for ratings up to 100 is the same as the MON of the fuel. [ 12 ] The second is the "aviation rich" rating and corresponds to the octane rating of a test engine under forced induction operation common in high-performance and military piston aircraft. This utilizes a supercharger , and uses a significantly richer fuel/air ratio for improved detonation resistance. [ 9 ] [ unreliable source? 
] The most common currently used fuel, 100LL , has an aviation lean rating of 100 octane, and an aviation rich rating of 130. [ 13 ] The RON/MON values of n- heptane and iso-octane are exactly 0 and 100, respectively, by the definition of octane rating. The following table lists octane ratings for various other fuels. [ 14 ] [ 15 ] Higher octane ratings correlate to higher activation energies : the amount of applied energy required to initiate combustion. Since higher octane fuels have higher activation energy requirements, it is less likely that a given compression will cause uncontrolled ignition, otherwise known as autoignition, self-ignition, pre-ignition, detonation, or knocking. Because octane is a measured and/or calculated rating of the fuel's ability to resist autoignition, the higher the octane of the fuel, the harder that fuel is to ignite and the more heat is required to ignite it. The result is that a hotter ignition spark is required for ignition. Creating a hotter spark requires more energy from the ignition system, which in turn increases the parasitic electrical load on the engine. The spark also must begin earlier in order to generate sufficient heat at the proper time for precise ignition. As octane, ignition spark energy, and the need for precise timing increase, the engine becomes more difficult to "tune" and keep "in tune". The resulting sub-optimal spark energy and timing can cause major engine problems, from a simple "miss" to uncontrolled detonation and catastrophic engine failure. Mechanically within the cylinder, stability can be visualized as having a flame wave initiate at the spark plug and then "travel in a fairly uniform manner across the combustion chamber" [ 39 ] with the expanding gas mix pushing the piston throughout the entirety of the power stroke. A stable gasoline and air mix will combust when the flame wave reaches the molecules, adding heat at the interface. Knock occurs when a secondary flame wave forms from instability and then travels against the path of the primary flame wave, thus depriving the power stroke of its uniformity and causing issues including power loss and heat buildup. [ 40 ] The other rarely-discussed reality with high-octane fuels associated with "high performance" is that as octane increases, the specific gravity and energy content of the fuel per unit of weight are reduced. The net result is that to make a given amount of power , more high-octane fuel must be burned in the engine. Lighter and "thinner" fuel also has a lower specific heat , so the practice of running an engine "rich" to use excess fuel to aid in cooling requires richer and richer mixtures as octane increases. Higher-octane, lower-energy-dense "thinner" fuels often contain alcohol compounds incompatible with the stock fuel system components, which also makes them hygroscopic . They also evaporate away much more easily than heavier, lower-octane fuel which leads to more accumulated contaminants in the fuel system. It is typically the hydrochloric acids that form due to that water [ citation needed ] and the compounds in the fuel that have the most detrimental effects on the engine fuel system components, as such acids corrode many metals used in gasoline fuel systems. During the compression stroke of an internal combustion engine, the temperature of the air-fuel mix rises as it is compressed, in accordance with the ideal gas law . 
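As a rough illustration of why higher compression ratios demand more knock-resistant fuel, the end-of-compression charge temperature can be estimated by treating the compression stroke as reversible adiabatic compression of an ideal gas, T2 = T1 · CR^(γ−1). The γ ≈ 1.35 value and the intake temperature below are illustrative assumptions; real engines run cooler than this because of heat loss and charge cooling from fuel evaporation.

```python
def compression_temperature(t_intake_k, compression_ratio, gamma=1.35):
    """Rough end-of-compression temperature for reversible adiabatic
    compression of an ideal gas: T2 = T1 * CR ** (gamma - 1)."""
    return t_intake_k * compression_ratio ** (gamma - 1)

# Illustrative: a 330 K intake charge at compression ratios of 8:1 and 12:1
print(round(compression_temperature(330, 8)))   # ~683 K
print(round(compression_temperature(330, 12)))  # ~787 K, a noticeably hotter charge
```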
Higher compression ratios necessarily add parasitic load to the engine, and are only necessary if the engine is being specifically designed to run on high-octane fuel. Aircraft engines run at relatively low speeds and are " undersquare ". They run best on lower-octane, slower-burning fuels that require less heat and a lower compression ratio for optimum vaporization and uniform fuel-air mixing, with the ignition spark coming as late as possible in order to extend the production of cylinder pressure and torque as far down the power stroke as possible. The main reason for using high-octane fuel in air-cooled engines is that it is more easily vaporized in a cold carburetor and engine and absorbs less intake air heat which greatly reduces the tendency for carburetor icing to occur. With their reduced densities and weight per volume of fuel, the other obvious benefit is that an aircraft with any given volume of fuel in the tanks is automatically lighter. And since many airplanes are flown only occasionally and may sit unused for weeks or months, the lighter fuels tend to evaporate away and leave behind fewer deposits such as "varnish" (gasoline components, particularly alkenes and oxygenates slowly polymerize into solids). [ clarification needed ] Aircraft also typically have dual "redundant" ignition systems which are nearly impossible to tune and time to produce identical ignition timing, so using a lighter fuel that's less prone to autoignition is a wise "insurance policy". For the same reasons, those lighter fuels which are better solvents are much less likely to cause any "varnish" or other fouling on the "backup" spark plugs. [ citation needed ] In almost all general aviation piston engines, the fuel mixture is directly controlled by the pilot, via a knob and cable or lever similar to (and next to) the throttle control. Leaning – reducing the mixture from its maximum amount – must be done with knowledge, as some combinations of fuel mixture and throttle position (that produce the highest ) can cause detonation and/or pre-ignition , in the worst case destroying the engine within seconds. [ citation needed ] Pilots are taught in primary training to avoid settings that produce the highest exhaust gas temperatures, and run the engine either "rich of peak EGT " (more fuel than can be burned with the available air) or "lean of peak" (less fuel, leaving some oxygen in the exhaust) as either will keep the fuel-air mixture from detonating prematurely. [ 41 ] Because of the high cost of unleaded, high-octane avgas , and possible increased range before refueling, some general aviation pilots attempt to save money by tuning their fuel-air mixtures and ignition timing to run "lean of peak". Additionally, the decreased air density at higher altitudes (such as Colorado) and temperatures (as in summer) requires leaning (reduction in amount of fuel per volume or mass of air) for the peak EGT and power (crucial for takeoff). The selection of octane ratings available at filling stations can vary greatly between countries. Due to its name, the chemical "octane" is often misunderstood as the only substance that determines the octane rating (or octane number) of a fuel. This is an inaccurate description. In reality, the octane rating is defined as a number describing the stability and ability of a fuel to prevent an engine from unwanted combustions [ 83 ] that occur spontaneously in the other regions within a cylinder (i.e., delocalized explosions from the spark plug). 
This phenomenon of combustion is more commonly known as engine knocking or self-ignition, which causes damage to pistons over time and reduces the lifespan of engines. In 1927, Graham Edgar [ 84 ] devised the method of using iso-octane and n-heptane as reference chemicals in order to rate the knock resistance of a fuel with respect to this isomer of octane, [ 85 ] thus the name "octane rating". By definition, the isomers iso-octane and n-heptane have an octane rating of 100 and 0, respectively. [ 86 ] Because of its more volatile nature, n-heptane ignites and knocks readily, which gives it a relatively low octane rating; [ 87 ] the isomer iso-octane causes less knocking because it is more branched and combusts more smoothly. In general, branched compounds with a higher intermolecular force (e.g., London dispersion force for iso-octane) will have a higher octane rating, as they are harder to ignite. [ 88 ] Octane isomers such as n-octane and 2,3,3-trimethylpentane have octane ratings of -20 and 106.1, respectively ( RON measurement). [ 89 ] The large differences between the octane ratings of these isomers show that the compound octane itself is clearly not the only factor that determines octane ratings, especially for commercial fuels, which consist of a wide variety of compounds. "Octane" is colloquially used in the expression "high-octane". [ 90 ] The term is used to describe a powerful action because of the association with the concept of "octane rating". This is a misleading term, because the octane rating of gasoline is not directly related to the power output of an engine. Using gasoline of a higher octane than an engine is designed for cannot increase its power output. Octane became well known in American popular culture in the 1960s, when gasoline companies boasted of "high octane" levels in their gasoline advertisements. The compound adjective "high-octane", meaning powerful or dynamic, is recorded in a figurative sense from 1944. By the 1990s, the phrase was commonly being used as a word intensifier, and it has found a place in modern English slang.
https://en.wikipedia.org/wiki/Octane_rating
Octanitrocubane (molecular formula: C 8 (NO 2 ) 8 ) is a proposed high explosive that, like TNT , is shock-insensitive (not readily detonated by shock). [ 1 ] The octanitrocubane molecule has the same chemical structure as cubane (C 8 H 8 ) except that each of the eight hydrogen atoms is replaced by a nitro group (NO 2 ). As of 1998, octanitrocubane had not been produced in quantities large enough to test its performance as an explosive. [ 2 ] It is, however, not as powerful an explosive as once thought, as the high-density theoretical crystal structure has not been achieved. For this reason, heptanitrocubane , the slightly less nitrated form, is believed to have marginally better performance, despite having a worse oxygen balance. Octanitrocubane is thought to have 20–25% greater performance than HMX (octogen). This increase in power is due to its highly expansive breakdown into CO 2 and N 2 , as well as to the presence of strained chemical bonds in the molecule which have stored potential energy . In addition, it produces no water vapor upon combustion, making it less visible, and both the chemical itself and its decomposition products ( nitrogen and carbon dioxide ) are considered to be non-toxic. Octanitrocubane was first synthesized by Philip Eaton (who was also the first to synthesize cubane in 1964) and Mao-Xi Zhang at the University of Chicago in 1999, with the structure proven by crystallographer Richard Gilardi of the United States Naval Research Laboratory . [ 3 ] [ 4 ] Although octanitrocubane is predicted to be one of the most effective explosives, the difficulty of its synthesis inhibits practical use. Philip Eaton's synthesis was difficult and lengthy, and required cubane (rare to begin with) as a starting point. As a result, octanitrocubane is more valuable, gram for gram, than gold . [ 5 ] A proposed path to synthesis is the cyclotetramerization of the as yet undiscovered and presumably highly unstable dinitroacetylene . [ 6 ]
https://en.wikipedia.org/wiki/Octanitrocubane
The n -octanol-water partition coefficient, K ow is a partition coefficient for the two-phase system consisting of n -octanol and water. [ 1 ] K ow is also frequently referred to by the symbol P, especially in the English literature. It is also called n -octanol-water partition ratio . [ 2 ] [ 3 ] [ 4 ] K ow serves as a measure of the relationship between lipophilicity (fat solubility) and hydrophilicity (water solubility) of a substance. The value is greater than one if a substance is more soluble in fat-like solvents such as n-octanol, and less than one if it is more soluble in water. [ citation needed ] If a substance is present as several chemical species in the octanol-water system due to association or dissociation , each species is assigned its own K ow value. A related value, D, does not distinguish between different species, only indicating the concentration ratio of the substance between the two phases. [ citation needed ] In 1899, Charles Ernest Overton and Hans Horst Meyer independently proposed that the tadpole toxicity of non-ionizable organic compounds depends on their ability to partition into lipophilic compartments of cells. They further proposed the use of the partition coefficient in an olive oil/water mixture as an estimate of this lipophilic associated toxicity. Corwin Hansch later proposed the use of n-octanol as an inexpensive synthetic alcohol that could be obtained in a pure form as an alternative to olive oil. [ 5 ] [ 6 ] K ow values are used, among others, to assess the environmental fate of persistent organic pollutants . Chemicals with high partition coefficients, for example, tend to accumulate in the fatty tissue of organisms ( bioaccumulation ). Under the Stockholm Convention , chemicals with a log K ow greater than 5 are considered to bioaccumulate. [ 7 ] Furthermore, the parameter plays an important role in drug research ( Rule of Five ) and toxicology . Ernst Overton and Hans Meyer discovered as early as 1900 that the efficacy of an anaesthetic increased with increasing K ow value (the so-called Meyer-Overton rule ). [ 8 ] K ow values also provide a good estimate of how a substance is distributed within a cell between the lipophilic biomembranes and the aqueous cytosol . [ citation needed ] Since it is not possible to measure K ow for all substances, various models have been developed to allow for their prediction, e.g. Quantitative structure–activity relationships (QSAR) or linear free energy relationships (LFER) [ 9 ] [ 10 ] such as the Hammett equation . [ 9 ] A variant of the UNIFAC system can also be used to estimate octanol-water partition coefficients. [ 11 ] Values for log K ow typically range between -3 (very hydrophilic) and +10 (extremely lipophilic/hydrophobic). [ 12 ] The values listed here [ 13 ] are sorted by the partition coefficient. Acetamide is hydrophilic, and 2,2′,4,4′,5-Pentachlorobiphenyl is lipophilic.
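As a rough illustration of the quantities defined above, the sketch below computes K ow as the ratio of a solute's equilibrium concentrations in the n-octanol and water phases, takes the base-10 logarithm, and applies the Stockholm Convention screening threshold of log K ow > 5 mentioned in the text. The concentration values and the function name are invented for the example; real determinations use standardized experimental methods such as the shake-flask technique.

```python
import math

def log_kow(conc_octanol, conc_water):
    """Base-10 logarithm of the n-octanol/water partition coefficient.

    K_ow is the ratio of the solute's equilibrium concentration in the
    n-octanol phase to its concentration in the water phase (same units).
    """
    return math.log10(conc_octanol / conc_water)

# Hypothetical equilibrium concentrations (mol/L) for an example solute.
value = log_kow(conc_octanol=2.4e-3, conc_water=1.2e-8)
print(f"log Kow = {value:.2f}")                                  # log Kow = 5.30
print("screened as bioaccumulative (log Kow > 5):", value > 5)   # True
```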
https://en.wikipedia.org/wiki/Octanol-water_partition_coefficient
Octanoyl-coenzyme A is the endpoint of beta oxidation in peroxisomes. It is produced alongside acetyl-CoA and transferred to the mitochondria to be further oxidized into acetyl-CoA. [ 1 ]
https://en.wikipedia.org/wiki/Octanoyl-CoA
The octant , also called a reflecting quadrant , is a reflecting instrument used in navigation . The name octant derives from the Latin octans meaning eighth part of a circle , because the instrument's arc is one eighth of a circle . Reflecting quadrant derives from the instrument using mirrors to reflect the path of light to the observer and, in doing so, doubles the angle measured. This allows the instrument to use a one-eighth of a turn to measure a quarter- turn or quadrant . Isaac Newton 's reflecting quadrant was invented around 1699. [ 1 ] A detailed description of the instrument was given to Edmond Halley , but the description was not published until after Halley's death in 1742. It is not known why Halley did not publish the information during his life, as this prevented Newton from getting the credit for the invention that is generally given to John Hadley and Thomas Godfrey . One copy of this instrument was constructed by Thomas Heath (instrument maker) and may have been shown in Heath's shop window prior to its being published by the Royal Society in 1742. [ 2 ] Newton's instrument used two mirrors, but they were used in an arrangement somewhat different from the two mirrors found in modern octants and sextants . The diagram on the right shows the configuration of the instrument. [ 3 ] The 45° arc of the instrument (PQ), was graduated with 90 divisions of a half-degree each. Each such division was subdivided into 60 parts and each part further divided into sixths. This results in the arc being marked in degrees, minutes and sixths of a minute (10 seconds). Thus the instrument could have readings interpolated to 5 seconds of arc. This fineness of graduation is only possible due to the large size of the instrument - the sighting telescope alone was three to four feet long. A sighting telescope (AB), three or four feet long, was mounted along one side of the instrument. A horizon mirror was fixed at a 45° angle in front of the telescope's objective lens (G). This mirror was small enough to allow the observer to see the image in the mirror on one side and to see directly ahead on the other. The index arm (CD) held an index mirror (H), also at 45° to the edge of the index arm. The reflective sides of the two mirrors nominally faced each other, so that the image seen in the first mirror is that reflected from the second. With the two mirrors parallel, the index reads 0°. The view through the telescope sees directly ahead on one side and the view from the mirror G sees the same image reflected from mirror H (see detail drawing to the right). When the index arm is moved from zero to a large value, the index mirror reflects an image that is in a direction away from the direct line of sight. As the index arm movement increases, the line of sight for the index mirror moves toward S (to the right in the detail image). This shows a slight deficiency with this mirror arrangement. The horizon mirror will block the view of the index mirror at angles approaching 90°. The length of the sighting telescope seems remarkable, given the small size of the telescopes on modern instruments. This was likely Newton's choice of a way to reduce chromatic aberrations . Short– focal length telescopes, prior to the development of achromatic lenses , produced an objectionable degree of aberration, so much so that it could affect the perception of a star's position. 
Long focal lengths were the solution, and this telescope would likely have had both a long–focal length objective lens and a long–focal length eyepiece . This would decrease aberrations without excessive magnification. Two men independently developed the octant around 1730: John Hadley (1682–1744), an English mathematician, and Thomas Godfrey (1704–1749), a glazier in Philadelphia . While both have a legitimate and equal claim to the invention, Hadley generally gets the greater share of the credit. This reflects the central role that London and the Royal Society played in the history of scientific instruments in the eighteenth century. Two others who created octants during this period were Caleb Smith, an English insurance broker with a strong interest in astronomy (in 1734), and Jean-Paul Fouchy, a mathematics professor and astronomer in France (in 1732). Hadley produced two versions of the reflecting quadrant. Only the second is well known and is the familiar octant. Hadley's first reflecting quadrant was a simple device with a frame spanning a 45° arc. In the image at the right, from Hadley's article in the Philosophical Transactions of the Royal Society, [ 4 ] you can see the nature of his design. A small sighting telescope was mounted on the frame along one side. One large index mirror was mounted at the point of rotation of the index arm. A second, smaller horizon mirror was mounted on the frame in the line of sight of the telescope. The horizon mirror allows the observer to see the image of the index mirror in one half of the view and to see a distant object in the other half. A shade was mounted at the vertex of the instrument to allow one to observe a bright object. The shade pivots to allow it to move out of the way for stellar observations. Observing through the telescope, the navigator would sight one object directly ahead. The second object would be seen by reflection in the horizon mirror. The light in the horizon mirror is reflected from the index mirror. By moving the index arm, the index mirror can be made to reveal any object up to 90° from the direct line of sight. When both objects are in the same view, aligning them together allows the navigator to measure the angular distance between them. Very few of the original reflecting quadrant designs were ever produced. One, constructed by Baradelle, is in the collection of the Musée de la Marine , Paris. [ 5 ] Hadley's second design had the form familiar to modern navigators. The image to the right, also taken from his Royal Society publication, [ 4 ] shows the details. He placed an index mirror on the index arm. Two horizon mirrors were provided. The upper mirror, in the line of the sighting telescope, was small enough to allow the telescope to see directly ahead as well as seeing the reflected view. The reflected view was that of the light from the index mirror. As in the previous instrument, the arrangement of the mirrors allowed the observer to simultaneously see an object straight ahead and to see one reflected in the index mirror to the horizon mirror and then into the telescope. Moving the index arm allowed the navigator to see any object within 90° of the direct view. The significant difference with this design was that the mirrors allowed the instrument to be held vertically rather than horizontally and it provided more room for configuring the mirrors without suffering from mutual interference. The second horizon mirror was an interesting innovation. The telescope was removable. 
It could be remounted so that the telescope viewed the second horizon mirror from the opposite side of the frame. By mounting the two horizon mirrors at right angles to each other and permitting the movement of the telescope, the navigator could measure angles from 0 to 90° with one horizon mirror and from 90° to 180° with the other. This made the instrument very versatile. For unknown reasons, this feature was not implemented on octants in general use. Comparing this instrument to the photo of a typical octant at the top of the article, one can see that only a few significant details distinguish the more modern design. Caleb Smith, an English insurance broker with a strong interest in astronomy, had created an octant in 1734. He called it an Astroscope or Sea-Quadrant. [ 6 ] His design used a fixed prism in addition to an index mirror to provide reflective elements. Prisms provided advantages over mirrors in an era when polished speculum metal mirrors were inferior and both the silvering of a mirror and the production of glass with flat, parallel surfaces were difficult. In the drawing to the right, the horizon element (B) could be a mirror or a prism. On the index arm, the index mirror (A) rotated with the arm. A sighting telescope was mounted on the frame (C). The index did not use a vernier or other device at the scale (D). Smith called the instrument's index arm a label, in the manner of Elton for his mariner's quadrant. [ 7 ] Various design elements of Smith's instrument made it inferior to Hadley's octant and it was not used significantly. [ 5 ] For example, one problem with the Astroscope was the angle of the observer's line of sight. Because the observer had to look down, observing was more difficult than it would have been with the head in a normal orientation. The octant provided a number of advantages over previous instruments. The sight was easy to align because the horizon and the star seemed to move together as the ship pitched and rolled. This also created a situation where the error in observation was less dependent on the observer, as they could directly see both objects at once. With the manufacturing techniques available in the 18th century, the instruments were capable of very accurate readings. The size of the instruments was reduced with no loss of accuracy. An octant could be half the size of a Davis quadrant with no increase in error. Using shades over the light paths, one could observe the sun directly, while moving the shades out of the light path allowed the navigator to observe faint stars. This made the instrument usable both night and day. By 1780, the octant and sextant had almost completely displaced all previous navigational instruments. [ 5 ] Early octants were constructed primarily in wood, with later versions incorporating ivory and brass components. The earliest mirrors were polished metal, since the technology to produce silvered glass mirrors with flat, parallel surfaces was limited. As glass polishing techniques improved, glass mirrors began to be provided. These used coatings of mercury-containing tin amalgam; coatings of silver or aluminum were not available until the 19th century. The poor optical quality of the early polished speculum metal mirrors meant that telescopic sights were not practical. For that reason, most early octants employed a simple naked-eye sighting pinnula instead. Early octants retained some of the features common to backstaves, such as transversals on the scale.
However, as engraved, they showed the instrument to have an apparent accuracy of only two minutes of arc, while the backstaff appeared to be accurate to one minute. The use of the vernier scale allowed the scale to be read to one minute, and so improved the marketability of the instrument. This, and the ease of making verniers compared to transversals, led to the adoption of the vernier on octants produced later in the 18th century. [ 8 ] Octants were produced in large numbers. In wood and ivory, their relatively low price compared to an all-brass sextant made them a popular instrument. The design was standardized, with many manufacturers using the identical frame style and components. Different shops could make different components, with woodworkers specializing in frames and others in the brass components. For example, Spencer, Browning and Rust, a manufacturer of scientific instruments in England from 1787 to 1840 (operating as Spencer, Browning and Co. after 1840), used a Ramsden dividing engine to produce graduated scales in ivory. These were widely used by others, and the SBR initials could be found on octants from many other manufacturers. [ 9 ] Examples of these very similar octants are in the photos in this article. The image at the top is essentially the same instrument as the one in the detail photos. However, they are from two different instrument makers - the upper is labelled Crichton - London, Sold by J Berry Aberdeen, while the detail images are of an instrument from Spencer, Browning & Co. London. The only obvious difference is the presence of horizon shades on the Crichton octant that are not on the other. These octants were available with many options. A basic octant with graduations directly on the wood frame was the least expensive. These dispensed with a telescopic sight, using a single- or double-holed sighting pinnula instead. Ivory scales would increase the price, as would the use of a brass index arm or a vernier. In 1767 the first edition of The Nautical Almanac tabulated lunar distances, enabling navigators to find the current time from the angle between the Sun and the Moon. This angle is sometimes larger than 90°, and thus not possible to measure with an octant. For that reason, Admiral John Campbell, who conducted shipboard experiments with the lunar distance method, suggested a larger instrument, and the sextant was developed. [ 10 ] From that time onward, the sextant was the instrument that experienced significant development and improvements and was the instrument of choice for naval navigators. The octant continued to be produced well into the 19th century, though it was generally a less accurate and less expensive instrument. The lower price of the octant, including versions without a telescope, made it a practical instrument for ships in the merchant and fishing fleets. One common practice among navigators up to the late nineteenth century was to use both a sextant and an octant. The sextant was used with great care and only for lunars, while the octant was used for routine meridional altitude measurements of the Sun every day. [ 7 ] This protected the very accurate and pricier sextant, while using the more affordable octant where it performed well. From the early 1930s through the end of the 1950s, several types of civilian and military bubble octant instruments were produced for use aboard aircraft.
[ 11 ] All were fitted with an artificial horizon in the form of a bubble, which was centered to align the horizon for a navigator flying thousands of feet above the Earth; some had recording features. [ 12 ] Use and adjustment of the octant is essentially identical to the navigator's sextant . Hadley's was not the first reflecting quadrant. Robert Hooke invented a reflecting quadrant in 1684 [ 13 ] and had written about the concept as early as 1666. [ 14 ] Hooke's was a single-reflecting instrument. [ 14 ] Other octants were developed by Jean-Paul Fouchy and Caleb Smith in the early 1730s, however, these did not become significant in the history of navigation instruments. Media related to Octants at Wikimedia Commons
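The relationship between the physical arc and the measured angle described above (double reflection doubles the angle, so a 45° arc reads a full 90°, and the half-degree divisions with their 60 × 6 subdivisions correspond to 10 arc-seconds of measured angle) can be restated numerically. The short sketch below is only an arithmetic restatement of that geometry, not a model of any particular instrument.

```python
# Arithmetic restatement of the octant geometry described in the text.
DOUBLING = 2  # a reflecting instrument measures twice the index-arm rotation

def measured_angle(arm_rotation_deg):
    """Angle between the two sighted objects for a given index-arm rotation."""
    return DOUBLING * arm_rotation_deg

# Full scale: a 45-degree physical arc measures a 90-degree quarter turn.
print(measured_angle(45.0))                      # 90.0

# Finest engraved division of the measured angle, in arc-seconds:
# one degree (per half-degree physical division), split into 60 parts and then sixths.
finest_division_arcsec = (1 * 3600) / 60 / 6
print(finest_division_arcsec)                    # 10.0
print(finest_division_arcsec / 2)                # 5.0, the interpolated reading claimed for Newton's design
```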
https://en.wikipedia.org/wiki/Octant_(instrument)
In geometry, an octant of a sphere is a spherical triangle with three right angles and three right sides. It is sometimes called a trirectangular (spherical) triangle. [ 1 ] It is one face of a spherical octahedron. [ 2 ] For a sphere embedded in three-dimensional Euclidean space, the vectors from the sphere's center to each vertex of an octant are the basis vectors of a Cartesian coordinate system relative to which the sphere is a unit sphere. The spherical octant itself is the intersection of the sphere with one octant of space. Uniquely among spherical triangles, the octant is its own polar triangle. [ 3 ] The octant can be parametrized using a rational quartic Bézier triangle. [ 4 ] The solid angle subtended by a spherical octant is π/2 steradians, or one-eighth of a spat, the solid angle of a full sphere. [ 5 ]
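The solid-angle value quoted above follows directly from the area of the unit sphere: the whole sphere subtends 4π steradians (one spat), and the octant is one of eight congruent pieces. The one-line check below is included only as arithmetic confirmation of the stated value.

```python
import math

full_sphere = 4 * math.pi        # solid angle of the whole sphere (one spat), in steradians
octant = full_sphere / 8         # one of eight congruent spherical triangles
print(octant, math.pi / 2)       # both print 1.5707963..., i.e. pi/2 steradians
```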
https://en.wikipedia.org/wiki/Octant_of_a_sphere
Octasulfur is an inorganic substance with the chemical formula S 8 . It is an odourless and tasteless yellow solid, and is a major industrial chemical. It is the most common allotrope of sulfur and occurs widely in nature. [ 4 ] Octasulfur is the most commonly used name for this chemical. It is systematically named cyclo-octasulfur (which is the preferred IUPAC name) and cyclooctasulfane. It is also the final member of the thiocane heterocyclic series, in which every carbon atom is replaced by a sulfur atom, so this sulfur allotrope is systematically named octathiocane as well. The chemical consists of rings of 8 sulfur atoms. It adopts a crown conformation with D 4d point group symmetry. The S–S bond lengths are equal, at about 2.05 Å. Octasulfur crystallizes in three distinct polymorphs: orthorhombic, and two monoclinic forms, of which only two are stable at standard conditions. The orthorhombic crystal form is the accepted standard state. The remaining polymorph is only stable between 96 and 115 °C at 100 kPa. Octasulfur forms several allotropes: α-sulfur, β-sulfur, γ-sulfur, and λ-sulfur. λ-Sulfur is the liquid form of octasulfur, from which γ-sulfur can be crystallised by quenching. If λ-sulfur is crystallised slowly, it will revert to β-sulfur. Since it must have been heated over 115 °C, neither crystallised β-sulfur nor γ-sulfur will be pure. The only known method of obtaining pure γ-sulfur is by crystallising from solution. Octasulfur easily forms large crystals, which are typically yellow and somewhat translucent. Octasulfur is not typically produced as S 8 per se. It is the main (99%) component of elemental sulfur, which is recovered from volcanic sources and is a major product of the Claus process, associated with petroleum refineries.
https://en.wikipedia.org/wiki/Octasulfur
Octatetraynyl radical ( C 8 H ) is an organic free radical with eight carbon atoms linked in a linear chain with alternating single bonds and triple bonds ( H−C≡C−C≡C−C≡C−C≡C• ). In 2007 negatively charged octatetraynyl was detected in Galactic molecular source TMC-1 , making it the second type of anion to be found in the interstellar medium (after hexatriynyl radical ) and the largest such molecule detected to date. [ 1 ] [ 2 ]
https://en.wikipedia.org/wiki/Octatetraynyl_radical
In chemistry, an octatomic element is an element that, at some standard temperature and pressure, is in a configuration of eight atoms bound together (a homonuclear molecule). The canonical example is sulfur, S 8 , [ 1 ] but red selenium is also an octatomic element stable at room temperature. Octaoxygen is also known, but it is extremely unstable.
https://en.wikipedia.org/wiki/Octatomic_element
The octet is a unit of digital information in computing and telecommunications that consists of eight bits . The term is often used when the term byte might be ambiguous, as the byte has historically been used for storage units of a variety of sizes. The term octad(e) for eight bits is no longer common. [ 1 ] [ 2 ] The international standard IEC 60027-2, chapter 3.8.2, states that a byte is an octet of bits. However, the unit byte has historically been platform -dependent and has represented various storage sizes in the history of computing . Due to the influence of several major computer architectures and product lines, the byte became overwhelmingly associated with eight bits. This meaning of byte is codified in such standards as ISO/IEC 80000-13 . While byte and octet are often used synonymously, those working with certain legacy systems are careful to avoid ambiguity. [ citation needed ] Octets can be represented using number systems of varying bases such as the hexadecimal , decimal , or octal number systems . The binary value of all eight bits set (or activated) is 11111111 2 , equal to the hexadecimal value FF 16 , the decimal value 255 10 , and the octal value 377 8 . One octet can be used to represent decimal values ranging from 0 to 255. The term octet (symbol: o [ nb 1 ] ) is often used when the use of byte might be ambiguous. It is frequently used in the Request for Comments (RFC) publications of the Internet Engineering Task Force to describe storage sizes of network protocol parameters. The earliest example is RFC 635 from 1974. In 2000, Bob Bemer claimed to have earlier proposed the usage of the term octet for "8-bit bytes" when he headed software operations for Cie. Bull in France in 1965 to 1966. [ 3 ] In France , French Canada and Romania , octet is used in common language instead of byte when the eight-bit sense is required; for example, a megabyte (MB) is termed a megaoctet (Mo). A variable-length sequence of octets, as in Abstract Syntax Notation One (ASN.1), is referred to as an octet string. Historically, in Western Europe , the term octad (or octade ) was used to specifically denote eight bits, [ 2 ] [ 1 ] a usage no longer common. Early examples of usage exist in British, [ 2 ] Dutch and German sources of the 1960s and 1970s, and throughout the documentation of Philips mainframe computers . [ 1 ] Similar terms are triad for a grouping of three bits and decade for ten bits. Unit multiples of the octet may be formed with SI prefixes and binary prefixes (power of 2 prefixes) as standardized by the International Electrotechnical Commission in 1998. The octet is used in representations of Internet Protocol computer network addresses. [ 4 ] An IPv4 address consists of four octets, usually displayed individually as a series of decimal values ranging from 0 to 255, each separated by a dot (a full stop /period). Using octets with all eight bits set, the representation of the highest-numbered IPv4 address is 255.255.255.255 . An IPv6 address consists of sixteen octets, displayed in hexadecimal representation (two hexits per octet), using a colon character (:) after each pair of octets (16 bits are also known as hextet ) for readability, such as 2001:0db8:0000:0000:0123:4567:89ab:cdef . [ 5 ]
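The numeric relationships mentioned above (an all-ones octet equals FF in hexadecimal, 255 in decimal and 377 in octal, and an IPv4 address is four octets written in dotted-decimal form) can be verified with a few lines of Python. The snippet is a generic illustration and is not tied to any particular protocol implementation.

```python
# An octet with all eight bits set, shown in several bases.
octet = 0b11111111
print(octet, hex(octet), oct(octet))       # 255 0xff 0o377
print(octet == 0xFF == 255 == 0o377)       # True

# One octet covers the decimal range 0..255, i.e. 2**8 distinct values.
print(2 ** 8)                              # 256

# An IPv4 address is four octets; the highest-numbered address has every bit set.
print(".".join(str(0xFF) for _ in range(4)))        # 255.255.255.255

# An IPv6 address is sixteen octets, written as eight colon-separated groups
# of two octets (hextets), each shown as four hexadecimal digits.
octets = bytes([0x20, 0x01, 0x0D, 0xB8, 0, 0, 0, 0,
                0x01, 0x23, 0x45, 0x67, 0x89, 0xAB, 0xCD, 0xEF])
print(":".join(octets[i:i + 2].hex() for i in range(0, 16, 2)))
# 2001:0db8:0000:0000:0123:4567:89ab:cdef
```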
https://en.wikipedia.org/wiki/Octet_(computing)
The octet rule is a chemical rule of thumb that reflects the theory that main-group elements tend to bond in such a way that each atom has eight electrons in its valence shell, giving it the same electronic configuration as a noble gas. The rule is especially applicable to carbon, nitrogen, oxygen, and the halogens, although more generally the rule applies to the s-block and p-block of the periodic table. Other rules exist for other elements, such as the duplet rule for hydrogen and helium, and the 18-electron rule for transition metals. The valence electrons in molecules like carbon dioxide (CO₂) can be visualized using a Lewis electron dot diagram. In covalent bonds, electrons shared between two atoms are counted toward the octet of both atoms. In carbon dioxide each oxygen shares four electrons with the central carbon: two from the oxygen itself and two from the carbon. All four of these electrons are counted in both the carbon octet and the oxygen octet, so that both atoms are considered to obey the octet rule. The octet rule is simplest in the case of ionic bonding between two atoms, one a metal of low electronegativity and the other a nonmetal of high electronegativity. For example, sodium metal and chlorine gas combine to form sodium chloride, a crystal lattice composed of alternating sodium and chlorine nuclei. Electron density inside this lattice forms clumps at the atomic scale, as follows. An isolated chlorine atom (Cl) has two and eight electrons in its first and second electron shells, located near the nucleus. However, it has only seven electrons in the third and outermost electron shell. One additional electron would completely fill the outer electron shell with eight electrons, a situation the octet rule commends. Indeed, adding an electron to produce the chloride ion (Cl − ) releases 3.62 eV of energy. [ 1 ] Conversely, another surplus electron cannot fit in the same shell, instead beginning the fourth electron shell around the nucleus. Thus the octet rule proscribes the formation of a hypothetical Cl 2− ion, and indeed the latter has only been observed as a plasma under extreme conditions. A sodium atom (Na) has a single electron in its outermost electron shell, the first and second shells again being full with two and eight electrons respectively. The octet rule favors removal of this outermost electron to form the Na + ion, which has the same electron configuration as the noble gas neon. Indeed, sodium is observed to transfer one electron to chlorine during the formation of sodium chloride, such that the resulting lattice is best considered as a periodic array of Na + and Cl − ions. To remove the outermost Na electron and return to an "octet-approved" state requires a small amount of energy: 5.14 eV. [ 2 ] This energy is provided by the 3.62 eV released during chloride formation and by the electrostatic attraction between positively-charged Na + and negatively-charged Cl − ions, which releases an 8.12 eV lattice energy. [ 3 ] By contrast, any further electron removed from Na would have to come from the deeper second electron shell, producing an octet-violating Na 2+ ion. Consequently, the second ionization energy required for the next removal is much larger, 47.28 eV, [ 4 ] and the corresponding ion is only observed under extreme conditions. In 1864, the English chemist John Newlands classified the sixty-two known elements into eight groups, based on their physical properties.
[ 5 ] [ 6 ] [ 7 ] [ 8 ] In the late 19th century, it was known that coordination compounds (formerly called "molecular compounds") were formed by the combination of atoms or molecules in such a manner that the valencies of the atoms involved apparently became satisfied. In 1893, Alfred Werner showed that the number of atoms or groups associated with a central atom (the " coordination number ") is often 4 or 6; other coordination numbers up to a maximum of 8 were known, but less frequent. [ 9 ] In 1904, Richard Abegg was one of the first to extend the concept of coordination number to a concept of valence in which he distinguished atoms as electron donors or acceptors, leading to positive and negative valence states that greatly resemble the modern concept of oxidation states . Abegg noted that the difference between the maximum positive and negative valences of an element under his model is frequently eight. [ 10 ] In 1916, Gilbert N. Lewis referred to this insight as Abegg's rule and used it to help formulate his cubical atom model and the "rule of eight", which began to distinguish between valence and valence electrons . [ 11 ] In 1919, Irving Langmuir refined these concepts further and renamed them the "cubical octet atom" and "octet theory". [ 12 ] The "octet theory" evolved into what is now known as the "octet rule". Walther Kossel [ 13 ] and Gilbert N. Lewis saw that noble gases did not have the tendency of taking part in chemical reactions under ordinary conditions. On the basis of this observation, they concluded that atoms of noble gases are stable and on the basis of this conclusion they proposed a theory of valency known as "electronic theory of valency" in 1916: During the formation of a chemical bond, atoms combine together by gaining, losing or sharing electrons in such a way that they acquire nearest noble gas configuration. [ 14 ] The quantum theory of the atom explains the eight electrons as a closed shell with an s 2 p 6 electron configuration. A closed-shell configuration is one in which low-lying energy levels are full and higher energy levels are empty. For example, the neon atom ground state has a full n = 2 shell (2s 2 2p 6 ) and an empty n = 3 shell. According to the octet rule, the atoms immediately before and after neon in the periodic table (i.e. C, N, O, F, Na, Mg and Al), tend to attain a similar configuration by gaining, losing, or sharing electrons. The argon atom has an analogous 3s 2 3p 6 configuration. There is also an empty 3d level, but it is at considerably higher energy than 3s and 3p (unlike in the hydrogen atom), so that 3s 2 3p 6 is still considered a closed shell for chemical purposes. The atoms immediately before and after argon tend to attain this configuration in compounds. There are, however, some hypervalent molecules in which the 3d level may play a part in the bonding, although this is controversial (see below). For helium there is no 1p level according to the quantum theory, so that 1s 2 is a closed shell with no p electrons. The atoms before and after helium (H and Li) follow a duet rule and tend to have the same 1s 2 configuration as helium. Many reactive intermediates do not obey the octet rule. Most are unstable, although some can be isolated. Typically, octet rule violations occur in either low-dimensional coordination geometries or in radical species . Although hypervalent molecules are commonly taught to violate the octet rule, ab initio calculations show that almost all known examples obey the octet rule. 
The compounds form many fractional bonds through resonance (see § Hypervalent molecules below). In the trigonal planar coordination geometry, one p orbital points out of the bonding plane, and can only overlap with nearby atomic orbitals in a π bond . If that p orbital would be empty in an isolated atom, it may be filled through an intramolecular dative bond , as with aminoboranes . However, in some cases (e.g. boron trichloride and various boranes , triphenylmethanium ), no nearby filled orbital can profitably overlap with the empty p orbital. In such cases, the orbital remains empty, and the compound obeys a "sextet rule". Likewise, linear compounds, such as dimethylzinc , have two p orbitals perpendicular to the bonding axis, and may obey a "quartet rule". [ 15 ] In either case, the empty unshielded orbitals tend to attract adducts. Radicals satisfy the octet rule in one spin orientation , with four spin-up electrons in the valence shell, and almost satisfy it in the opposite spin orientation. Thus, for example, the methyl radical (CH 3 ), which has an unpaired electron in a non-bonding orbital on the carbon atom and no electron of opposite spin in the same orbital. Another example is the radical chlorine monoxide (ClO • ) which is involved in ozone depletion . Stable radicals tend to adopt states in which the unpaired electron can delocalize through resonance. In such cases, the octet rule can be restored through the formalism of a 1- or 3-electron bond . Species such as carbenes can be interpreted two different ways, depending on their spin state. Triplet carbenes are best thought of as two radicals localized on the same atom, and obey the octet rule in those radicals' shared spin-up orientation. Singlet carbenes tend to adopt a planar configuration, and are best thought of as obeying the planar sextet rule. Main-group elements in the third and later rows of the periodic table can form hypercoordinate or hypervalent molecules in which the central main-group atom is bonded to more than four other atoms, such as phosphorus pentafluoride , PF 5 , and sulfur hexafluoride , SF 6 . For example, in PF 5 , if it is supposed that there are five true covalent bonds in which five distinct electron pairs are shared, then the phosphorus would be surrounded by 10 valence electrons in violation of the octet rule. In the early days of quantum mechanics, Pauling proposed that third-row atoms can form five bonds by using one s, three p and one d orbitals, or six bonds by using one s, three p and two d orbitals. [ 16 ] To form five bonds, the one s, three p and one d orbitals combine to form five sp 3 d hybrid orbitals which each share an electron pair with a halogen atom, for a total of 10 shared electrons, two more than the octet rule predicts. Similarly to form six bonds, the six sp 3 d 2 hybrid orbitals form six bonds with 12 shared electrons. [ 17 ] In this model the availability of empty d orbitals is used to explain the fact that third-row atoms such as phosphorus and sulfur can form more than four covalent bonds, whereas second-row atoms such as nitrogen and oxygen are strictly limited by the octet rule. [ 18 ] However other models describe the bonding using only s and p orbitals in agreement with the octet rule. A valence bond description of PF 5 uses resonance between different PF 4 + F − structures, so that each F is bonded by a covalent bond in four structures and an ionic bond in one structure. Each resonance structure has eight valence electrons on P. 
[ 19 ] A molecular orbital theory description considers the highest occupied molecular orbital to be a non-bonding orbital localized on the five fluorine atoms, in addition to four occupied bonding orbitals, so again there are only eight valence electrons on the phosphorus. [ citation needed ] The validity of the octet rule for hypervalent molecules is further supported by ab initio molecular orbital calculations , which show that the contribution of d functions to the bonding orbitals is small. [ 20 ] [ 21 ] Nevertheless, for historical reasons, structures implying more than eight electrons around elements like P, S, Se, or I are still common in textbooks and research articles. In spite of the unimportance of d shell expansion in chemical bonding, this practice allows structures to be shown without using a large number of formal charges or using partial bonds and is recommended by the IUPAC as a convenient formalism in preference to depictions that better reflect the bonding. On the other hand, showing more than eight electrons around Be, B, C, N, O, or F (or more than two around H, He, or Li) is considered an error by most authorities. The octet rule is only applicable to main-group elements . Other elements follow other electron counting rules as their valence electron configurations are different from main-group elements. These other rules are shown below:
https://en.wikipedia.org/wiki/Octet_rule
A unit of information is any unit of measure of digital data size. In digital computing , a unit of information is used to describe the capacity of a digital data storage device. In telecommunications , a unit of information is used to describe the throughput of a communication channel . In information theory , a unit of information is used to measure information contained in messages and the entropy of random variables. Due to the need to work with data sizes that range from very small to very large, units of information cover a wide range of data sizes. Units are defined as multiples of a smaller unit except for the smallest unit which is based on convention and hardware design. Multiplier prefixes are used to describe relatively large sizes. For binary hardware , by far the most common hardware today, the smallest unit is the bit , a portmanteau of binary digit, [ 1 ] which represents a value that is one of two possible values; typically shown as 0 and 1. The nibble , 4 bits, represents the value of a single hexadecimal digit. The byte , 8 bits, 2 nibbles, is possibly the most commonly known and used base unit to describe data size. The word is a size that varies by and has a special importance for a particular hardware context. On modern hardware, a word is typically 2, 4 or 8 bytes, but the size varies dramatically on older hardware. Larger sizes can be expressed as multiples of a base unit via SI metric prefixes (powers of ten) or the newer and generally more accurate IEC binary prefixes (powers of two). In 1928, Ralph Hartley observed a fundamental storage principle, [ 2 ] which was further formalized by Claude Shannon in 1945: the information that can be stored in a system is proportional to the logarithm of N possible states of that system, denoted log b N . Changing the base of the logarithm from b to a different number c has the effect of multiplying the value of the logarithm by a fixed constant, namely log c N = (log c b ) log b N . Therefore, the choice of the base b determines the unit used to measure information. In particular, if b is a positive integer, then the unit is the amount of information that can be stored in a system with b possible states. When b is 2, the unit is the shannon , equal to the information content of one "bit". A system with 8 possible states, for example, can store up to log 2 8 = 3 bits of information. Other units that have been named include: The trit, ban, and nat are rarely used to measure storage capacity; but the nat, in particular, is often used in information theory, because natural logarithms are mathematically more convenient than logarithms in other bases. Several conventional names are used for collections or groups of bits. Historically, a byte was the number of bits used to encode a character of text in the computer, which depended on computer hardware architecture, but today it almost always means eight bits – that is, an octet . An 8-bit byte can represent 256 (2 8 ) distinct values, such as non-negative integers from 0 to 255, or signed integers from −128 to 127. The IEEE 1541-2002 standard specifies "B" (upper case) as the symbol for byte ( IEC 80000-13 uses "o" for octet in French, but also allows "B" in English). Bytes, or multiples thereof, are almost always used to specify the sizes of computer files and the capacity of storage units. Most modern computers and peripheral devices are designed to manipulate data in whole bytes or groups of bytes, rather than individual bits. 
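The logarithmic storage principle attributed to Hartley above can be made concrete with a short calculation: the information a system with N states can hold is log_b N, and changing the base of the logarithm only rescales the result by a constant factor. The snippet below simply evaluates those formulas for the 8-state example given in the text; the helper name is arbitrary.

```python
import math

def information_content(num_states, base=2):
    """Information storable in a system with `num_states` states, in units
    determined by the logarithm base (2 -> bits/shannons, e -> nats, 10 -> bans)."""
    return math.log(num_states) / math.log(base)

# A system with 8 possible states can store log2(8) = 3 bits.
print(round(information_content(8, base=2), 12))       # 3.0

# Changing the base only rescales by a constant: log_c N = (log_c b) * log_b N.
bits = information_content(8, base=2)
nats = information_content(8, base=math.e)
print(math.isclose(nats, math.log(2) * bits))          # True
```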
A group of four bits, or half a byte, is sometimes called a nibble , nybble or nyble. This unit is most often used in the context of hexadecimal number representations, since a nibble has the same number of possible values as one hexadecimal digit has. [ 7 ] Computers usually manipulate bits in groups of a fixed size, conventionally called words . The number of bits in a word is usually defined by the size of the registers in the computer's CPU , or by the number of data bits that are fetched from its main memory in a single operation. In the IA-32 architecture more commonly known as x86-32, a word is 32 bits, but other past and current architectures use words with 4, 8, 9, 12, 13, 16, 18, 20, 21, 22, 24, 25, 29, 30, 31, 32, 33, 35, 36, 38, 39, 40, 42, 44, 48, 50, 52, 54, 56, 60, 64, 72 [ 8 ] bits or others. Some machine instructions and computer number formats use two words (a "double word" or "dword"), or four words (a "quad word" or "quad"). Computer memory caches usually operate on blocks of memory that consist of several consecutive words. These units are customarily called cache blocks , or, in CPU caches , cache lines . Virtual memory systems partition the computer's main storage into even larger units, traditionally called pages . A unit for a large amount of data can be formed using either a metric or binary prefix with a base unit. For storage, the base unit is typically byte. For communication throughput, a base unit of bit is common. For example, using the metric kilo prefix, a kilobyte is 1000 bytes and a kilobit is 1000 bits. Use of metric prefixes is common, but often inaccurate since binary storage hardware is organized with capacity that is a power of 2 – not 10 as the metric prefixes are. In the context of computing, the metric prefixes are often intended to mean something other than their normal meaning. For example, 'kilobyte' often refers to 1024 bytes even though the standard meaning of kilo is 1000. Also, 'mega' normally means one million, but in computing is often used to mean 2 20 = 1 048 576 . The table below illustrates the differences between normal metric sizes and the intended size – the binary size. The International Electrotechnical Commission (IEC) issued a standard that introduces binary prefixes that accurately represent binary sizes without changing the meaning of the standard metric terms. Rather than based on powers of 1000, these are based on powers of 1024 which is a power of 2. [ 9 ] The JEDEC memory standard JESD88F notes that the definitions of kilo (K), giga (G), and mega (M) based on powers of two are included only to reflect common usage, but are otherwise deprecated. [ 10 ] Some notable unit names that are today obsolete or only used in limited contexts.
https://en.wikipedia.org/wiki/Octlet
Octopus is a software package for performing Kohn‍–‍Sham density functional theory (DFT) and time-dependent density functional theory (TDDFT) calculations. [ 1 ] Octopus employs pseudopotentials and real-space numerical grids to propagate the Kohn‍–‍Sham orbitals in real time under the influence of time-varying electromagnetic fields. Specific functionality is provided for simulating one-, two-, and three-dimensional systems. Octopus can calculate static and dynamic polarizabilities and first hyperpolarizabilities , static magnetic susceptibilities , absorption spectra , and perform molecular dynamics simulations with Ehrenfest and Car–Parrinello methods . The code is written predominantly in Fortran and is released under the GPL . The latest version 15.0 was released on October 10, 2024.
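Octopus itself is a large Fortran code, but the central idea mentioned above, representing orbitals on a real-space grid and propagating them in real time, can be sketched in a few lines. The toy example below propagates a single one-dimensional wavefunction in a harmonic potential using a Crank–Nicolson step; the grid, potential, and time step are arbitrary choices for illustration and have nothing to do with Octopus's actual numerics, pseudopotentials, or input format.

```python
import numpy as np

# Toy 1D real-space grid (arbitrary units); unrelated to Octopus's real numerics.
n, dx, dt = 200, 0.1, 0.01
x = (np.arange(n) - n // 2) * dx
potential = 0.5 * x ** 2                      # harmonic potential as a stand-in

# Finite-difference kinetic term, -(1/2) d^2/dx^2, on the grid.
lap = (np.diag(np.full(n - 1, 1.0), -1) - 2 * np.eye(n)
       + np.diag(np.full(n - 1, 1.0), 1)) / dx ** 2
hamiltonian = -0.5 * lap + np.diag(potential)

# Crank-Nicolson propagator: psi(t+dt) = (1 + iH dt/2)^-1 (1 - iH dt/2) psi(t).
identity = np.eye(n)
propagator = np.linalg.solve(identity + 0.5j * dt * hamiltonian,
                             identity - 0.5j * dt * hamiltonian)

# Start from a displaced Gaussian and propagate in real time.
psi = np.exp(-(x - 1.0) ** 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)
for _ in range(100):
    psi = propagator @ psi

# The norm is conserved (numerically), as expected for unitary time evolution.
print(np.sum(np.abs(psi) ** 2) * dx)          # ~1.0
```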
https://en.wikipedia.org/wiki/Octopus_(software)
Octopussy , also known as 8Pussy , is a free and open-source computer-software which monitors systems, by constantly analyzing the syslog data they generate and transmit to such a central Octopussy server (thus often called a SIEM solution). [ 3 ] Therefore, software like Octopussy plays an important role in maintaining an information security management system within ISO/IEC 27001 -compliant environments. Octopussy has the ability to monitor any device that supports the syslog protocol , such as servers , routers , switches, firewalls , load balancers , and its important applications and services . The main purpose of the software is to alert its administrators and users to different kinds of events, like system outages, attacks on systems or errors in applications. [ 4 ] However, unlike Nagios or Icinga , Octopussy is not a state -checker and therefore problems cannot be resolved within the application. The software also makes no prescription whatsoever on which messages must be/must not be analyzed. As such, Octopussy can be seen as less powerful than other popular commercial software in the same category (event monitoring and log analysis). [ 5 ] Octopussy is compatible with many Linux system distributions like Debian , Ubuntu , OpenSUSE , CentOS , RHEL and even meta-distributions as Gentoo or Arch Linux . Although Octopussy was originally designed to run on Linux, it could be ported to other Unix variants like FreeBSD with minimal effort. Octopussy has extensive report generating features and also various interfaces to other software, like e.g. NSCA (Nagios), Jabber/XMPP and Zabbix . With the help of software like Snare even Windows EventLogs can be processed. [ 6 ] Octopussy is licensed under the terms of the GNU General Public License . Although Octopussy is free and open-source software it has a variety of characteristics also found in some professional enterprise applications like Splunk , SAWMILL or Kiwi Syslog. At the time of writing, Octopussy comes with the following set of features: Some of the (meta-)services supported by/known by Octopussy are: Apache 2, BIND, BSD Kernel, BSD PAM, BSD System, Cisco Routers (ASR), Cisco Switches, ClamAV, DenyAll Reverse Proxy, DRBD, F5 BigIP, Fortinet FW, HP-Tools, Ironport MailServer, Juniper Netscreen FW, Juniper Netscreen NSM, LDAP, Linux AppArmor, Linux Auditd, Linux IPTables, Linux Kernel, Linux PAM, Linux System, Monit, MySQL, Nagios, Neoteris/Juniper FW, NetApp NetCache, Postfix, PostgreSQL, Samba, Samhain, SNMPd, Squid, SSHd, Syslog-ng, TACACS, VMware ESX(i), Windows Snare Agent, Windows System, Xen ... [ 7 ] Events receivable from services and thus processible by Octopussy include: The software requires RSYSLOG installed on the syslog-server and expects systems that are monitored to run one of the numerous available syslog services, like e.g. syslogd /klogd, RSYSLOG or syslog-ng. [ 8 ] The software further depends on the Apache 2 HTTP Server installed, with Apache::ASP, Mod_Perl and Mod_SSL. Octopussy also requires a MySQL DBMS (actual database is installed/copied during Octopussy setup) as well as a recent Perl interpreter installed on the operating system, with a variety of Perl modules from CPAN (e.g. Crypt::PasswdMD5, DBD::mysql, JSON , Unix::Syslog, XML ::Simple). [ 9 ] A comprehensive list of those modules can be found within the software packages/archives README.txt file. In addition to that NSCD and RRDtool are a requirement. 
RRDtool aids in the creation of graphs that will be displayed on the Octopussy dashboard or shown on a per-device/per-service level. [ 10 ] Octopussy receives syslog messages via syslog protocol and therefore behaves passively, not running any type of network agent on the remote machines under monitoring / surveillance . [ 11 ] Octopussy completely conforms to RfC 3164 and RfC 3195 of the IETF , describing syslog as the logging mechanism in Unix-like/BSD operating systems. [ 12 ] [ 13 ] That especially includes the internal representation of the facility and severity -principle where applicable. The software is driven by a semi- stateful event correlation engine. This means that the engine records and thus knows its internal state, but only uses it to some extent to link together logically related elements for the same device, in order to draw a conclusion (i.e. to generate an alert). In Octopussy the semi-stateful correlation engine, with its so called sliding window (a shifting window being the logical boundary of a number of events during a certain period of time), is capable of comparing known past events with present ones based on a limited number of comparative values. The Octo-Dispatcher is the component used by the Octopussy software to receive syslog lines from RSYSLOG and dispatch them into device directories. [ 14 ] Every device registered and activated within Octopussy gets its syslog messages assigned to it depending on the device name. Noteworthy is also the adjacent Octo-Replay component, which is the program used by the Octopussy software to replay log messages for some device or service (it receives and processes recognized logs and puts them back into the incoming directory). The Octo-Parser and Octo-Uparser are two of Octopussy's most important core components. The Octo-Parser is the program used by the Octopussy software to parse logs in syslog format for each device registered within Octopussy. [ 15 ] It basically uses a regex -engine and commences pattern matching on incoming syslog messages. The Octo-Uparser is restarted every time device's services are changed, to check if previously received "unknown" log messages can be associated with a service. In some cases Octo-Pusher is also called in advance to process non-syslog messages incoming from some devices. In that regard, the device setting "asynchronous" is helpful to process such log messages, after they were sent to an Octopussy server using e.g. FTP, rsync or SSH/SCP. The Octopussy interface ( GUI ) is the default user-interface and provides configuration management , device and service management as well as alert definition and therefore extends the Octopussy core components. Devices are displayed in tabular form on the Devices page, with the following descriptors as a minimum: hostname , IP address , log type, device model/type, FQDN and OS . Hence, the interface (Octo-Web) mainly provides access to other Octopussy core components like Octo-Commander, Octo-Message-Finder, Octo-Reporter and Octo-Statistic-Reporter. The Octopussy front-end/GUI is written in Perl 5, employing Apache::ASP to structure and display content. [ 16 ] In addition to that, Octopussy core services can also be accessed from the operating system shell. That represents a convenient way for administrators to start/stop services or make fundamental configuration changes. The Octopussy RRD graph generator is a core component of the software and installed by default. 
Since the generation of such graphs is very resource intensive, administrators may opt to disable it on an Octopussy syslog server with a less powerful CPU and a low amount of RAM. The generated RRD graphs display the activity of all active services for monitored devices, depending heavily on the specific service. After a restart of the Octopussy software or during operation, Octo-Dispatcher and Octo-Parser will always process syslog messages in their buffer and queue first, and RRD graph generation is delayed. [ 17 ] Octo-RRD further depends on Octo-Scheduler to execute the Octopussy::Report function in order to generate syslog activity RRD graphs that have been scheduled previously. Finally, Octo-Sender can send report data to arbitrary recipients. There is a plug-in/module system in Octopussy, which is mainly geared towards the modification of Octopussy reports. Such a plug-in consists of a description file, which defines the plug-in name and functions, and a code file with Perl code to process the actual data. [ 18 ] There are also extensions for software related to Octopussy, such as a Nagios plug-in that checks the Octopussy core services (i.e. Octo-Dispatcher, Octo-Scheduler, etc.) as well as the Octopussy parser states and log partitions. [ 19 ] The creation of new services and service patterns is the most important way to extend Octopussy without making changes to the source code. However, since patterns are written as simplified regular expressions, administrators should have at least some basic knowledge of regex in general. It is further strongly recommended to build on already existing services and to understand the meaning of a message object's basic fields, which are message ID, pattern, log level, taxonomy, table and rank. [ 20 ] Usually the logs wizard is used to search the system for unrecognized syslog messages per device in order to generate new service patterns. During this process, patterns should be created in a way that enables Octopussy to distinguish messages based on their severity and taxonomy. [ 21 ]
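As an illustration of the kind of pattern matching the Octo-Parser performs, the sketch below applies a regular expression to a classic RFC 3164-style syslog line and extracts the priority, host, and program fields, then splits the priority into facility and severity as the RFC defines. This is a generic Python illustration of regex-based syslog parsing; it is not Octopussy's Perl code, and the pattern shown is ordinary regex syntax rather than Octopussy's simplified service-pattern notation.

```python
import re

# The example syslog line from RFC 3164 (priority, timestamp, host, tag, message).
line = "<34>Oct 11 22:14:15 mymachine su: 'su root' failed for lonvick on /dev/pts/8"

SYSLOG_RE = re.compile(
    r"^<(?P<pri>\d{1,3})>"                              # priority = facility*8 + severity
    r"(?P<timestamp>\w{3} [ \d]\d \d\d:\d\d:\d\d) "     # e.g. "Oct 11 22:14:15"
    r"(?P<host>\S+) "
    r"(?P<program>[^:\[]+)(?:\[(?P<pid>\d+)\])?: "      # program name, optional [pid]
    r"(?P<message>.*)$"
)

match = SYSLOG_RE.match(line)
if match:
    facility, severity = divmod(int(match.group("pri")), 8)
    print(match.group("host"), match.group("program"), facility, severity)
    # -> mymachine su 4 2   (facility 4 = security/auth, severity 2 = critical)
```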
https://en.wikipedia.org/wiki/Octopussy_(software)
In computing , octuple precision is a binary floating-point -based computer number format that occupies 32 bytes (256 bits ) in computer memory. This 256- bit octuple precision is for applications requiring results in higher than quadruple precision . The range greatly exceeds what is needed to describe all known physical limitations within the observable universe or precisions better than planck units . In its 2008 revision, the IEEE 754 standard specifies a binary256 format among the interchange formats (it is not a basic format), as having: The format is written with an implicit lead bit with value 1 unless the exponent is all zeros. Thus only 236 bits of the significand appear in the memory format, but the total precision is 237 bits (approximately 71 decimal digits: log 10 (2 237 ) ≈ 71.344 ). The bits are laid out as follows: The octuple-precision binary floating-point exponent is encoded using an offset binary representation, with the zero offset being 262143; also known as exponent bias in the IEEE 754 standard. Thus, as defined by the offset binary representation, in order to get the true exponent the offset of 262143 has to be subtracted from the stored exponent. The stored exponents 00000 16 and 7FFFF 16 are interpreted specially. The minimum strictly positive (subnormal) value is 2 −262378 ≈ 10 −78984 and has a precision of only one bit. The minimum positive normal value is 2 −262142 ≈ 2.4824 × 10 −78913 . The maximum representable value is 2 262144 − 2 261907 ≈ 1.6113 × 10 78913 . These examples are given in bit representation , in hexadecimal , of the floating-point value. This includes the sign, (biased) exponent, and significand. By default, 1/3 rounds down like double precision , because of the odd number of bits in the significand. So the bits beyond the rounding point are 0101... which is less than 1/2 of a unit in the last place . Octuple precision is rarely implemented since usage of it is extremely rare. Apple Inc. had an implementation of addition, subtraction and multiplication of octuple-precision numbers with a 224-bit two's complement significand and a 32-bit exponent. [ 1 ] One can use general arbitrary-precision arithmetic libraries to obtain octuple (or higher) precision, but specialized octuple-precision implementations may achieve higher performance. There is no known hardware implementation of octuple precision.
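The extreme values quoted above follow from the format parameters: 1 sign bit and 236 stored significand bits leave 19 exponent bits, giving the 262143 bias, and the implicit lead bit brings the precision to 237 bits. Because these magnitudes overflow ordinary double-precision floats, the check below works with base-10 logarithms and simply confirms the orders of magnitude stated in the text.

```python
import math

SIGN_BITS, STORED_SIGNIFICAND, TOTAL_BITS = 1, 236, 256
EXPONENT_BITS = TOTAL_BITS - SIGN_BITS - STORED_SIGNIFICAND     # 19
BIAS = 2 ** (EXPONENT_BITS - 1) - 1                             # 262143
PRECISION = STORED_SIGNIFICAND + 1                              # 237 (implicit lead bit)

log10_2 = math.log10(2)
print(EXPONENT_BITS, BIAS, PRECISION)
print("decimal digits of precision:", PRECISION * log10_2)          # ~71.3
print("largest value, base-10 exponent:", 262144 * log10_2)         # ~78913, i.e. ~1.6e78913
print("smallest normal, base-10 exponent:", -262142 * log10_2)      # ~-78913
print("smallest subnormal, base-10 exponent:", -262378 * log10_2)   # ~-78984
```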
https://en.wikipedia.org/wiki/Octuple-precision_floating-point_format
An ocular micrometer or eyepiece micrometer is a glass disk, engraved with a ruled scale, that fits in an eyepiece of a microscope, [ 1 ] [ 2 ] and is used to measure the size of microscopic objects under magnification. When the eyepiece micrometer is calibrated against a stage micrometer, the length represented by each division of its scale depends on the degree of magnification. [ 3 ]
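A short sketch of the calibration arithmetic, assuming an illustrative 10 µm stage-micrometer ruling and made-up division counts (the function name and numbers are assumptions for this example, not values from the source):

    def ocular_division_um(stage_divisions, ocular_divisions, stage_division_um=10.0):
        """Length represented by one ocular division, given the overlap found
        during calibration against a stage micrometer of known ruling."""
        return stage_divisions * stage_division_um / ocular_divisions

    # Example: 7 stage divisions (10 um each) line up with 20 ocular divisions,
    # so one ocular division corresponds to 3.5 um at this magnification, and an
    # object spanning 12 ocular divisions would measure 12 * 3.5 = 42 um.
    print(ocular_division_um(7, 20))   # 3.5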
https://en.wikipedia.org/wiki/Ocular_micrometer
In mathematics, parity is the property of an integer of whether it is even or odd. An integer is even if it is divisible by 2, and odd if it is not. [ 1 ] For example, −4, 0, and 82 are even numbers, while −3, 5, 23, and 69 are odd numbers. The above definition of parity applies only to integers, so it cannot be applied to numbers with decimals or fractions like 1/2 or 4.6978. See the section "Higher mathematics" below for some extensions of the notion of parity to a larger class of "numbers" or in other more general settings. Even and odd numbers have opposite parities, e.g., 22 (even number) and 13 (odd number) have opposite parities. In particular, the parity of zero is even. [ 2 ] Any two consecutive integers have opposite parity. A number (i.e., integer) expressed in the decimal numeral system is even or odd according to whether its last digit is even or odd. That is, if the last digit is 1, 3, 5, 7, or 9, then it is odd; otherwise it is even, as the last digit of any even number is 0, 2, 4, 6, or 8. The same idea works in any even base. In particular, a number expressed in the binary numeral system is odd if its last digit is 1, and it is even if its last digit is 0. In an odd base, the number is even according to the sum of its digits: it is even if and only if the sum of its digits is even. [ 3 ] An even number is an integer of the form x = 2k, where k is an integer; [ 4 ] an odd number is an integer of the form x = 2k + 1. An equivalent definition is that an even number is divisible by 2 (written 2 | x) and an odd number is not (2 ∤ x). The sets of even and odd numbers can be defined as follows: [ 5 ] {2k : k ∈ ℤ} and {2k + 1 : k ∈ ℤ}. The set of even numbers is a prime ideal of ℤ and the quotient ring ℤ/2ℤ is the field with two elements. Parity can then be defined as the unique ring homomorphism from ℤ to ℤ/2ℤ where odd numbers are 1 and even numbers are 0. The consequences of this homomorphism are covered below. The following laws can be verified using the properties of divisibility. They are a special case of rules in modular arithmetic, and are commonly used to check if an equality is likely to be correct by testing the parity of each side. As with ordinary arithmetic, multiplication and addition are commutative and associative in modulo 2 arithmetic, and multiplication is distributive over addition. However, subtraction in modulo 2 is identical to addition, so subtraction also possesses these properties, which is not true for normal integer arithmetic. By construction in the previous section, the structure ({even, odd}, +, ×) is in fact the field with two elements. The division of two whole numbers does not necessarily result in a whole number. For example, 1 divided by 4 equals 1/4, which is neither even nor odd, since the concepts of even and odd apply only to integers. But when the quotient is an integer, it will be even if and only if the dividend has more factors of two than the divisor. [ 6 ] The ancient Greeks considered 1, the monad, to be neither fully odd nor fully even.
[ 7 ] Some of this sentiment survived into the 19th century: Friedrich Wilhelm August Fröbel's 1826 The Education of Man instructs the teacher to drill students with the claim that 1 is neither even nor odd, to which Fröbel attaches the philosophical afterthought: It is well to direct the pupil's attention here at once to a great far-reaching law of nature and of thought. It is this, that between two relatively different things or ideas there stands always a third, in a sort of balance, seeming to unite the two. Thus, there is here between odd and even numbers one number (one) which is neither of the two. Similarly, in form, the right angle stands between the acute and obtuse angles; and in language, the semi-vowels or aspirants between the mutes and vowels. A thoughtful teacher and a pupil taught to think for himself can scarcely help noticing this and other important laws. [ 8 ] Integer coordinates of points in Euclidean spaces of two or more dimensions also have a parity, usually defined as the parity of the sum of the coordinates. For instance, the face-centered cubic lattice and its higher-dimensional generalizations (the D_n lattices) consist of all of the integer points whose coordinates have an even sum. [ 9 ] This feature also manifests itself in chess, where the parity of a square is indicated by its color: bishops are constrained to moving between squares of the same parity, whereas knights alternate parity between moves. [ 10 ] This form of parity was famously used to solve the mutilated chessboard problem: if two opposite corner squares are removed from a chessboard, then the remaining board cannot be covered by dominoes, because each domino covers one square of each parity and there are two more squares of one parity than of the other. [ 11 ] The parity of an ordinal number may be defined to be even if the number is a limit ordinal, or a limit ordinal plus a finite even number, and odd otherwise. [ 12 ] Let R be a commutative ring and let I be an ideal of R whose index is 2. Elements of the coset 0 + I may be called even, while elements of the coset 1 + I may be called odd. As an example, let R = ℤ_(2) be the localization of ℤ at the prime ideal (2). Then an element of R is even or odd if and only if its numerator is so in ℤ. The even numbers form an ideal in the ring of integers, [ 13 ] but the odd numbers do not; this is clear from the fact that the identity element for addition, zero, is an element of the even numbers only. An integer is even if it is congruent to 0 modulo this ideal, in other words if it is congruent to 0 modulo 2, and odd if it is congruent to 1 modulo 2. All prime numbers are odd, with one exception: the prime number 2. [ 14 ] All known perfect numbers are even; it is unknown whether any odd perfect numbers exist. [ 15 ] Goldbach's conjecture states that every even integer greater than 2 can be represented as a sum of two prime numbers. Modern computer calculations have shown this conjecture to be true for integers up to at least 4 × 10^18, but still no general proof has been found. [ 16 ] The parity of a permutation (as defined in abstract algebra) is the parity of the number of transpositions into which the permutation can be decomposed. [ 17 ] For example, (ABC) to (BCA) is even because it can be done by swapping A and B and then C and A (two transpositions). It can be shown that no permutation can be decomposed into both an even and an odd number of transpositions; hence the above is a suitable definition.
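The parity of a permutation described above can be computed by counting inversions, which has the same parity as any decomposition into transpositions; the following Python sketch (function name invented for this illustration) checks the (ABC) → (BCA) example:

    def permutation_parity(perm):
        """Return 0 for an even permutation of 0..n-1, 1 for an odd one."""
        inversions = sum(1 for i in range(len(perm))
                         for j in range(i + 1, len(perm))
                         if perm[i] > perm[j])
        return inversions % 2

    # (A B C) -> (B C A) corresponds to (1, 2, 0): two inversions, hence even.
    print(permutation_parity((1, 2, 0)))   # 0 (even)
    # A single swap of the first two elements is odd.
    print(permutation_parity((1, 0, 2)))   # 1 (odd)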
In Rubik's Cube , Megaminx , and other twisting puzzles, the moves of the puzzle allow only even permutations of the puzzle pieces, so parity is important in understanding the configuration space of these puzzles. [ 18 ] The Feit–Thompson theorem states that a finite group is always solvable if its order is an odd number. This is an example of odd numbers playing a role in an advanced mathematical theorem where the method of application of the simple hypothesis of "odd order" is far from obvious. [ 19 ] The parity of a function describes how its values change when its arguments are exchanged with their negations. An even function, such as an even power of a variable, gives the same result for any argument as for its negation. An odd function, such as an odd power of a variable, gives for any argument the negation of its result when given the negation of that argument. It is possible for a function to be neither odd nor even, and for the case f ( x ) = 0, to be both odd and even. [ 20 ] The Taylor series of an even function contains only terms whose exponent is an even number, and the Taylor series of an odd function contains only terms whose exponent is an odd number. [ 21 ] In combinatorial game theory , an evil number is a number that has an even number of 1's in its binary representation , and an odious number is a number that has an odd number of 1's in its binary representation; these numbers play an important role in the strategy for the game Kayles . [ 22 ] The parity function maps a number to the number of 1's in its binary representation, modulo 2 , so its value is zero for evil numbers and one for odious numbers. The Thue–Morse sequence , an infinite sequence of 0's and 1's, has a 0 in position i when i is evil, and a 1 in that position when i is odious. [ 23 ] In information theory , a parity bit appended to a binary number provides the simplest form of error detecting code . If a single bit in the resulting value is changed, then it will no longer have the correct parity: changing a bit in the original number gives it a different parity than the recorded one, and changing the parity bit while not changing the number it was derived from again produces an incorrect result. In this way, all single-bit transmission errors may be reliably detected. [ 24 ] Some more sophisticated error detecting codes are also based on the use of multiple parity bits for subsets of the bits of the original encoded value. [ 25 ] In wind instruments with a cylindrical bore and in effect closed at one end, such as the clarinet at the mouthpiece, the harmonics produced are odd multiples of the fundamental frequency . (With cylindrical pipes open at both ends, used for example in some organ stops such as the open diapason , the harmonics are even multiples of the same frequency for the given bore length, but this has the effect of the fundamental frequency being doubled and all multiples of this fundamental frequency being produced.) See harmonic series (music) . [ 26 ] In some countries, house numberings are chosen so that the houses on one side of a street have even numbers and the houses on the other side have odd numbers. [ 27 ] Similarly, among United States numbered highways , even numbers primarily indicate east–west highways while odd numbers primarily indicate north–south highways. [ 28 ] Among airline flight numbers , even numbers typically identify eastbound or northbound flights, and odd numbers typically identify westbound or southbound flights. [ 29 ]
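As a small illustration of the parity function, evil and odious numbers, and the parity-bit idea mentioned above, here is a hedged Python sketch (the helper names are assumptions of this example):

    def parity(n):
        """Number of 1 bits in n, modulo 2: 0 for 'evil' numbers, 1 for 'odious' ones."""
        return bin(n).count("1") % 2

    # The first terms of the Thue-Morse sequence are parity(0), parity(1), ...
    print([parity(i) for i in range(16)])
    # [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]

    def add_even_parity_bit(word):
        """Append one bit so that the result has an even number of 1 bits overall."""
        return (word << 1) | parity(word)

    sent = add_even_parity_bit(0b1011001)
    corrupted = sent ^ (1 << 3)                 # a single bit flipped in transit
    print(parity(sent), parity(corrupted))      # 0 1 -> the error is detected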
https://en.wikipedia.org/wiki/Odd_number
The odd number theorem is a theorem in strong gravitational lensing which comes directly from differential topology. The theorem states that the number of multiple images produced by a bounded transparent lens must be odd. Gravitational lensing can be thought of as a mapping from the image plane to the source plane, M : (u, v) ↦ (u′, v′). If we use direction cosines to describe the bent light rays, we can write a vector field on the (u, v) plane, V : (s, w). However, only in some specific directions V₀ : (s₀, w₀) will the bent light rays reach the observer, i.e., images only form where D = δV = 0 at (s₀, w₀). Then we can directly apply the Poincaré–Hopf theorem: χ = Σ index_D = constant. The index of sources and sinks is +1, and that of saddle points is −1. So the Euler characteristic equals the difference between the number of positive indices n₊ and the number of negative indices n₋. For the far-field case there is only one image, i.e., χ = n₊ − n₋ = 1. So the total number of images is N = n₊ + n₋ = 2n₋ + 1, i.e., odd. The strict proof requires Uhlenbeck's Morse theory of null geodesics.
https://en.wikipedia.org/wiki/Odd_number_theorem
In astronomy, an odd radio circle ( ORC ) is a very large (over 50 times the diameter of our Milky Way ~ 3 million light years) unexplained astronomical object that, at radio wavelengths , is highly circular and brighter along its edges. [ 3 ] As of 27 April 2021, there have been five such objects (and possibly six more) observed. [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] The observed ORCs are bright at radio wavelengths, but are not visible at visible , infrared or X-ray wavelengths. This is due to the physical process producing this radiation, which is thought to be synchrotron radiation . [ 4 ] [ 5 ] Three of the ORCs contain optical galaxies in their centers, suggesting that the galaxies might have formed these objects. [ 5 ] [ 10 ] The ORCs were detected in late 2019 after astronomer Anna Kapinska studied a Pilot Survey of the Evolutionary Map of the Universe (EMU), based on the Australian Square Kilometre Array Pathfinder (ASKAP) radio telescope array . [ 11 ] All of the ORCs are about 1 arcminute in diameter, and are some distance from the galactic plane , at high galactic latitudes . The possibility of a spherical shock wave , associated with fast radio bursts , gamma-ray bursts , or neutron star mergers , was considered, but, if related, would have to have taken place in the distant past due to the large angular size of the ORCs, according to the researchers. [ 7 ] Also, according to the astronomers, "Circular features are well-known in radio astronomical images, and usually represent a spherical object such as a supernova remnant , a planetary nebula , a circumstellar shell , or a face-on disc such as a protoplanetary disc or a star-forming galaxy , ... They may also arise from imaging artefact around bright sources caused by calibration errors or inadequate deconvolution . This class of circular feature in radio images does not seem to correspond to any of these known types of object or artefact, but rather appears to be a new class of astronomical object ." [ 7 ]
https://en.wikipedia.org/wiki/Odd_radio_circle
The Oddo–Harkins rule holds that an element with an even atomic number is more abundant than the elements with immediately adjacent atomic numbers. For example, carbon, with atomic number 6, is more abundant than boron (5) and nitrogen (7). Generally, the relative abundance of an even-atomic-numbered element is roughly two orders of magnitude greater than the relative abundances of the immediately adjacent odd-atomic-numbered elements to either side. This pattern was first reported by Giuseppe Oddo [ 1 ] in 1914 and William Draper Harkins [ 2 ] in 1917. [ 3 ] [ 4 ] The Oddo–Harkins rule is true for all elements beginning with carbon, produced by stellar nucleosynthesis, but not for the lightest elements below carbon, produced by big bang nucleosynthesis and cosmic ray spallation. [ citation needed ] All atoms heavier than hydrogen are formed in stars or supernovae through nucleosynthesis, when gravity, temperature and pressure reach levels high enough to fuse protons and neutrons together. Protons and neutrons form the atomic nucleus, which accumulates electrons to form atoms. The number of protons in the nucleus, called the atomic number, uniquely identifies a chemical element. The early form of the rule derived from Harkins's 1917 study of meteorites. He reasoned, as did others at the time, that meteorites are more representative of the cosmological abundance of the elements. Harkins observed that elements with even atomic numbers ( Z ) were about 70 times more abundant than those with odd Z. The most common seven elements, making up almost 99% of the material in a meteorite, all had even Z. In addition, he observed that 90% of the material consisted of only 15 different isotopes, with atomic weights in multiples of four, the approximate weight of alpha particles. Three years earlier, Oddo had made a similar observation for elements in the Earth's crust, speculating that elements are condensation products of helium. The nucleus of helium is the same as an alpha particle. [ 5 ] : 385 This early work connecting geochemistry with nuclear physics and cosmology was greatly expanded by the Norwegian group created by Victor Goldschmidt. [ 5 ] : 389 The Oddo–Harkins rule for elements from ¹²C to ⁵⁶Fe is explained by the alpha process of stellar nucleosynthesis. [ 6 ] : 42 The process involves the fusion of alpha particles (helium-4 nuclei) under high temperature and pressure within the stellar environment. Each step in the alpha process adds two protons (and two neutrons), favoring synthesis of even-numbered elements. Carbon itself is a product of the triple-alpha process from helium, a process that skips Li, Be, and B. These nuclides (and helium-3) are produced by cosmic ray spallation – a type of nuclear fission in which cosmic rays impact larger isotopes and fragment them. Spallation does not require the high temperature and pressure of the stellar environment and can occur on Earth. Though the lighter products of spallation are relatively rare, the odd-mass-number isotopes in this class occur in greater relative abundance compared to even-number isotopes, in contravention of the Oddo–Harkins rule. This postulate, however, does not apply to the universe's most abundant and simplest element: hydrogen, with an atomic number of 1. This may be because, in its ionized form, a hydrogen atom becomes a single proton, which is theorized to have been one of the first major conglomerates of quarks during the initial second of the Universe's inflation period, following the Big Bang.
In this period, when inflation of the universe had brought it from an infinitesimal point to about the size of a modern galaxy, temperatures in the particle soup fell from over a trillion kelvins to several million kelvins. This period allowed the fusion of single protons and deuterium nuclei to form helium and lithium nuclei, but was too short for every H⁺ ion to be reconstituted into heavier elements. In this case, helium, atomic number 2, remains the even-numbered counterpart to hydrogen. Thus, neutral hydrogen (hydrogen paired with an electron, the only stable lepton) constituted the vast majority of the remaining unannihilated portions of matter following the conclusion of inflation. Another exception to the rule is beryllium, which, despite an even atomic number (4), is rarer than the adjacent elements (lithium and boron). This is because most of the universe's lithium, beryllium, and boron are made by cosmic ray spallation, not ordinary stellar nucleosynthesis, and beryllium has only one stable isotope (and even that is a Borromean nucleus near the boundary of stability), causing it to lag in abundance behind its neighbors, each of which has two stable isotopes. The elemental basis of the Oddo–Harkins rule has direct roots in the isotopic compositions of the elements. [ 7 ] While even-atomic-numbered elements are more abundant than odd ones, the spirit of the Oddo–Harkins rule extends to the most abundant isotopes as well. Isotopes containing an equal number of protons and neutrons are the most abundant. These include ⁴He, ¹²C, ¹⁴N, ¹⁶O, ²⁰Ne, ²⁴Mg, ²⁸Si, and ³²S. Seven of the eight are alpha nuclides containing whole multiples of helium-4 nuclei (¹⁴N is the exception). Two of the eight (⁴He and ¹⁶O) contain magic numbers of either protons or neutrons (2, 8, 20, 28, 50, 82, and 126) and are therefore predicted by the nuclear shell model to be unusually abundant. The high abundances of the remaining six (¹²C, ¹⁴N, ²⁰Ne, ²⁴Mg, ²⁸Si, and ³²S) are not predicted by the shell model. "That nuclei of this type are unusually abundant indicates that the excess stability must have played a part in the process of the creation of elements", stated Maria Goeppert Mayer in her acceptance lecture for the Nobel Prize in Physics in 1963, awarded for discoveries concerning nuclear shell structure. [ 8 ] The Oddo–Harkins rule may suggest that elements with odd atomic numbers have a single, unpaired proton and may swiftly capture another in order to achieve an even atomic number and proton parity. Protons are paired in elements with even atomic numbers, with each member of the pair balancing the spin of the other, thus enhancing nucleon stability. A challenge to this explanation is posed by ¹⁴N, which is highly abundant in spite of having an unpaired proton.
Additionally, even-parity isotopes that have exactly two more neutrons than protons are not particularly abundant despite their even parity. Each of the light elements oxygen, neon, magnesium, silicon, and sulfur has two isotopes with even isospin (nucleon) parity. For each of these elements, the isotope with an equal number of protons and neutrons is one to two orders of magnitude more abundant than the isotope with even parity but two additional neutrons. Depending on the mass of a star, the Oddo–Harkins pattern arises from the burning of progressively more massive elements within a collapsing dying star by fusion processes such as the proton–proton chain, the CNO cycle, and the triple-alpha process. The newly formed elements are ejected slowly as stellar wind or in the explosion of a supernova and eventually join the rest of the galaxy's interstellar medium.
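As a schematic illustration of the alpha-process argument above, the following Python sketch lists the even-Z nuclides reached by successive alpha captures starting from carbon-12; real nucleosynthesis networks contain many more reactions, so this is only a rough sketch:

    # Each alpha capture adds two protons and two neutrons, so only even-Z
    # nuclides appear along this chain, which is the heart of the Oddo-Harkins
    # pattern between carbon and the iron peak.
    SYMBOLS = {6: "C", 8: "O", 10: "Ne", 12: "Mg", 14: "Si", 16: "S",
               18: "Ar", 20: "Ca", 22: "Ti", 24: "Cr", 26: "Fe", 28: "Ni"}

    def alpha_chain(z=6, a=12, steps=11):
        """Nuclides reached by repeated alpha capture starting from (Z, A)."""
        chain = []
        for _ in range(steps + 1):
            chain.append("%s-%d (Z=%d)" % (SYMBOLS.get(z, "?"), a, z))
            z, a = z + 2, a + 4
        return chain

    print(alpha_chain())   # ['C-12 (Z=6)', 'O-16 (Z=8)', ..., 'Ni-56 (Z=28)']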
https://en.wikipedia.org/wiki/Oddo–Harkins_rule
The Oddy test is a procedure created at the British Museum by conservation scientist William Andrew Oddy [ 1 ] in 1973, [ 2 ] in order to test materials for safety in and around art objects. Materials for construction and museum contexts (including artefact conservation) are routinely evaluated for safety; however, materials that are safe for building purposes may still emit trace amounts of chemicals that can harm art objects over time. Acids, formaldehyde, and other pollutants can damage and even destroy delicate artifacts if placed too close. The test calls for a sample of the material in question to be placed in an airtight container with three coupons of different metals (silver, lead, and copper) that are not touching each other or the sample of the material. [ 3 ] The container is sealed with a small amount of de-ionized water to maintain a high humidity, then heated at 60 degrees Celsius for 28 days. An identical container with three metal coupons acts as a control. If the metal coupons show no signs of corrosion, then the material is deemed suitable to be placed in and around art objects. The Oddy test is not a contact test, but a test for off-gassing. Each metal detects a different set of corrosive agents. The silver is for detecting reduced sulfur compounds and carbonyl sulfides. The lead is for detecting organic acids, aldehydes, and acidic gases. The copper is for detecting chloride, oxide, and sulfur compounds. There are many other types of materials testing for other purposes, including chemical testing and physical testing. The Oddy test has gone through many changes and refinements over time. Whereas Andrew Oddy proposed to place each metal coupon in a separate glass container with the material to be tested, Bamberger et al. [ 4 ] proposed a "three-in-one" test, where all three metal coupons share one container, simplifying the procedure. Robinett and Thickett (2003) [ 5 ] refined the "three-in-one" test by stabilizing the metal coupons. One of the main issues with the Oddy test is that there is some subjectivity in the interpretation of the results, [ 6 ] since it is primarily a visual determination. Some proposals have been made to use objective quantification methods for assessing the results of the Oddy test. [ 7 ] Institutions that use the Oddy test in their research are mainly art museums, such as the J. Paul Getty Museum, the Nelson-Atkins Museum of Art, and the Metropolitan Museum of Art.
https://en.wikipedia.org/wiki/Oddy_test
In computing, an odd–even sort or odd–even transposition sort (also known as brick sort [ 1 ] [ self-published source ] or parity sort) is a relatively simple sorting algorithm, developed originally for use on parallel processors with local interconnections. It is a comparison sort related to bubble sort, with which it shares many characteristics. It functions by comparing all odd/even indexed pairs of adjacent elements in the list and swapping pairs that are in the wrong order (where the first is larger than the second). The next step repeats this for even/odd indexed pairs (of adjacent elements). It then alternates between odd/even and even/odd steps until the list is sorted. On parallel processors, with one value per processor and only local left–right neighbor connections, the processors all concurrently do a compare–exchange operation with their neighbors, alternating between odd–even and even–odd pairings. This algorithm was originally presented, and shown to be efficient on such processors, by Habermann in 1972. [ 2 ] The algorithm extends efficiently to the case of multiple items per processor. In the Baudet–Stevenson odd–even merge-splitting algorithm, each processor sorts its own sublist at each step, using any efficient sort algorithm, and then performs a merge splitting, or transposition–merge, operation with its neighbor, with neighbor pairing alternating between odd–even and even–odd on each step. [ 3 ] A related but more efficient sort algorithm is the Batcher odd–even mergesort, using compare–exchange operations and perfect-shuffle operations. [ 4 ] Batcher's method is efficient on parallel processors with long-range connections. [ 5 ] The single-processor algorithm, like bubblesort, is simple but not very efficient; a zero-based index is assumed (a sketch is given after the correctness proof below). Claim: Let a₁, ..., aₙ be a sequence of data ordered by <. The odd–even sort algorithm correctly sorts this data in n passes. (A pass here is defined to be a full sequence of odd–even, or even–odd, comparisons. The passes occur in order: pass 1: odd–even, pass 2: even–odd, etc.) Proof: This proof is based loosely on one by Thomas Worsch. [ 6 ] Since the sorting algorithm only involves comparison-swap operations and is oblivious (the order of comparison-swap operations does not depend on the data), by Knuth's 0–1 sorting principle, [ 7 ] [ 8 ] it suffices to check correctness when each aᵢ is either 0 or 1. Assume that there are e 1s. Observe that the rightmost 1 can be in either an even or an odd position, so it might not be moved by the first odd–even pass. But after the first odd–even pass, the rightmost 1 will be in an even position. It follows that it will be moved to the right by all remaining passes. Since the rightmost 1 starts in a position greater than or equal to e, it must be moved at most n − e steps. It follows that it takes at most n − e + 1 passes to move the rightmost 1 to its correct position. Now, consider the second rightmost 1. After two passes, the 1 to its right will have moved right by at least one step. It follows that, for all remaining passes, we can view the second rightmost 1 as the rightmost 1. The second rightmost 1 starts in position at least e − 1 and must be moved to position at most n − 1, so it must be moved at most (n − 1) − (e − 1) = n − e steps.
After at most 2 passes, the rightmost 1 will have already moved, so the entry to the right of the second rightmost 1 will be 0. Hence, for all passes after the first two, the second rightmost 1 will move to the right. It thus takes at most n − e + 2 passes to move the second rightmost 1 to its correct position. Continuing in this manner, by induction it can be shown that the i-th rightmost 1 is moved to its correct position in at most n − e + i passes. Since i ≤ e, it follows that the i-th rightmost 1 is moved to its correct position in at most n − e + e = n passes. The list is thus correctly sorted in n passes. QED. We remark that each pass takes O(n) steps, so this algorithm has O(n²) complexity.
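The single-processor version referred to just before the proof can be sketched as follows in Python, using zero-based indexing; this is an illustrative implementation rather than the exact listing of the cited sources:

    def odd_even_sort(a):
        """In-place odd-even transposition sort (zero-based indexing)."""
        n = len(a)
        is_sorted = False
        while not is_sorted:
            is_sorted = True
            for start in (1, 0):              # odd-even pass, then even-odd pass
                for i in range(start, n - 1, 2):
                    if a[i] > a[i + 1]:
                        a[i], a[i + 1] = a[i + 1], a[i]
                        is_sorted = False
        return a

    print(odd_even_sort([5, 3, 8, 1, 9, 2]))   # [1, 2, 3, 5, 8, 9]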
https://en.wikipedia.org/wiki/Odd–even_sort
Odfjell Drilling Ltd. is an oil drilling , well service , and engineering company. The company has 3 divisions: [ 3 ] The company was established in 1973 as an affiliate of Odfjell . In 1974, the first rigs were delivered from Aker ASA , and started service for ELF and Saga Petroleum . The first production drilling contract was awarded by Statoil on the Statfjord oil field in 1979. In 1984, the company expanded to the United Kingdom with a semi-submersible rig for Hamilton Brothers Oil & Gas. In 1989 the company opened an office in Singapore . In 1995, the company decided to concentrate on the North Sea . [ 4 ] In 2013, the company became a public company via an initial public offering on the Oslo Stock Exchange . [ 5 ] In 2017, the company sold its 37% interest in Robotic Drilling Systems, which it acquired in 2014. [ 6 ] [ 7 ] In 2018, the company announced plans to expand its rig count from 4 to 6 to 10. [ 8 ] In 2018, the company acquired a drilling rig from Samsung . [ 9 ]
https://en.wikipedia.org/wiki/Odfjell_Drilling
Odile Eisenstein ForMemRS is a theoretical chemist who specializes in modelling the structure and reactivity of transition metal and lanthanide complexes. She is currently the equivalent of an emeritus professor at the Institut Charles Gerhardt Montpellier (équipe CTMM) at Montpellier 2 University and a professor at the Hylleraas Centre for Quantum Molecular Sciences at the University of Oslo. [ 1 ] She has been a member of the French Academy of Sciences since 2013, as the first female elect. [ 2 ] In 2018 she was awarded the insignia of Officer of the Legion of Honour (officier dans l'ordre de la Légion d'honneur) at the Institut de France in Paris. [ 3 ] In 1977, Odile Eisenstein earned a Ph.D. in chemistry from the University of Paris-Sud, working with Nguyen Trong Anh and Lionel Salem. She held postdoctoral appointments with Jack D. Dunitz at ETH Zurich and Roald Hoffmann at Cornell University, where she worked on the nature of transition metal–olefin bonding interactions. She began her independent career at the University of Michigan at Ann Arbor in 1982.
https://en.wikipedia.org/wiki/Odile_Eisenstein
Odin is a utility program developed and used internally by Samsung which communicates with Samsung devices in Odin mode (also called download mode) using the Thor protocol. It can be used to flash a custom recovery firmware image (as opposed to the stock recovery firmware image) to a Samsung Android device. Odin is also used for unbricking certain Android devices. [ 2 ] Odin is Samsung's proprietary alternative to Fastboot. There is no account of Samsung ever having officially released Odin, [ 3 ] though it is mentioned in the developer documents for the Samsung Knox SDK [ 4 ] and some documents even instruct users to use Odin. [ 5 ] Some other documents on the Knox SDK reference "engineering firmware", [ 6 ] [ 7 ] which presumably can be a part of the Knox SDK along with Odin. Publicly available binaries are believed to be the result of leaks. The tool is not intended for end users, but for Samsung's own personnel and approved repair centers. [ 8 ] Although none of the publicly available downloads are authorized by Samsung itself, XDA-Developers consider the files offered on their forum (Patched Odin v3 3.14.1 for Windows; Odin v4 1.2.1 for Linux) the safest option. A version 3.14.4 also exists, but it is not a stable release. To use Odin, the phone needs to be in download mode. For this, a key combination needs to be pressed, such as Power + Volume Down + Home, or Power + Volume Down + Bixby for later models. [ 9 ] Heimdall is a free/libre/open-source, cross-platform replacement for Odin which is based on libusb. [ 3 ] Heimdall can be used on Mac or Linux. [ 10 ] The name Heimdall, like Odin, is an allusion to Norse mythology; both Odin and Heimdall are among the deities of the Norse pantheon. [ 11 ] [ non-primary source needed ]
https://en.wikipedia.org/wiki/Odin_(firmware_flashing_software)
Odyssey Space Research, LLC is a small business based in Houston, Texas, near NASA's Lyndon B. Johnson Space Center, providing engineering research and analysis services. The start-up, founded in November 2003, has won major contracts in the space industry and is the only private company working on the next five human-rated spacecraft (ATV, HTV, Orion, and both COTS spacecraft, with SpaceX and Orbital Sciences Corporation). [ needs update ] On June 9, 2011, Odyssey Space Research, L.L.C., announced a space-based experimental app, dubbed SpaceLab for iOS, which will be used for space research aboard the International Space Station (ISS). The SpaceLab for iOS app will make its way to the ISS on an iPhone 4 aboard the orbiter Atlantis on the space shuttle fleet's historic final mission, STS-135, and will remain there for several months for the ISS crew to conduct a series of experiments. Odyssey also announced it is bringing the astronauts' on-orbit experimental tasks down to earth for "terrestrial" consumers via the SpaceLab for iOS app, available from the App Store. [ 1 ] On August 31, 2006, NASA announced the results of the Orion crew exploration vehicle (CEV) development contract competition. Odyssey Space Research is part of the winning Lockheed Martin team supporting NASA's Orion project. The Odyssey role will include support of the vehicle guidance, navigation and control (GN&C), simulation development, and related analysis. On August 18, 2006, NASA announced the results of the Commercial Orbital Transportation Services (COTS) demonstration competition. Odyssey Space Research is part of one of the two winning COTS teams: SpaceX. Odyssey's role will include support of the Dragon vehicle guidance, navigation and control (GN&C) development, selected simulation and test-bed development, related analyses, systems engineering and operations.
https://en.wikipedia.org/wiki/Odyssey_Space_Research
Oenochroma subustaria , also known as the grey wine moth , [ 3 ] is a species of moth of the family Geometridae . [ 2 ] It is found in Australia , including Tasmania . [ 3 ]
https://en.wikipedia.org/wiki/Oenochroma_subustaria
In mathematics, the phrase "of the form" indicates that a mathematical object, or (more frequently) a collection of objects, follows a certain pattern of expression. It is frequently used to reduce the formality of mathematical proofs. Here is a proof which should be accessible with a limited mathematical background: Statement: The product of any two even natural numbers is also even. Proof: Any even natural number is of the form 2n, where n is a natural number. Therefore, let us assume that we have two even numbers, which we will denote by 2k and 2l. Their product is (2k)(2l) = 4(kl) = 2(2kl). Since 2kl is also a natural number, the product is even. Note: In this case, both exhaustivity and exclusivity were needed. That is, it was not only necessary that every even number is of the form 2n (exhaustivity), but also that every expression of the form 2n is an even number (exclusivity). This will not be the case in every proof, but normally, at least exhaustivity is implied by the phrase "of the form".
https://en.wikipedia.org/wiki/Of_the_form
Ofer Lahav ( Hebrew : עופר להב ) FRAS FInstP is Perren Chair of Astronomy at University College London (UCL) , Vice-Dean (International) of the UCL Faculty of Mathematical and Physical Sciences (MAPS) and Co-Director of the STFC Centre for Doctoral Training in Data Intensive Science. His research area is Observational Cosmology, in particular probing Dark Matter and Dark Energy. His work involves Machine Learning for Big Data. Lahav served as the UCL Head of Astrophysics (2004–2011), Vice-Dean (Research) of UCL's Faculty of Mathematical and Physical Sciences (2011–2015), and as Vice-President of the Royal Astronomical Society (2010–2012). He is one of the founders of the Dark Energy Survey (DES) , and he co-chaired the international DES Science Committee from inception until 2016. He chairs both the DES:UK and DESI :UK consortia, as well as the DES Advisory Board. He previously served as a member of the STFC Science Board (2016–2019). From 2012 to 2018, Lahav held a European Research Council (ERC) Advanced Grant on "Testing the Dark Energy Paradigm" (TESTDE programme). Lahav studied physics at Tel-Aviv University (BSc, 1980), physics at Ben-Gurion University (MSc, 1985) and earned his Ph.D. (1988) in astronomy [ 3 ] from the University of Cambridge , where he was later a member of staff at the Institute of Astronomy (1990–2003) and a Fellow of St Catharine's College, Cambridge . Lahav's research is focused on cosmological probes of Dark Matter and Dark Energy, [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] in particular large galaxy surveys. [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] Lahav has co-authored over 400 [ 15 ] research articles in peer reviewed scientific journals, including 10 invited review articles and book chapters. Lahav is a Thomson ISI highly cited author, [ 16 ] h-factor 83. His past doctoral students include Chris Lintott . [ 2 ]
https://en.wikipedia.org/wiki/Ofer_Lahav
Off-center ions in crystals are substitutional impurity ions whose equilibrium position is shifted away from the regular lattice site. The magnitude of the shift typically ranges from 0.2 to 1.0 Å. There are two possible mechanisms that can cause impurity ion displacement. If the impurity ion is smaller than the regular ion (by 10% or more), the displacement arises because the repulsive forces between the impurity ion and its nearest neighbors stabilizing the ion at the regular site are strongly weakened. [ 1 ] [ 2 ] [ 3 ] [ 4 ] If the impurity ion is bigger than the regular ion, the displacement arises because of different covalency of the chemical bonds with the nearest neighbors for the impurity and regular ions. [ 5 ] [ 6 ] Off-center position of substitutional ions was first discovered in lithium-doped KCl by two groups of American physicists in 1965. [ 7 ] [ 8 ] Since these pioneer works crystals with off-center impurity ions have attracted continuous attention. The cause of such interest is that these crystals can be used as good model objects for the investigation of such key phenomena in solid state physics as quantum tunnelling of atomic particles in solid state, cooperative properties of the system of local centers with internal degrees of freedom, and ferroelectricity .
https://en.wikipedia.org/wiki/Off-center_ions
Offsite construction refers to the planning, design, manufacture and assembly of building elements at a location other than their final installed location, to support the rapid and efficient construction of a permanent structure. Such building elements may be prefabricated at a different location and transported to the site, or prefabricated on the construction site and then transported to their final position. Offsite construction is characterized by an integrated planning and supply chain optimization strategy. Offsite manufacturing (OSM), offsite production (OSP) and offsite fabrication (OSF) are terms used when referring primarily to the factory work proper. [ 1 ] Off-site construction (like on-site construction) can be used for a variety of purposes including residential, educational, health care and commercial applications. Buildings can range from a few modular units to several hundred. They can be arranged in architectural configurations and can be many stories in height. Boston Consulting Group writers Romain de Laubier et al. identify six advantages of offsite construction. Where these factors are measurable, they suggest, for example, that best-in-class offsite construction operates with a defect-free rate for new buildings at over 95%, that construction waste and emissions can be reduced by 50% in comparison with onsite construction, and that building completion times can be cut by one third. One area where risk is reduced relates to the engagement of subcontractors. [ 2 ] Off-site construction is very similar to modular construction, but it is focused primarily on permanent construction; modular construction can be either permanent or relocatable. Off-site construction is also known as OSC and incorporates many modern methods of construction (MMC) technologies. Prefabrication of building components has been ongoing since the industrial revolution, especially with the adoption of the balloon frame construction method in the 1830s. Applied to single-family homes, it gave rise to many kit homes, such as the Sears Modern Homes imagined by the eponymous company in 1908. The rise of steel frames and the first skyscrapers led to the industrial production of steel components, produced off-site. In 1930, the Empire State Building, one of the most famous skyscrapers in New York City, was built essentially off-site, in the record time of one year and 45 days. Thanks to the prefabricated elements, a new floor was built every day, seven every week. [ 3 ] After the Second World War, Walter Gropius and Konrad Wachsmann designed a new type of prefabricated single-family house, based on a grid of wooden panels and a seamless metal assembly. [ 4 ] Today, the most widely used form of prefabrication in building and civil engineering is the use of prefabricated concrete and prefabricated steel sections in structures where a particular part or form is repeated many times. Industry experts and scholars have used the term off-site construction since the 1990s to describe technological, engineering and industrialization developments in the building sector. [ 5 ] Off-site construction has been a subject of research since at least 2004. [ 6 ] In 2004, a study by the building and civil engineering department of Loughborough University showed the value of off-site construction in enhancing building quality. [ 7 ] In 2017, the British government commissioned the Farmer Review of the UK Construction Labour Model.
The review highlighted the weaknesses of traditional methods of construction, especially their lack of productivity, and called for the widespread adoption of off-site construction. As a result, in 2020, the UK Department for Education announced a £3 billion investment to build one hundred and twenty off-site schools in four years. [ 8 ] The North America off-site construction market was valued at $49,460.1 million in 2021, and is projected to reach $80,851.3 million by 2031, registering a CAGR of 4.9% from 2022 to 2031. [ 9 ] In May 2022, the Cree Nation communities in Canada received $17.4 million to deploy modular housing. Such factors are anticipated to significantly boost the North America off-site construction market. [ 9 ] An Autodesk study mentions 6,000 houses built off-site every year, that is, 12% of the new homes built in a year. [ 10 ] Led by current research drawing attention to the industry's potential, the Melbourne School of Engineering and the Centre for Advanced Manufacturing of Prefabricated Housing want to grow the prefab market share within the Australian construction industry from five per cent to 15 per cent by 2025. [ 11 ] One example is a 225-meter tower created by Richard Rogers and finished in 2014, which was built 80% off-site. [ 12 ] Another is a 30-meter-high hotel by the SeArch agency, in which 176 of the 200 rooms were prefabricated. [ 13 ] TopHat is a British start-up, founded in 2019, that designs modular buildings made of recycled and bio-sourced materials. Its first development started in 2018 and was completed in 2019. The company received a capital investment from Goldman Sachs. [ 10 ] In Germany, the largest residential property company, Vonovia, with €33 billion under management, delivered its first modular operation of 38 homes on the outskirts of Wiesbaden, a city of 300,000 inhabitants, in 2018. [ 10 ] GA Smart Building is a pioneer of off-site construction in France. The off-site approach of this developer and builder dates from the 1970s. It has eight French factories: three for wood in the Loire and Vosges regions, three for concrete in Normandy, the Grand-Est and Occitanie regions, one for thermal and lighting comfort equipment, and one for joinery. Full Stack Modular, an American off-site player with a similar positioning, set up its main factory in Brooklyn in 2016 and has been operating it ever since. Dvele is a designer and producer of high-end prefabricated homes. Their modular homes are marketed to both individual home buyers and larger, multi-unit developers. They currently offer 20 different floor plans, which range from 705-square-foot tiny homes and ADUs to large, two-story homes with nearly 4,000 square feet of living space. [ 14 ]
https://en.wikipedia.org/wiki/Off-site_construction
An off-stoichiometry thiol-ene polymer is a polymer platform comprising off-stoichiometry thiol-enes (OSTE) and off-stoichiometry thiol-ene-epoxies (OSTE+). The OSTE polymers comprise off-stoichiometry blends of thiols and allyls. After complete polymerization, typically by UV micromolding, the polymer articles contain a well-defined number of unreacted thiol or allyl groups, both on the surface and in the bulk. These surface anchors can be used for subsequent direct surface modification or bonding. [ 1 ] In later versions, epoxy monomers were added to form ternary thiol-ene-epoxy monomer systems (OSTE+), in which the epoxy reacts in a second step with the excess thiols, creating a final polymer article that is completely inert. [ 2 ] Some of the critical features of OSTE+ polymers include uncomplicated and rapid fabrication of complex structures in a standard chemistry lab, hydrophilic native surface properties and covalent bonding via latent epoxy chemistry. [ 3 ] The OSTE polymer resins were originally developed by Tommy Haraldsson and Fredrik Carlborg in the group of Micro and Nanosystems [ 4 ] at the Royal Institute of Technology (KTH) to bridge the gap between research prototyping and commercial production of microfluidic devices. [ 1 ] The resins were later adapted and improved for commercial applications by the Swedish start-up Mercene Labs AB under the name OSTEMER. The OSTE resins are cured via a rapid thiol-ene "click" reaction between thiols and allyls. Because the thiols and allyls react in a perfectly alternating fashion and the reaction has a very high conversion rate (up to 99%), [ 5 ] the initial off-stoichiometry of the monomers exactly defines the number of unreacted groups left after polymerization. With the right choice of monomers, very high off-stoichiometry ratios can be attained while maintaining good mechanical properties. [ 1 ] The off-stoichiometry thiol-ene-epoxies, or OSTE+ polymers, are created in a two-step curing process, where a first rapid thiol-ene reaction defines the geometric shape of the polymer while leaving an excess of thiols and all the epoxy unreacted. In a second step, all the remaining thiol groups and the epoxy groups are reacted to form an inert polymer. [ 6 ] The main advantages put forward for the UV-cured OSTE polymers in microsystems have been (i) their dry bonding capacity, by reacting a polymer with thiol excess to a second polymer with allyl excess at room temperature using only UV light, (ii) the well-defined and tunable number of surface anchors (thiols or allyls) present on the surface that can be used for direct surface modification, [ 7 ] and (iii) their wide tuning range of mechanical properties, from rubbery to thermoplastic-like, depending only on the choice of off-stoichiometry. [ 8 ] [ 1 ] The glass transition temperature typically varies from below room temperature for high off-stoichiometric ratios to 75 °C for a stoichiometric blend of tetrathiol and triallyl. [ 9 ] They are typically transparent in the visible range. A disadvantage put forward for the OSTE polymers is the leaching of unreacted monomers at very high off-stoichiometric ratios, which may affect cells and proteins in labs-on-chips, [ 1 ] although cell viability has been observed for cell cultures on low off-stoichiometric OSTE. [ 10 ] The dual-cure thiol-ene-epoxies, or OSTE+ polymers, differ from the OSTE polymers in that they have two separate curing steps.
After the first UV-initiated step, the polymer is rubbery and can easily be deformed, [ 11 ] and it has surface anchors available for surface modification. [ 12 ] During the second step, when all the thiols and epoxies are reacted, the polymer stiffens and can bond to a wide number of substrates, including itself, via the epoxy chemistry. The advantages put forward for OSTE+ are (i) the unique ability for integration and bonding via the latent epoxy chemistry and the low built-in stresses in thiol-ene polymers, [ 13 ] (ii) their complete inertness after final cure, and (iii) their good barrier properties [ 14 ] and the possibility to scale up manufacturing using industrial reaction injection molding. [ 15 ] Both stiff and rubbery versions of the OSTE+ polymers have been demonstrated, showing their potential in microsystems for valving and pumping similar to PDMS components, but with the benefit of withstanding higher pressures. [ 11 ] The commercial version of the OSTE+ polymer, OSTEMER 322, has been shown to be compatible with many cell lines. [ 16 ] The OSTE resins can be cast and cured in structured silicone molds [ 1 ] or coated permanent photoresist. [ 17 ] OSTE polymers have also shown excellent photostructuring capability [ 18 ] using photomasks, enabling for example powerful and flexible capillary pumps. [ 19 ] The OSTE+ resins are first UV-cured in the same way as the OSTE polymers but are later thermally cured to stiffen and bond to a substrate. OSTE+ allows for soft-lithography microstructuring and strong, biocompatible dry bonding to almost any substrate during lab-on-a-chip (LoC) manufacturing, while simultaneously mimicking the mechanical properties found in thermoplastic polymers, hence allowing for true prototyping of commercial LoCs. [ 20 ] The materials commonly used for microfluidics suffer from unwieldy steps and often ineffective bonding processes, especially when packaging biofunctionalized surfaces, which makes LoC assembly difficult and costly. [ 21 ] [ 22 ] The OSTE+ polymer effectively bonds to nine dissimilar types of substrates, requires no surface treatment prior to bonding at room temperature, features a high Tg, and achieves good bonding strength to at least 100 °C. [ 20 ] Moreover, it has been demonstrated that excellent results can be obtained using photolithography on OSTE polymer, opening wider potential applications. [ 23 ] Biosensors are used for a range of biological measurements. [ 24 ] [ 25 ] OSTE packaging for biosensing has been demonstrated for QCM [ 26 ] and photonic ring resonator sensors. [ 27 ] Adhesive wafer bonding has become an established technology in microelectromechanical systems (MEMS) integration and packaging applications. [ 28 ] OSTE is suitable for heterogeneous silicon wafer-level integration in low-temperature processes due to its ability to cure even at room temperature. [ 29 ] Imprinting of arrays with hydrophilic-in-hydrophobic microwells is made possible using an innovative surface-energy replication approach by means of a hydrophobic thiol-ene polymer formulation. In this polymer, hydrophobic-moiety-containing monomers self-assemble at the hydrophobic surface of the imprinting stamp, which results in a hydrophobic replica surface after polymerization. After removing the stamp, microwells with hydrophobic walls and a hydrophilic bottom are obtained. Such a fast and inexpensive procedure can be utilised in digital microwell array technology toward diagnostic applications.
[ 30 ] [ 31 ] OSTE resin can also be used as e-beam resist, resulting in nanostructures that allow direct protein functionalization. [ 32 ]
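Returning to the off-stoichiometry bookkeeping described at the start of this article, the following Python sketch shows the simple ratio arithmetic linking the chosen thiol excess to the fraction of thiol groups left as anchors; it assumes ideal, complete consumption of the ene groups, and the numbers are illustrative rather than measured values:

    def unreacted_thiol_fraction(thiol_eq, ene_eq):
        """Fraction of thiol groups remaining once all ene groups have reacted
        (idealized bookkeeping for a thiol-excess formulation)."""
        if thiol_eq <= ene_eq:
            raise ValueError("expected a thiol excess (thiol_eq > ene_eq)")
        return (thiol_eq - ene_eq) / thiol_eq

    # A 1.5 : 1 thiol excess leaves about one third of the thiol groups
    # unreacted, available as surface/bulk anchors (or, in OSTE+, for the
    # second-step reaction with the epoxy).
    print(unreacted_thiol_fraction(1.5, 1.0))   # 0.333...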
https://en.wikipedia.org/wiki/Off-stoichiometry_thiol-ene_polymer