https://en.wikipedia.org/wiki/Kidney%20bean
Kidney bean
The kidney bean is a variety of the common bean (Phaseolus vulgaris) named for its resemblance to a human kidney.

Classification

There are several classifications of kidney beans, such as: Red kidney bean (also known as common kidney bean, rajma in India, or surkh (red) lobia in Pakistan). Light speckled kidney bean (and long shape light speckled kidney bean). Red speckled kidney bean (and long shape red speckled kidney bean). White kidney bean (also known as cannellini in Italy and the UK, lobia in India, or safaid (white) lobia in Pakistan).

Nutrition

Kidney beans, cooked by boiling, are 67% water, 23% carbohydrates, 9% protein, and contain negligible fat. In a 100-gram reference amount, cooked kidney beans provide food energy and are a rich source (20% or more of the Daily Value, DV) of protein, folate (33% DV), iron (22% DV), and phosphorus (20% DV), with moderate amounts (10–19% DV) of thiamine, copper, magnesium, and zinc (11–14% DV).

Dishes

Red kidney beans are commonly used in chili con carne and in the cuisines of India, where the beans are known as rajma, and Pakistan, where they are called surkh lobia. Red kidney beans are used in southern Louisiana for the classic Monday Creole dish of red beans and rice. The smaller, darker red beans are also used, particularly in Louisiana families with a recent Caribbean heritage. In Jamaica, they are referred to as red peas. Small kidney beans used in La Rioja, Spain, are called caparrones. In the Netherlands and Indonesia, kidney beans are usually served as a soup called brenebon. In the Levant, a common dish consisting of kidney bean stew usually served with rice is known as fasoulia. To make bean paste, kidney beans are generally prepared from dried beans and boiled until they are soft, at which point the dark red beans are pulverized into a dry paste.
Toxicity

Red kidney beans contain relatively high amounts of phytohemagglutinin, and thus are more toxic than most other bean varieties if not soaked and then boiled for at least 10 minutes. The US Food and Drug Administration recommends boiling for 30 minutes to ensure the beans reach a sufficient temperature for long enough to completely destroy the toxin. Cooking at a lower temperature, such as in a slow cooker, is insufficient to denature the toxin and has been reported to cause food poisoning. Canned red kidney beans, though, are safe to eat straight from the can, as they are cooked prior to being shipped. As few as five raw beans or a single undercooked kidney bean can cause severe nausea, diarrhea, vomiting, and abdominal pains.
Biology and health sciences
Pulses
Plants
https://en.wikipedia.org/wiki/Genetic%20drift
Genetic drift
Genetic drift, also known as random genetic drift, allelic drift or the Wright effect, is the change in the frequency of an existing gene variant (allele) in a population due to random chance. Genetic drift may cause gene variants to disappear completely and thereby reduce genetic variation. It can also cause initially rare alleles to become much more frequent and even fixed. When few copies of an allele exist, the effect of genetic drift is more notable, and when many copies exist, the effect is less notable (due to the law of large numbers). In the middle of the 20th century, vigorous debates occurred over the relative importance of natural selection versus neutral processes, including genetic drift. Ronald Fisher, who explained natural selection using Mendelian genetics, held the view that genetic drift plays at most a minor role in evolution, and this remained the dominant view for several decades. In 1968, population geneticist Motoo Kimura rekindled the debate with his neutral theory of molecular evolution, which claims that most instances where a genetic change spreads across a population (although not necessarily changes in phenotypes) are caused by genetic drift acting on neutral mutations. In the 1990s, constructive neutral evolution was proposed, which seeks to explain how complex systems emerge through neutral transitions.

Analogy with marbles in a jar

The process of genetic drift can be illustrated using 20 marbles in a jar to represent 20 organisms in a population. Consider this jar of marbles as the starting population. Half of the marbles in the jar are red and half are blue, with each colour corresponding to a different allele of one gene in the population. In each new generation, the organisms reproduce at random. To represent this reproduction, randomly select a marble from the original jar and deposit a new marble with the same colour into a new jar.
This is the "offspring" of the original marble, meaning that the original marble remains in its jar. Repeat this process until 20 new marbles are in the second jar. The second jar will then contain 20 "offspring", or marbles of various colours. Unless the second jar contains exactly 10 red marbles and 10 blue marbles, a random shift has occurred in the allele frequencies. If this process is repeated a number of times, the numbers of red and blue marbles picked each generation fluctuate. Sometimes, a jar has more red marbles than its "parent" jar and sometimes more blue. This fluctuation is analogous to genetic drift – a change in the population's allele frequency resulting from a random variation in the distribution of alleles from one generation to the next. In any one generation, it is possible that no marbles of a particular colour are chosen, meaning they have no offspring. In this example, if no red marbles are selected, the jar representing the new generation contains only blue offspring. If this happens, the red allele has been lost permanently in the population, while the remaining blue allele has become fixed: all future generations are entirely blue. In small populations, fixation can occur in just a few generations.

Probability and allele frequency

The mechanisms of genetic drift can be illustrated with a very simple example. Consider a very large colony of bacteria isolated in a drop of solution. The bacteria are genetically identical except for a single gene with two alleles labeled A and B, which are neutral alleles, meaning that they do not affect the bacteria's ability to survive and reproduce; all bacteria in this colony are equally likely to survive and reproduce. Suppose that half the bacteria have allele A and the other half have allele B. Thus, A and B each have an allele frequency of 1/2. The drop of solution then shrinks until it has only enough food to sustain four bacteria. All other bacteria die without reproducing.
Among the four that survive, 16 possible combinations for the A and B alleles exist: (A-A-A-A), (B-A-A-A), (A-B-A-A), (B-B-A-A), (A-A-B-A), (B-A-B-A), (A-B-B-A), (B-B-B-A), (A-A-A-B), (B-A-A-B), (A-B-A-B), (B-B-A-B), (A-A-B-B), (B-A-B-B), (A-B-B-B), (B-B-B-B). Since all bacteria in the original solution are equally likely to survive when the solution shrinks, the four survivors are a random sample from the original colony. The probability that each of the four survivors has a given allele is 1/2, and so the probability that any particular allele combination occurs when the solution shrinks is (1/2)^4 = 1/16 (the original population size is so large that the sampling effectively happens with replacement). In other words, each of the 16 possible allele combinations is equally likely to occur, with probability 1/16. Counting the combinations with the same number of A and B gives the following table:

Number of A : Number of B : Combinations : Probability
4 : 0 : 1 : 1/16
3 : 1 : 4 : 4/16
2 : 2 : 6 : 6/16
1 : 3 : 4 : 4/16
0 : 4 : 1 : 1/16

As shown in the table, the total number of combinations that have the same number of A alleles as of B alleles is six, and the probability of this combination is 6/16. The total number of other combinations is ten, so the probability of an unequal number of A and B alleles is 10/16. Thus, although the original colony began with an equal number of A and B alleles, quite possibly the number of alleles in the remaining population of four members will not be equal. The situation of equal numbers is actually less likely than unequal numbers. In the latter case, genetic drift has occurred because the population's allele frequencies have changed due to random sampling. In this example, the population contracted to just four random survivors, a phenomenon known as a population bottleneck.
The probabilities for the number of copies of allele A (or B) that survive (given in the last column of the above table) can be calculated directly from the binomial distribution with "success" probability (the probability of a given allele being present) equal to 1/2: the probability that there are k copies of A (or B) alleles in the combination is given by

P(k) = (n! / (k!(n − k)!)) (1/2)^k (1/2)^(n − k) = (n! / (k!(n − k)!)) / 2^n,

where n = 4 is the number of surviving bacteria.

Mathematical models

Mathematical models of genetic drift can be designed using either branching processes or a diffusion equation describing changes in allele frequency in an idealised population.

Wright–Fisher model

Consider a gene with two alleles, A or B. Diploid populations consisting of N individuals have 2N copies of each gene. An individual can have two copies of the same allele or two different alleles. The frequency of one allele is assigned p and the other q. The Wright–Fisher model (named after Sewall Wright and Ronald Fisher) assumes that generations do not overlap (for example, annual plants have exactly one generation per year) and that each copy of the gene found in the new generation is drawn independently at random from all copies of the gene in the old generation. The formula to calculate the probability of obtaining k copies of an allele that had frequency p in the last generation is then

P(k) = ((2N)! / (k!(2N − k)!)) p^k q^(2N − k),

where the symbol "!" signifies the factorial function. This expression can also be formulated using the binomial coefficient:

P(k) = C(2N, k) p^k q^(2N − k).

Moran model

The Moran model assumes overlapping generations. At each time step, one individual is chosen to reproduce and one individual is chosen to die. So in each timestep, the number of copies of a given allele can go up by one, go down by one, or stay the same. This means that the transition matrix is tridiagonal, and so mathematical solutions are easier for the Moran model than for the Wright–Fisher model.
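The binomial expressions above can be checked numerically. The following sketch (illustrative Python, not part of the original article) reproduces the four-bacteria counts and confirms that the Wright–Fisher transition probabilities form a valid distribution; the parameter values are arbitrary:

```python
from math import comb

# Four-bacteria example: P(k copies of A among the survivors) = C(4, k) / 2^4
n = 4
probs = {k: comb(n, k) / 2**n for k in range(n + 1)}
print(probs[2])      # equal numbers of A and B: 6/16 = 0.375
print(1 - probs[2])  # unequal numbers: 10/16 = 0.625

# Wright–Fisher transition probabilities: P(k) = C(2N, k) p^k q^(2N - k)
def wf_transition(N, p):
    q = 1 - p
    return [comb(2 * N, k) * p**k * q**(2 * N - k) for k in range(2 * N + 1)]

dist = wf_transition(N=10, p=0.3)
print(abs(sum(dist) - 1.0) < 1e-12)  # probabilities sum to one
```

The mean of this distribution is 2Np, so on average the allele frequency is unchanged; drift is pure sampling noise around that mean.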
On the other hand, computer simulations are usually easier to perform using the Wright–Fisher model, because fewer time steps need to be calculated. In the Moran model, it takes N timesteps to get through one generation, where N is the effective population size. In the Wright–Fisher model, it takes just one. In practice, the Moran and Wright–Fisher models give qualitatively similar results, but genetic drift runs twice as fast in the Moran model.

Other models of drift

If the variance in the number of offspring is much greater than that given by the binomial distribution assumed by the Wright–Fisher model, then given the same overall speed of genetic drift (the variance effective population size), genetic drift is a less powerful force compared to selection. Even for the same variance, if higher moments of the offspring number distribution exceed those of the binomial distribution, then again the force of genetic drift is substantially weakened.

Random effects other than sampling error

Random changes in allele frequencies can also be caused by effects other than sampling error, for example random changes in selection pressure. One important alternative source of stochasticity, perhaps more important than genetic drift, is genetic draft. Genetic draft is the effect on a locus of selection on linked loci. The mathematical properties of genetic draft are different from those of genetic drift: the direction of the random change in allele frequency is autocorrelated across generations.

Drift and fixation

The Hardy–Weinberg principle states that within sufficiently large populations, the allele frequencies remain constant from one generation to the next unless the equilibrium is disturbed by migration, genetic mutations, or selection. However, in finite populations, no new alleles are gained from the random sampling of alleles passed to the next generation, but the sampling can cause an existing allele to disappear.
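This sampling process can be made concrete with a minimal simulation of the Wright–Fisher model (an illustrative Python sketch; the population size, starting frequency, and seed are arbitrary choices, not from the original text):

```python
import random

def wright_fisher(N, p0, generations, seed=0):
    """Track the frequency of allele A in a diploid population of N
    individuals (2N gene copies) evolving under pure drift."""
    rng = random.Random(seed)
    copies = round(2 * N * p0)
    history = [copies / (2 * N)]
    for _ in range(generations):
        p = copies / (2 * N)
        # Binomial sampling: each of the 2N offspring copies independently
        # inherits allele A with probability equal to its parental frequency.
        copies = sum(rng.random() < p for _ in range(2 * N))
        history.append(copies / (2 * N))
    return history

traj = wright_fisher(N=20, p0=0.5, generations=100)
print(traj[-1])  # in a small population the allele is often already fixed (0.0 or 1.0)
```

Running this repeatedly shows the behaviour described above: the frequency wanders randomly until it hits 0 or 1, after which it no longer changes.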
Because random sampling can remove, but not replace, an allele, and because random declines or increases in allele frequency influence expected allele distributions for the next generation, genetic drift drives a population towards genetic uniformity over time. When an allele reaches a frequency of 1 (100%) it is said to be "fixed" in the population, and when an allele reaches a frequency of 0 (0%) it is lost. Smaller populations achieve fixation faster, whereas in the limit of an infinite population, fixation is not achieved. Once an allele becomes fixed, genetic drift comes to a halt, and the allele frequency cannot change unless a new allele is introduced into the population via mutation or gene flow. Thus even while genetic drift is a random, directionless process, it acts to eliminate genetic variation over time.

Rate of allele frequency change due to drift

Assuming genetic drift is the only evolutionary force acting on an allele, after t generations in many replicated populations, starting with allele frequencies of p and q, the variance in allele frequency across those populations is

V_t = pq (1 − (1 − 1/(2N))^t).

Time to fixation or loss

Assuming genetic drift is the only evolutionary force acting on an allele, at any given time the probability that an allele will eventually become fixed in the population is simply its frequency in the population at that time. For example, if the frequency p for allele A is 75% and the frequency q for allele B is 25%, then given unlimited time the probability A will ultimately become fixed in the population is 75% and the probability that B will become fixed is 25%. The expected number of generations for fixation to occur is proportional to the population size, such that fixation is predicted to occur much more rapidly in smaller populations. Normally the effective population size, which is smaller than the total population, is used to determine these probabilities.
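These variances and timescales can be evaluated directly. The sketch below is illustrative Python; the closed-form expressions it uses (the drift variance V_t = pq(1 − (1 − 1/(2N))^t) and the Wright–Fisher expected times to fixation and loss of a neutral allele) are standard diffusion-theory results, stated here as assumptions rather than quoted from the original text:

```python
from math import log

def drift_variance(p, N, t):
    # Variance in allele frequency across replicate diploid populations
    # of size N after t generations of pure drift.
    q = 1 - p
    return p * q * (1 - (1 - 1 / (2 * N)) ** t)

def time_to_fixation(p, Ne):
    # Expected generations until fixation (conditional on fixing).
    return -4 * Ne * (1 - p) * log(1 - p) / p

def time_to_loss(p, Ne):
    # Expected generations until loss (conditional on being lost).
    return -4 * Ne * (p / (1 - p)) * log(p)

print(drift_variance(0.5, 50, 1))      # ≈ 0.0025: variance builds up slowly
print(drift_variance(0.5, 50, 10**6))  # approaches p*q = 0.25 as t grows
print(time_to_fixation(0.5, 1000))     # ≈ 2772.6 generations
print(time_to_fixation(1e-9, 1000))    # → 4*Ne for a very rare allele
```

Note the symmetry at p = 0.5: fixation and loss take the same expected time, and as p → 0 the conditional fixation time approaches 4Ne generations.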
The effective population (Ne) takes into account factors such as the level of inbreeding, the stage of the lifecycle in which the population is the smallest, and the fact that some neutral genes are genetically linked to others that are under selection. The effective population size may not be the same for every gene in the same population. One forward-looking formula used for approximating the expected time before a neutral allele becomes fixed through genetic drift, according to the Wright–Fisher model, is

T_fixed = −4Ne (1 − p) ln(1 − p) / p,

where T_fixed is the expected number of generations, Ne is the effective population size, and p is the initial frequency of the given allele. The result is the number of generations expected to pass before fixation occurs for a given allele in a population with given size (Ne) and allele frequency (p). The expected time for the neutral allele to be lost through genetic drift can be calculated as

T_lost = −4Ne (p / (1 − p)) ln p.

When a mutation appears only once in a population large enough for the initial frequency to be negligible, the formulas can be simplified to

T_fixed = 4Ne

for the average number of generations expected before fixation of a neutral mutation, and

T_lost = 2 (Ne / N) ln(2N)

for the average number of generations expected before the loss of a neutral mutation in a population of actual size N.

Time to loss with both drift and mutation

The formulae above apply to an allele that is already present in a population, and which is subject to neither mutation nor natural selection. If an allele is lost by mutation much more often than it is gained by mutation, then mutation, as well as drift, may influence the time to loss. If the allele prone to mutational loss begins as fixed in the population and is lost by mutation at rate m per replication, then the expected time in generations until its loss in a haploid population is dominated by the reciprocal of the mutation rate, 1/m (the exact expression involves Euler's constant γ).
One contribution to this time is the waiting time until the first mutant destined for loss appears, with loss then occurring relatively rapidly by genetic drift; another is the time needed for deterministic loss by mutation accumulation. In both cases, the time to loss is dominated by mutation via the term 1/m, and is less affected by the effective population size.

Versus natural selection

In natural populations, genetic drift and natural selection do not act in isolation; both phenomena are always at play, together with mutation and migration. Neutral evolution is the product of both mutation and drift, not of drift alone. Similarly, even when selection overwhelms genetic drift, it can act only on the variation that mutation provides. While natural selection has a direction, guiding evolution towards heritable adaptations to the current environment, genetic drift has no direction and is guided only by the mathematics of chance. As a result, drift acts upon the genotypic frequencies within a population without regard to their phenotypic effects. In contrast, selection favors the spread of alleles whose phenotypic effects increase survival and/or reproduction of their carriers, lowers the frequencies of alleles that cause unfavorable traits, and ignores those that are neutral. The law of large numbers predicts that when the absolute number of copies of an allele is small (e.g., in small populations), the magnitude of drift on allele frequencies per generation is larger. The magnitude of drift is large enough to overwhelm selection at any allele frequency when the selection coefficient is less than 1 divided by the effective population size. Non-adaptive evolution resulting from the product of mutation and genetic drift is therefore considered to be a consequential mechanism of evolutionary change primarily within small, isolated populations.
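The threshold just described — drift overwhelms selection roughly when the selection coefficient s is below 1/Ne — can be illustrated by adding selection to a Wright–Fisher simulation. This is an illustrative Python sketch; the parameter values and the standard fitness update p' = p(1 + s)/(1 + ps) are assumptions for the demonstration, not from the original text:

```python
import random

def wf_with_selection(N, s, p0, rng):
    """Run Wright–Fisher drift with selection coefficient s favouring
    allele A until A is fixed (return 1) or lost (return 0)."""
    copies = round(2 * N * p0)
    while 0 < copies < 2 * N:
        p = copies / (2 * N)
        # Selection shifts the sampling probability in favour of A.
        p_sel = p * (1 + s) / (1 + p * s)
        copies = sum(rng.random() < p_sel for _ in range(2 * N))
    return copies // (2 * N)

rng = random.Random(1)

# Strong selection relative to drift (s >> 1/N): A essentially always fixes.
outcome_strong = wf_with_selection(N=500, s=0.1, p0=0.5, rng=rng)
print(outcome_strong)

# Weak selection (s << 1/N): the outcome stays close to the neutral 50/50 chance.
runs = [wf_with_selection(N=10, s=0.001, p0=0.5, rng=rng) for _ in range(500)]
frac = sum(runs) / len(runs)
print(frac)
```

In the weak-selection regime the fixation fraction hovers near the neutral expectation of 0.5 despite the fitness advantage, which is the sense in which drift "overwhelms" selection in small populations.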
The mathematics of genetic drift depend on the effective population size, but it is not clear how this is related to the actual number of individuals in a population. Genetic linkage to other genes that are under selection can reduce the effective population size experienced by a neutral allele. With a higher recombination rate, linkage decreases and with it this local effect on effective population size. This effect is visible in molecular data as a correlation between local recombination rate and genetic diversity, and a negative correlation between gene density and diversity at noncoding DNA regions. Stochasticity associated with linkage to other genes that are under selection is not the same as sampling error, and is sometimes known as genetic draft in order to distinguish it from genetic drift.

Low allele frequency makes alleles more vulnerable to being eliminated by random chance, even overriding the influence of natural selection. For example, while disadvantageous mutations are usually eliminated quickly within the population, new advantageous mutations are almost as vulnerable to loss through genetic drift as are neutral mutations. Not until the allele frequency for the advantageous mutation reaches a certain threshold will genetic drift have no effect.

Population bottleneck

A population bottleneck is when a population contracts to a significantly smaller size over a short period of time due to some random environmental event. In a true population bottleneck, the odds for survival of any member of the population are purely random, and are not improved by any particular inherent genetic advantage. The bottleneck can result in radical changes in allele frequencies, completely independent of selection. The impact of a population bottleneck can be sustained, even when the bottleneck is caused by a one-time event such as a natural catastrophe.
An interesting example of a bottleneck causing unusual genetic distribution is the relatively high proportion of individuals with total rod cell color blindness (achromatopsia) on Pingelap atoll in Micronesia. After a bottleneck, inbreeding increases. This increases the damage done by recessive deleterious mutations, in a process known as inbreeding depression. The worst of these mutations are selected against, leading to the loss of other alleles that are genetically linked to them, in a process of background selection. For recessive harmful mutations, this selection can be enhanced as a consequence of the bottleneck, due to genetic purging. This leads to a further loss of genetic diversity. In addition, a sustained reduction in population size increases the likelihood of further allele fluctuations from drift in generations to come.

A population's genetic variation can be greatly reduced by a bottleneck, and even beneficial adaptations may be permanently eliminated. The loss of variation leaves the surviving population vulnerable to any new selection pressures such as disease, climatic change or a shift in the available food source, because adapting in response to environmental changes requires sufficient genetic variation in the population for natural selection to take place.

There have been many known cases of population bottleneck in the recent past. Prior to the arrival of Europeans, North American prairies were habitat for millions of greater prairie chickens. In Illinois alone, their numbers plummeted from about 100 million birds in 1900 to about 50 birds in the 1990s. The declines in population resulted from hunting and habitat destruction, but a consequence has been a loss of most of the species' genetic diversity. DNA analysis comparing birds from the mid-20th century to birds in the 1990s documents a steep decline in genetic variation over just the latter few decades. Currently the greater prairie chicken is experiencing low reproductive success.
However, the genetic loss caused by bottleneck and genetic drift can increase fitness, as in Ehrlichia. Over-hunting also caused a severe population bottleneck in the northern elephant seal in the 19th century. Their resulting decline in genetic variation can be deduced by comparing it to that of the southern elephant seal, which was not hunted as aggressively.

Founder effect

The founder effect is a special case of a population bottleneck, occurring when a small group in a population splinters off from the original population and forms a new one. The random sample of alleles in the newly formed colony is expected to grossly misrepresent the original population in at least some respects. It is even possible that the number of alleles for some genes in the original population is larger than the number of gene copies in the founders, making complete representation impossible. When a newly formed colony is small, its founders can strongly affect the population's genetic make-up far into the future. A well-documented example is found in the Amish migration to Pennsylvania in 1744. Two members of the new colony shared the recessive allele for Ellis–Van Creveld syndrome. Members of the colony and their descendants tend to be religious isolates and remain relatively insular. As a result of many generations of inbreeding, Ellis–Van Creveld syndrome is now much more prevalent among the Amish than in the general population. The difference in gene frequencies between the original population and the colony may also trigger the two groups to diverge significantly over the course of many generations. As the difference, or genetic distance, increases, the two separated populations may become distinct, both genetically and phenetically, although not only genetic drift but also natural selection, gene flow, and mutation contribute to this divergence.
This potential for relatively rapid changes in the colony's gene frequency led most scientists to consider the founder effect (and by extension, genetic drift) a significant driving force in the evolution of new species. Sewall Wright was the first to attach this significance to random drift and small, newly isolated populations with his shifting balance theory of speciation. Following Wright, Ernst Mayr created many persuasive models to show that the decline in genetic variation and small population size accompanying the founder effect were critically important for new species to develop. However, there is much less support for this view today, since the hypothesis has been tested repeatedly through experimental research and the results have been equivocal at best.

History

The role of random chance in evolution was first outlined by Arend L. Hagedoorn and Anna Cornelia Hagedoorn-Vorstheuvel La Brand in 1921. They highlighted that random survival plays a key role in the loss of variation from populations. Fisher (1922) responded to this with the first, albeit marginally incorrect, mathematical treatment of the "Hagedoorn effect". Notably, he expected that many natural populations were too large (N ~ 10,000) for the effects of drift to be substantial, and thought drift would have an insignificant effect on the evolutionary process. The corrected mathematical treatment and the term "genetic drift" were later provided by a founder of population genetics, Sewall Wright. His first use of the term "drift" was in 1929, though at the time he was using it in the sense of a directed process of change, or natural selection. Random drift by means of sampling error came to be known as the "Sewall–Wright effect", though he was never entirely comfortable seeing his name given to it. Wright referred to all changes in allele frequency as either "steady drift" (e.g., selection) or "random drift" (e.g., sampling error).
"Drift" came to be adopted as a technical term in the stochastic sense exclusively. Today it is usually defined still more narrowly, in terms of sampling error, although this narrow definition is not universal. Wright wrote that the "restriction of 'random drift' or even 'drift' to only one component, the effects of accidents of sampling, tends to lead to confusion". Sewall Wright considered the process of random genetic drift by means of sampling error equivalent to that by means of inbreeding, but later work has shown them to be distinct. In the early days of the modern evolutionary synthesis, scientists were beginning to blend the new science of population genetics with Charles Darwin's theory of natural selection. Within this framework, Wright focused on the effects of inbreeding on small, relatively isolated populations. He introduced the concept of an adaptive landscape in which phenomena such as cross-breeding and genetic drift in small populations could push them away from adaptive peaks, which in turn allows natural selection to push them towards new adaptive peaks. Wright thought smaller populations were more suited for natural selection because "inbreeding was sufficiently intense to create new interaction systems through random drift but not intense enough to cause random nonadaptive fixation of genes". Wright's views on the role of genetic drift in the evolutionary scheme were controversial almost from the very beginning. One of the most vociferous and influential critics was colleague Ronald Fisher. Fisher conceded genetic drift played some role in evolution, but an insignificant one. Fisher has been accused of misunderstanding Wright's views because in his criticisms Fisher seemed to argue that Wright had rejected selection almost entirely. To Fisher, viewing the process of evolution as a long, steady, adaptive progression was the only way to explain the ever-increasing complexity from simpler forms.
But the debates have continued between the "gradualists" and those who lean more toward the Wright model of evolution where selection and drift together play an important role. In 1968, Motoo Kimura rekindled the debate with his neutral theory of molecular evolution, which claims that most of the genetic changes are caused by genetic drift acting on neutral mutations. The role of genetic drift by means of sampling error in evolution has been criticized by John H. Gillespie and William B. Provine, who argue that selection on linked sites is a more important stochastic force.
Biology and health sciences
Evolution
https://en.wikipedia.org/wiki/C%2B%2B
C++
C++ (pronounced "C plus plus" and sometimes abbreviated as CPP) is a high-level, general-purpose programming language created by Danish computer scientist Bjarne Stroustrup. First released in 1985 as an extension of the C programming language, it has since expanded significantly over time; modern C++ has object-oriented, generic, and functional features, in addition to facilities for low-level memory manipulation for systems like microcomputers or to make operating systems like Linux or Windows. It is usually implemented as a compiled language, and many vendors provide C++ compilers, including the Free Software Foundation, LLVM, Microsoft, Intel, Embarcadero, Oracle, and IBM. C++ was designed with systems programming and embedded, resource-constrained software and large systems in mind, with performance, efficiency, and flexibility of use as its design highlights. C++ has also been found useful in many other contexts, with key strengths being software infrastructure and resource-constrained applications, including desktop applications, video games, servers (e.g., e-commerce, web search, or databases), and performance-critical applications (e.g., telephone switches or space probes). C++ is standardized by the International Organization for Standardization (ISO), with the latest standard version ratified and published by ISO in October 2024 as ISO/IEC 14882:2024 (informally known as C++23). The C++ programming language was initially standardized in 1998 as ISO/IEC 14882:1998, which was then amended by the C++03, C++11, C++14, C++17, and C++20 standards. The current standard supersedes these with new features and an enlarged standard library. Before the initial standardization in 1998, C++ was developed by Stroustrup at Bell Labs since 1979 as an extension of the C language; he wanted an efficient and flexible language similar to C that also provided high-level features for program organization.
Since 2012, C++ has been on a three-year release schedule, with C++26 as the next planned standard. Despite its widespread adoption, some notable programmers have criticized the C++ language, including Linus Torvalds, Richard Stallman, Joshua Bloch, Ken Thompson, and Donald Knuth.

History

In 1979, Bjarne Stroustrup, a Danish computer scientist, began work on "C with Classes", the predecessor to C++. The motivation for creating a new language originated from Stroustrup's experience in programming for his PhD thesis. Stroustrup found that Simula had features that were very helpful for large software development, but the language was too slow for practical use, while BCPL was fast but too low-level to be suitable for large software development. When Stroustrup started working in AT&T Bell Labs, he had the problem of analyzing the UNIX kernel with respect to distributed computing. Remembering his PhD experience, Stroustrup set out to enhance the C language with Simula-like features. C was chosen because it was general-purpose, fast, portable, and widely used. In addition to C and Simula's influences, other languages influenced this new language, including ALGOL 68, Ada, CLU, and ML. Initially, Stroustrup's "C with Classes" added features to the C compiler, Cpre, including classes, derived classes, strong typing, inlining, and default arguments. In 1982, Stroustrup started to develop a successor to C with Classes, which he named "C++" (++ being the increment operator in C) after going through several other names. New features were added, including virtual functions, function name and operator overloading, references, constants, type-safe free-store memory allocation (new/delete), improved type checking, and BCPL-style single-line comments with two forward slashes (//). Furthermore, Stroustrup developed a new, standalone compiler for C++, Cfront. In 1984, Stroustrup implemented the first stream input/output library.
The idea of providing an output operator rather than a named output function was suggested by Doug McIlroy (who had previously suggested Unix pipes). In 1985, the first edition of The C++ Programming Language was released, which became the definitive reference for the language, as there was not yet an official standard. The first commercial implementation of C++ was released in October of the same year. In 1989, C++ 2.0 was released, followed by the updated second edition of The C++ Programming Language in 1991. New features in 2.0 included multiple inheritance, abstract classes, static member functions, const member functions, and protected members. In 1990, The Annotated C++ Reference Manual was published. This work became the basis for the future standard. Later feature additions included templates, exceptions, namespaces, new casts, and a Boolean type.

In 1998, C++98 was released, standardizing the language, and a minor update (C++03) was released in 2003. After C++98, C++ evolved relatively slowly until, in 2011, the C++11 standard was released, adding numerous new features, enlarging the standard library further, and providing more facilities to C++ programmers. After a minor update released in December 2014, various new additions were introduced in C++17. C++20 was finalized in February 2020; a draft was approved on 4 September 2020 and officially published on 15 December 2020.

On January 3, 2018, Stroustrup was announced as the 2018 winner of the Charles Stark Draper Prize for Engineering, "for conceptualizing and developing the C++ programming language". In December 2022, C++ ranked third on the TIOBE index, surpassing Java for the first time in the history of the index. More recently, the language has ranked second after Python, with Java third.

Etymology

According to Stroustrup, "the name signifies the evolutionary nature of the changes from C." This name is credited to Rick Mascitti (mid-1983) and was first used in December 1983.
When Mascitti was questioned informally in 1992 about the naming, he indicated that it was given in a tongue-in-cheek spirit. The name comes from C's ++ operator (which increments the value of a variable) and a common naming convention of using "+" to indicate an enhanced computer program. During C++'s development period, the language had been referred to as "new C" and "C with Classes" before acquiring its final name.

Philosophy

Throughout C++'s life, its development and evolution have been guided by a set of principles:

It must be driven by actual problems, and its features should be immediately useful in real-world programs.
Every feature should be implementable (with a reasonably obvious way to do so).
Programmers should be free to pick their own programming style, and that style should be fully supported by C++.
Allowing a useful feature is more important than preventing every possible misuse of C++.
It should provide facilities for organising programs into separate, well-defined parts, and provide facilities for combining separately developed parts.
No implicit violations of the type system (but allow explicit violations; that is, those explicitly requested by the programmer).
User-created types need to have the same support and performance as built-in types.
Unused features should not negatively impact created executables (e.g. in lower performance).
There should be no language beneath C++ (except assembly language).
C++ should work alongside other existing programming languages, rather than fostering its own separate and incompatible programming environment.
If the programmer's intent is unknown, allow the programmer to specify it by providing manual control.

Standardization

C++ is standardized by an ISO working group known as JTC1/SC22/WG21. So far, it has published seven revisions of the C++ standard and is currently working on the next revision, C++26.
In 1998, the ISO working group standardized C++ for the first time as ISO/IEC 14882:1998, which is informally known as C++98. In 2003, it published a new version of the C++ standard called ISO/IEC 14882:2003, which fixed problems identified in C++98. The next major revision of the standard was informally referred to as "C++0x", but it was not released until 2011. C++11 (14882:2011) included many additions to both the core language and the standard library. In 2014, C++14 (also known as C++1y) was released as a small extension to C++11, featuring mainly bug fixes and small improvements. The Draft International Standard ballot procedures completed in mid-August 2014. After C++14, a major revision, C++17, informally known as C++1z, was completed by the ISO C++ committee in mid-July 2017 and was approved and published in December 2017.

As part of the standardization process, ISO also publishes technical reports and specifications:

ISO/IEC TR 18015:2006 on the use of C++ in embedded systems and on performance implications of C++ language and library features,
ISO/IEC TR 19768:2007 (also known as the C++ Technical Report 1) on library extensions mostly integrated into C++11,
ISO/IEC TR 29124:2010 on special mathematical functions, integrated into C++17,
ISO/IEC TR 24733:2011 on decimal floating-point arithmetic,
ISO/IEC TS 18822:2015 on the standard filesystem library, integrated into C++17,
ISO/IEC TS 19570:2015 on parallel versions of the standard library algorithms, integrated into C++17,
ISO/IEC TS 19841:2015 on software transactional memory,
ISO/IEC TS 19568:2015 on a new set of library extensions, some of which are already integrated into C++17,
ISO/IEC TS 19217:2015 on C++ concepts, integrated into C++20,
ISO/IEC TS 19571:2016 on the library extensions for concurrency, some of which are already integrated into C++20,
ISO/IEC TS 19568:2017 on a new set of general-purpose library extensions,
ISO/IEC TS 21425:2017 on the library extensions for ranges, integrated into C++20,
ISO/IEC TS 22277:2017 on coroutines, integrated into C++20,
ISO/IEC TS 19216:2018 on the networking library,
ISO/IEC TS 21544:2018 on modules, integrated into C++20,
ISO/IEC TS 19570:2018 on a new set of library extensions for parallelism, and
ISO/IEC TS 23619:2021 on new extensions for reflective programming (reflection).

More technical specifications are in development and pending approval, including a new set of concurrency extensions.

Language

The C++ language has two main components: a direct mapping of hardware features provided primarily by the C subset, and zero-overhead abstractions based on those mappings. Stroustrup describes C++ as "a light-weight abstraction programming language [designed] for building and using efficient and elegant abstractions"; and "offering both hardware access and abstraction is the basis of C++. Doing it efficiently is what distinguishes it from other languages." C++ inherits most of C's syntax. A hello world program that conforms to the C standard is also a valid C++ hello world program. The following is Bjarne Stroustrup's version of the Hello world program, which uses the C++ Standard Library stream facility to write a message to standard output:

#include <iostream>

int main()
{
    std::cout << "Hello, world!\n";
}

Object storage

As in C, C++ supports four types of memory management: static storage duration objects, thread storage duration objects, automatic storage duration objects, and dynamic storage duration objects.

Static storage duration objects

Static storage duration objects are created before main() is entered (see exceptions below) and destroyed in reverse order of creation after main() exits. The exact order of creation is not specified by the standard (though there are some rules defined below) to allow implementations some freedom in how to organize their implementation. More formally, objects of this type have a lifespan that "shall last for the duration of the program".
Static storage duration objects are initialized in two phases. First, "static initialization" is performed, and only after all static initialization is performed, "dynamic initialization" is performed. In static initialization, all objects are first initialized with zeros; after that, all objects that have a constant initialization phase are initialized with the constant expression (i.e. variables initialized with a literal or constexpr). Though it is not specified in the standard, the static initialization phase can be completed at compile time and saved in the data partition of the executable. Dynamic initialization involves all object initialization done via a constructor or function call (unless the function is marked with constexpr, in C++11). The dynamic initialization order is defined as the order of declaration within the compilation unit (i.e. the same file). No guarantees are provided about the order of initialization between compilation units.

Thread storage duration objects

Variables of this type are very similar to static storage duration objects. The main difference is that the creation time is just before thread creation, and destruction is done after the thread has been joined.

Automatic storage duration objects

The most common variable types in C++ are local variables inside a function or block, and temporary variables. The common feature of automatic variables is that they have a lifetime limited to the scope of the variable. They are created and potentially initialized at the point of declaration (see below for details) and destroyed in the reverse order of creation when the scope is left. This is implemented by allocation on the stack.

Local variables are created as the point of execution passes the declaration point. If the variable has a constructor or initializer, this is used to define the initial state of the object. Local variables are destroyed when the local block or function in which they are declared is closed.
C++ destructors for local variables are called at the end of the object lifetime, allowing a discipline for automatic resource management termed RAII, which is widely used in C++.

Member variables are created when the parent object is created. Array members are initialized from 0 to the last member of the array in order. Member variables are destroyed when the parent object is destroyed, in the reverse order of creation; i.e., if the parent is an "automatic object", then it will be destroyed when it goes out of scope, which triggers the destruction of all its members.

Temporary variables are created as the result of expression evaluation and are destroyed when the statement containing the expression has been fully evaluated (usually at the ; at the end of a statement).

Dynamic storage duration objects

These objects have a dynamic lifespan and can be created directly with a call to new and destroyed explicitly with a call to delete. C++ also supports malloc and free, from C, but these are not compatible with new and delete. Use of new returns an address to the allocated memory. The C++ Core Guidelines advise against using new directly for creating dynamic objects, in favor of smart pointers: std::unique_ptr for single ownership and std::shared_ptr for reference-counted multiple ownership, both introduced in C++11.

Templates

C++ templates enable generic programming. C++ supports function, class, alias, and variable templates. Templates may be parameterized by types, compile-time constants, and other templates. Templates are implemented by instantiation at compile-time. To instantiate a template, compilers substitute specific arguments for a template's parameters to generate a concrete function or class instance. Some substitutions are not possible; these are eliminated by an overload resolution policy described by the phrase "Substitution failure is not an error" (SFINAE). Templates are a powerful tool that can be used for generic programming, template metaprogramming, and code optimization, but this power implies a cost.
Template use may increase object code size, because each template instantiation produces a copy of the template code: one for each set of template arguments. However, this is the same or a smaller amount of code than would be generated if the code were written by hand. This is in contrast to run-time generics seen in other languages (e.g., Java), where at compile-time the type is erased and a single template body is preserved.

Templates are different from macros: while both of these compile-time language features enable conditional compilation, templates are not restricted to lexical substitution. Templates are aware of the semantics and type system of their companion language, as well as all compile-time type definitions, and can perform high-level operations including programmatic flow control based on evaluation of strictly type-checked parameters. Macros are capable of conditional control over compilation based on predetermined criteria, but cannot instantiate new types, recurse, or perform type evaluation, and in effect are limited to pre-compilation text substitution and text inclusion/exclusion. In other words, macros can control compilation flow based on pre-defined symbols but cannot, unlike templates, independently instantiate new symbols.

Templates are a tool for static polymorphism (see below) and generic programming. In addition, templates are a compile-time mechanism in C++ that is Turing-complete, meaning that any computation expressible by a computer program can be computed, in some form, by a template metaprogram before runtime. In summary, a template is a compile-time parameterized function or class written without knowledge of the specific arguments used to instantiate it. After instantiation, the resulting code is equivalent to code written specifically for the passed arguments.
In this manner, templates provide a way to decouple generic, broadly applicable aspects of functions and classes (encoded in templates) from specific aspects (encoded in template parameters) without sacrificing performance due to abstraction.

Objects

C++ introduces object-oriented programming (OOP) features to C. It offers classes, which provide the four features commonly present in OOP (and some non-OOP) languages: abstraction, encapsulation, inheritance, and polymorphism. One distinguishing feature of C++ classes compared to classes in other programming languages is support for deterministic destructors, which in turn provide support for the Resource Acquisition Is Initialization (RAII) concept.

Encapsulation

Encapsulation is the hiding of information to ensure that data structures and operators are used as intended and to make the usage model more obvious to the developer. C++ provides the ability to define classes and functions as its primary encapsulation mechanisms. Within a class, members can be declared as either public, protected, or private to explicitly enforce encapsulation. A public member of the class is accessible to any function. A private member is accessible only to functions that are members of that class and to functions and classes explicitly granted access permission by the class ("friends"). A protected member is accessible to members of classes that inherit from the class, in addition to the class itself and any friends.

The object-oriented principle ensures the encapsulation of all and only the functions that access the internal representation of a type. C++ supports this principle via member functions and friend functions, but it does not enforce it. Programmers can declare parts or all of the representation of a type to be public, and they are allowed to make public entities that are not part of the representation of a type. Therefore, C++ supports not just object-oriented programming, but other decomposition paradigms such as modular programming.
It is generally considered good practice to make all data private or protected, and to make public only those functions that are part of a minimal interface for users of the class. This can hide the details of data implementation, allowing the designer to later fundamentally change the implementation without changing the interface in any way.

Inheritance

Inheritance allows one data type to acquire properties of other data types. Inheritance from a base class may be declared as public, protected, or private. This access specifier determines whether unrelated and derived classes can access the inherited public and protected members of the base class. Only public inheritance corresponds to what is usually meant by "inheritance". The other two forms are much less frequently used. If the access specifier is omitted, a "class" inherits privately, while a "struct" inherits publicly. Base classes may be declared as virtual; this is called virtual inheritance. Virtual inheritance ensures that only one instance of a base class exists in the inheritance graph, avoiding some of the ambiguity problems of multiple inheritance.

Multiple inheritance is a C++ feature allowing a class to be derived from more than one base class; this allows for more elaborate inheritance relationships. For example, a "Flying Cat" class can inherit from both "Cat" and "Flying Mammal". Some other languages, such as C# or Java, accomplish something similar (although more limited) by allowing inheritance of multiple interfaces while restricting the number of base classes to one (interfaces, unlike classes, provide only declarations of member functions, no implementation or member data). An interface as in C# and Java can be defined in C++ as a class containing only pure virtual functions, often known as an abstract base class or "ABC". The member functions of such an abstract base class are normally explicitly defined in the derived class, not inherited implicitly.
C++ virtual inheritance exhibits an ambiguity resolution feature called dominance.

Operators and operator overloading

C++ provides more than 35 operators, covering basic arithmetic, bit manipulation, indirection, comparisons, logical operations and others. Almost all operators can be overloaded for user-defined types, with a few notable exceptions such as member access (. and .*) and the conditional operator. The rich set of overloadable operators is central to making user-defined types in C++ seem like built-in types. Overloadable operators are also an essential part of many advanced C++ programming techniques, such as smart pointers. Overloading an operator does not change the precedence of calculations involving the operator, nor does it change the number of operands that the operator uses (any operand may however be ignored by the operator, though it will be evaluated prior to execution). Overloaded "&&" and "||" operators lose their short-circuit evaluation property.

Polymorphism

Polymorphism enables one common interface for many implementations, and for objects to act differently under different circumstances. C++ supports several kinds of static (resolved at compile-time) and dynamic (resolved at run-time) polymorphism, supported by the language features described above. Compile-time polymorphism does not allow for certain run-time decisions, while run-time polymorphism typically incurs a performance penalty.

Static polymorphism

Function overloading allows programs to declare multiple functions having the same name but with different arguments (i.e. ad hoc polymorphism). The functions are distinguished by the number or types of their formal parameters. Thus, the same function name can refer to different functions depending on the context in which it is used. The type returned by the function is not used to distinguish overloaded functions, and differing return types would result in a compile-time error message.
When declaring a function, a programmer can specify a default value for one or more parameters. Doing so allows the parameters with defaults to optionally be omitted when the function is called, in which case the default arguments will be used. When a function is called with fewer arguments than there are declared parameters, explicit arguments are matched to parameters in left-to-right order, with any unmatched parameters at the end of the parameter list being assigned their default arguments. In many cases, specifying default arguments in a single function declaration is preferable to providing overloaded function definitions with different numbers of parameters.

Templates in C++ provide a sophisticated mechanism for writing generic, polymorphic code (i.e. parametric polymorphism). In particular, through the curiously recurring template pattern, it is possible to implement a form of static polymorphism that closely mimics the syntax for overriding virtual functions. Because C++ templates are type-aware and Turing-complete, they can also be used to let the compiler resolve recursive conditionals and generate substantial programs through template metaprogramming. Contrary to some opinion, template code will not generate bulk code after compilation with the proper compiler settings.

Dynamic polymorphism

Inheritance

Variable pointers and references to a base class type in C++ can also refer to objects of any derived classes of that type. This allows arrays and other kinds of containers to hold pointers to objects of differing types (references cannot be directly held in containers). This enables dynamic (run-time) polymorphism, where the referred objects can behave differently, depending on their (actual, derived) types. C++ also provides the dynamic_cast operator, which allows code to safely attempt conversion of an object, via a base reference/pointer, to a more derived type: downcasting.
The attempt is necessary as often one does not know which derived type is referenced. (Upcasting, conversion to a more general type, can always be checked/performed at compile-time via static_cast, as ancestral classes are specified in the derived class's interface, visible to all callers.) dynamic_cast relies on run-time type information (RTTI), metadata in the program that enables differentiating types and their relationships. If a dynamic_cast to a pointer fails, the result is the nullptr constant, whereas if the destination is a reference (which cannot be null), the cast throws an exception. Objects known to be of a certain derived type can be cast to that type with static_cast, bypassing RTTI and the safe runtime type-checking of dynamic_cast, so this should be used only if the programmer is very confident the cast is, and will always be, valid.

Virtual member functions

Ordinarily, when a function in a derived class overrides a function in a base class, the function to call is determined by the type of the object. A given function is overridden when there exists no difference in the number or type of parameters between two or more definitions of that function. Hence, at compile time, it may not be possible to determine the type of the object and therefore the correct function to call, given only a base class pointer; the decision is therefore put off until runtime. This is called dynamic dispatch. Virtual member functions or methods allow the most specific implementation of the function to be called, according to the actual run-time type of the object. In C++ implementations, this is commonly done using virtual function tables. If the object type is known, this may be bypassed by prepending a fully qualified class name before the function call, but in general calls to virtual functions are resolved at run time. In addition to standard member functions, operator overloads and destructors can be virtual.
An inexact rule based on practical experience states that if any function in the class is virtual, the destructor should be as well. As the type of an object at its creation is known at compile time, constructors, and by extension copy constructors, cannot be virtual. Nonetheless, a situation may arise where a copy of an object needs to be created when a pointer to a derived object is passed as a pointer to a base object. In such a case, a common solution is to create a clone() (or similar) virtual function that creates and returns a copy of the derived class when called.

A member function can also be made "pure virtual" by appending = 0 after the closing parenthesis and before the semicolon. A class containing a pure virtual function is called an abstract class. Objects cannot be created from an abstract class; they can only be derived from. Any derived class inherits the virtual function as pure and must provide a non-pure definition of it (and all other pure virtual functions) before objects of the derived class can be created. A program that attempts to create an object of a class with a pure virtual member function or inherited pure virtual member function is ill-formed.

Lambda expressions

C++ provides support for anonymous functions, also known as lambda expressions, with the following form:

[capture](parameters) -> return_type { function_body }

Since C++20, lambda expressions can also have an explicit template parameter list:

[capture]<template_parameters>(parameters) -> return_type { function_body }

If the lambda takes no parameters, and no return type or other specifiers are used, the () can be omitted; that is,

[capture] { function_body }

The return type of a lambda expression can be automatically inferred, if possible; e.g.:

[](int x, int y) { return x + y; } // inferred
[](int x, int y) -> int { return x + y; } // explicit

The [capture] list supports the definition of closures.
Such lambda expressions are defined in the standard as syntactic sugar for an unnamed function object.

Exception handling

Exception handling is used to communicate the existence of a runtime problem or error from where it was detected to where the issue can be handled. It permits this to be done in a uniform manner and separately from the main code, while detecting all errors. Should an error occur, an exception is thrown (raised), which is then caught by the nearest suitable exception handler. The exception causes the current scope to be exited, and also each outer scope (propagation) until a suitable handler is found, calling in turn the destructors of any objects in these exited scopes. At the same time, an exception is presented as an object carrying the data about the detected problem. Some C++ style guides, such as Google's, LLVM's, and Qt's, forbid the usage of exceptions.

The exception-causing code is placed inside a try block. The exceptions are handled in separate catch blocks (the handlers); each try block can have multiple exception handlers, as is visible in the example below.

#include <iostream>
#include <vector>
#include <stdexcept>

int main() {
    try {
        std::vector<int> vec{3, 4, 3, 1};
        int i{vec.at(4)}; // Throws an exception, std::out_of_range (indexing for vec is from 0-3, not 1-4)
    }
    // An exception handler; catches std::out_of_range, which is thrown by vec.at(4)
    catch (const std::out_of_range &e) {
        std::cerr << "Accessing a non-existent element: " << e.what() << '\n';
    }
    // To catch any other standard library exceptions (they derive from std::exception)
    catch (const std::exception &e) {
        std::cerr << "Exception thrown: " << e.what() << '\n';
    }
    // Catch any unrecognised exceptions (i.e. those which don't derive from std::exception)
    catch (...) {
        std::cerr << "Some fatal error\n";
    }
}

It is also possible to raise exceptions purposefully, using the throw keyword; these exceptions are handled in the usual way.
In some cases, exceptions cannot be used due to technical reasons. One such example is a critical component of an embedded system, where every operation must be guaranteed to complete within a specified amount of time. This cannot be determined with exceptions, as no tools exist to determine the maximum time required for an exception to be handled. Unlike signal handling, in which the handling function is called from the point of failure, exception handling exits the current scope before the catch block is entered, which may be located in the current function or any of the previous function calls currently on the stack.

Enumerated types

Standard library

The C++ standard consists of two parts: the core language and the standard library. C++ programmers expect the latter on every major implementation of C++; it includes aggregate types (vectors, lists, maps, sets, queues, stacks, arrays, tuples), algorithms (find, for_each, binary_search, random_shuffle, etc.), input/output facilities (iostream, for reading from and writing to the console and files), a filesystem library, localisation support, smart pointers for automatic memory management, regular expression support, a multi-threading library, atomics support (allowing a variable to be read or written to by at most one thread at a time without any external synchronisation), time utilities (measurement, getting current time, etc.), a system for converting error reporting that does not use C++ exceptions into C++ exceptions, a random number generator, and a slightly modified version of the C standard library (to make it comply with the C++ type system).

A large part of the C++ library is based on the Standard Template Library (STL). Useful tools provided by the STL include containers as the collections of objects (such as vectors and lists), iterators that provide array-like access to containers, and algorithms that perform operations such as searching and sorting.
Furthermore, (multi)maps (associative arrays) and (multi)sets are provided, all of which export compatible interfaces. Therefore, using templates it is possible to write generic algorithms that work with any container or on any sequence defined by iterators. As in C, the features of the library are accessed by using the #include directive to include a standard header. The C++ Standard Library provides 105 standard headers, of which 27 are deprecated.

The standard incorporates the STL, which was originally designed by Alexander Stepanov, who experimented with generic algorithms and containers for many years. When he started with C++, he finally found a language where it was possible to create generic algorithms (e.g., STL sort) that perform even better than, for example, the C standard library qsort, thanks to C++ features like inlining and compile-time binding instead of function pointers. The standard does not refer to it as "STL", as it is merely a part of the standard library, but the term is still widely used to distinguish it from the rest of the standard library (input/output streams, internationalization, diagnostics, the C library subset, etc.). Most C++ compilers, and all major ones, provide a standards-conforming implementation of the C++ standard library.

C++ Core Guidelines

The C++ Core Guidelines are an initiative led by Bjarne Stroustrup, the inventor of C++, and Herb Sutter, the convener and chair of the C++ ISO Working Group, to help programmers write 'Modern C++' by using best practices for the language standards C++11 and newer, and to help developers of compilers and static checking tools to create rules for catching bad programming practices. The main aim is to efficiently and consistently write type- and resource-safe C++. The Core Guidelines were announced in the opening keynote at CppCon 2015.
The Guidelines are accompanied by the Guideline Support Library (GSL), a header-only library of types and functions to implement the Core Guidelines, and static checker tools for enforcing Guideline rules.

Compatibility

To give compiler vendors greater freedom, the C++ standards committee decided not to dictate the implementation of name mangling, exception handling, and other implementation-specific features. The downside of this decision is that object code produced by different compilers is expected to be incompatible. There are, however, attempts to standardize compilers for particular machines or operating systems. For example, the Itanium C++ ABI is processor-independent (despite its name) and is implemented by GCC and Clang.

With C

C++ is often considered to be a superset of C, but this is not strictly true. Most C code can easily be made to compile correctly in C++, but there are a few differences that cause some valid C code to be invalid or behave differently in C++. For example, C allows implicit conversion from void* to other pointer types, but C++ does not (for type safety reasons). Also, C++ defines many new keywords, such as new and class, which may be used as identifiers (for example, variable names) in a C program.

Some incompatibilities have been removed by the 1999 revision of the C standard (C99), which now supports C++ features such as line comments (//) and declarations mixed with code. On the other hand, C99 introduced a number of new features that C++ did not support, or that were incompatible or redundant in C++, such as variable-length arrays, native complex-number types (however, the std::complex class in the C++ standard library provides similar functionality, although not code-compatible), designated initializers, compound literals, and the restrict keyword. Some of the C99-introduced features were included in the subsequent version of the C++ standard, C++11 (out of those which were not redundant).
However, the C++11 standard introduces new incompatibilities, such as disallowing assignment of a string literal to a character pointer, which remains valid C. To intermix C and C++ code, any function declaration or definition that is to be called from or used in both C and C++ must be declared with C linkage by placing it within an extern "C" {/*...*/} block. Such a function may not rely on features depending on name mangling (i.e., function overloading).
Technology
Programming languages
null
72276
https://en.wikipedia.org/wiki/Scarabaeidae
Scarabaeidae
The family Scarabaeidae, as currently defined, consists of over 35,000 species of beetles worldwide; they are often called scarabs or scarab beetles. The classification of this family has undergone significant change. Several groups formerly treated as subfamilies have been elevated to family rank (e.g., Bolboceratidae, Geotrupidae, Glaresidae, Glaphyridae, Hybosoridae, Ochodaeidae, and Pleocomidae), and some reduced to lower ranks. The subfamilies listed in this article are in accordance with those in Catalog of Life (2023). Description Scarabs are stout-bodied beetles, many with bright metallic colours, measuring between . They have distinctive, clubbed antennae composed of plates called lamellae that can be compressed into a ball or fanned out like leaves to sense odours. Many species are fossorial, with legs adapted for digging. In some groups males (and sometimes females) have prominent horns on the head and/or pronotum to fight over mates or resources. The largest fossil scarabaeid was Oryctoantiquus borealis with a length of . The C-shaped larvae, called grubs, are pale yellow or white. Most adult beetles are nocturnal, although the flower chafers (Cetoniinae) and many leaf chafers (Rutelinae) are active during the day. The grubs mostly live underground or under debris, so are not exposed to sunlight. Many scarabs are scavengers that recycle dung, carrion, or decaying plant material. Others, such as the Japanese beetle, are plant-eaters, wreaking havoc on various crops and vegetation. Some of the well-known beetles from the Scarabaeidae are Japanese beetles, dung beetles, June beetles, rose chafers (Australian, European, and North American), rhinoceros beetles, Hercules beetles and Goliath beetles. Several members of this family have structurally coloured shells which act as left-handed circular polarisers; this was the first-discovered example of circular polarization in nature. 
Ancient Egypt In Ancient Egypt, the dung beetle now known as Scarabaeus sacer (formerly Ateuchus sacer) was revered as sacred. Egyptian amulets representing the sacred scarab beetles were traded throughout the Mediterranean world.
Biology and health sciences
Beetles (Coleoptera)
Animals
72325
https://en.wikipedia.org/wiki/Decay%20energy
Decay energy
The decay energy is the energy change of a nucleus having undergone a radioactive decay. Radioactive decay is the process in which an unstable atomic nucleus loses energy by emitting ionizing particles and radiation. This decay, or loss of energy, results in an atom of one type (called the parent nuclide) transforming to an atom of a different type (called the daughter nuclide). Decay calculation The energy difference of the reactants is often written as Q: Q = (rest mass of parent − rest mass of decay products)·c². Decay energy is usually quoted in terms of the energy units MeV (million electronvolts) or keV (thousand electronvolts). Types of radioactive decay include gamma decay, beta decay (in which the decay energy is divided between the emitted electron and the neutrino, which is emitted at the same time), and alpha decay. The decay energy corresponds to the mass difference Δm between the parent and the daughter atom and particles; it is equal to the energy of radiation E = Δm·c². If A is the radioactive activity, i.e. the number of transforming atoms per time, and M the molar mass, then the radiation power P is P = A·E, or, per gram of material, P = E·(N/M)·(ln 2/T). Example: 60Co decays into 60Ni. The mass difference Δm is 0.003 u. The radiated energy is approximately 2.8 MeV. The molar mass is 59.93 g/mol. The half-life T of 5.27 years corresponds to the activity A = N·ln 2/T, where N is the number of atoms per mol, and T is the half-life. Taking care of the units, the radiation power for 60Co is 17.9 W/g. Radiation power in W/g for several isotopes: 60Co: 17.9; 238Pu: 0.57; 137Cs: 0.6; 241Am: 0.1; 210Po: 140 (T = 136 d); 90Sr: 0.9; 226Ra: 0.02. For use in radioisotope thermoelectric generators (RTGs), high decay energy combined with a long half-life is desirable. To reduce the cost and weight of radiation shielding, sources that do not emit strong gamma radiation are preferred. This table gives an indication of why plutonium-238, despite its enormous cost, with its roughly eighty-year half-life and low gamma emissions, has become the RTG nuclide of choice.
Strontium-90 performs worse than plutonium-238 on almost all measures, being shorter-lived, a beta emitter rather than an easily shielded alpha emitter, and releasing significant gamma radiation when its daughter nuclide decays; but as it is a high-yield product of nuclear fission and easy to extract chemically from other fission products, strontium titanate based RTGs were in widespread use for remote locations during much of the 20th century. Cobalt-60, while widely used for purposes such as food irradiation, is not a practicable RTG isotope, as most of its decay energy is released as gamma rays, requiring substantial shielding. Furthermore, its five-year half-life is too short for many applications.
Physical sciences
Nuclear physics
Physics
72339
https://en.wikipedia.org/wiki/Blackberry
Blackberry
The blackberry is an edible fruit produced by many species in the genus Rubus in the family Rosaceae, hybrids among these species within the subgenus Rubus, and hybrids between the subgenera Rubus and Idaeobatus. The taxonomy of blackberries has historically been confused because of hybridization and apomixis, so that species have often been grouped together and called species aggregates. Blackberry fruit production is abundant, with high annual yields possible, making this plant commercially attractive. Rubus armeniacus ("Himalayan" blackberry) is considered a noxious weed and invasive species in many regions of the Pacific Northwest of Canada and the United States, where it grows out of control in urban and suburban parks and woodlands. Description What distinguishes the blackberry from its raspberry relatives is whether or not the torus (receptacle or stem) "picks with" (i.e., stays with) the fruit. When picking a blackberry fruit, the torus stays with the fruit. With a raspberry, the torus remains on the plant, leaving a hollow core in the raspberry fruit. The term bramble, a word referring to any impenetrable thicket, has in some circles traditionally been applied specifically to the blackberry or its products, though in the United States it applies to all members of the genus Rubus. In the western US, the term caneberry is used to refer to blackberries and raspberries as a group rather than the term bramble. Briar or brier may be used to refer to the dense vines of the plant, though this name is used for other thorny thickets (such as Smilax) as well. The usually black fruit is not a berry in the botanical sense, as it is botanically an aggregate fruit, composed of small drupelets. It is a widespread and well-known group of over 375 species, many of which are closely related apomictic microspecies native throughout Europe, northwestern Africa, temperate western and central Asia and North and South America.
Plants Blackberries are perennial plants bearing biennial stems (called canes) from their roots. In its first year, a new stem, the primocane, reaches a full length of about trailing on the ground and bearing large palmately compound leaves with 5–7 leaflets; it does not produce any flowers. In its second year, the cane is a floricane with a non-growing stem. The lateral buds open to produce flowering laterals. First- and second-year shoots produce short, curved, sharp thorns. Thornless cultivars have been developed during the early 21st century. Unmanaged plants tend to aggregate in a dense tangle of stems and branches, which can be controlled in gardens or farms using trellises. Blackberry shrubs can tolerate poor soils, spreading readily in wasteland, ditches, and roadsides. The flowers bloom in late spring and early summer on the tips of branches. Each flower is about in diameter, with five white-pink petals. The fruit only develops around ovules fertilized by the male gamete from a pollen grain. The most likely cause of undeveloped ovules is inadequate pollinator visits. Incomplete drupelet development can signal infection with raspberry bushy dwarf virus. Genetics The locus controlling primocane fruiting was mapped to the F locus on LG7, whereas thorns/thornlessness was mapped to LG4. Better understanding of the genetics is useful for genetic screening of crossbreds, and for genetic engineering purposes. Ecology Blackberry leaves are food for certain caterpillars; some grazing mammals, especially deer, are also very fond of the leaves. Caterpillars of the concealer moth Alabonia geoffrella have been found feeding inside dead blackberry shoots. When mature, the berries are eaten and their seeds dispersed by mammals, such as the red fox, American black bear and the Eurasian badger, as well as by small birds. Blackberries grow wild throughout most of Europe.
They are an important element in the ecology of many countries, and harvesting the berries is a common pastime. However, their vigorous growth and tendency to grow unchecked if not managed correctly mean that the plants are also considered a weed, sending down roots from branches that touch the ground, and sending up suckers from the roots. In some parts of the world, such as in Australia, Chile, New Zealand, and the Pacific Northwest of North America, some blackberry species, particularly Rubus armeniacus (Himalayan blackberry) and Rubus laciniatus (evergreen blackberry), are naturalized and considered an invasive species and a noxious weed. Blackberry fruits are red when unripe, leading to an old expression that "blackberries are red when they're green". Cultivation Worldwide, Mexico is the leading producer of blackberries, with nearly the entire crop being produced for export into the off-season fresh markets in North America and Europe. Until 2018, the Mexican market was almost entirely based on the cultivar 'Tupy' (often spelled 'Tupi', but the EMBRAPA program in Brazil from which it was released prefers the 'Tupy' spelling), but Tupy fell out of favor in some Mexican growing regions. In the US, Oregon was the leading commercial blackberry producer as of 2017. Numerous cultivars have been selected for commercial and amateur cultivation in Europe and the United States. Since the many species form hybrids easily, there are numerous cultivars with more than one species in their ancestry. History Modern hybridization and cultivar development took place mostly in the US. In 1880, a hybrid blackberry-raspberry named the loganberry was developed in Santa Cruz, California, by an American judge and horticulturalist, James Harvey Logan. One of the first thornless varieties was developed in 1921, but the berries lost much of their flavor.
Common thornless cultivars developed from the 1990s to the early 21st century by the US Department of Agriculture enabled efficient machine-harvesting, higher yields, larger and firmer fruit, and improved flavor, including the Triple Crown, Black Diamond, Black Pearl, and Nightfall, a marionberry. Hybrids 'Marion' (marketed as "marionberry") is an important cultivar that was selected from seedlings from a cross between 'Chehalem' and 'Olallie' (commonly called "Olallieberry") berries. 'Olallie' in turn is a cross between loganberry and youngberry. 'Marion', 'Chehalem' and 'Olallie' are just three of many trailing blackberry cultivars developed by the United States Department of Agriculture Agricultural Research Service (USDA-ARS) blackberry breeding program at Oregon State University in Corvallis, Oregon. The most recent cultivars released from this program are the thornless cultivars 'Black Diamond', 'Black Pearl', and 'Nightfall' as well as the early-ripening 'Obsidian' and 'Metolius'. 'Black Diamond' is now the leading cultivar being planted in the Pacific Northwest. Some of the other cultivars from this program are 'Newberry', 'Waldo', 'Siskiyou', 'Black Butte', 'Kotata', 'Pacific', and 'Cascade'. Varieties with good commercial characteristics developed in Arkansas are grown in nurseries in Oklahoma. Such blackberries are easy to grow, and may produce fruit for a decade or more. These varieties have diverse flavors varying from sweet to tart. Trailing Trailing blackberries are vigorous and crown-forming, require a trellis for support, and are less cold-hardy than the erect or semi-erect blackberries. In addition to the Pacific Northwest, these types do well in similar climates, such as the United Kingdom, New Zealand, Chile, and the Mediterranean countries. Thornless Semi-erect, prickle-free blackberries were first developed at the John Innes Centre in Norwich, UK, and subsequently by the USDA-ARS in Beltsville, Maryland. 
These are crown forming and very vigorous and need a trellis for support. Cultivars include 'Black Satin', 'Chester Thornless', 'Dirksen Thornless', 'Hull Thornless', 'Loch Maree', 'Loch Ness', 'Loch Tay', 'Merton Thornless', 'Smoothstem', and 'Triple Crown'. 'Loch Ness' and 'Loch Tay' have gained the RHS's Award of Garden Merit. The cultivar 'Cacanska Bestrna' (also called 'Cacak Thornless') has been developed in Serbia and has been planted on many thousands of hectares there. Erect The University of Arkansas has developed cultivars of erect blackberries. These types are less vigorous than the semi-erect types and produce new canes from root initials (therefore they spread underground like raspberries). There are prickly and prickle-free cultivars from this program, including 'Navaho', 'Ouachita', 'Cherokee', 'Apache', 'Arapaho', and 'Kiowa'. They are also responsible for developing the primocane fruiting blackberries such as 'Prime-Jan' and 'Prime-Jim'. Primocane In raspberries, these types are called primocane fruiting, fall fruiting, or everbearing. 'Prime-Jim' and 'Prime-Jan' were released in 2004 by the University of Arkansas and are the first cultivars of primocane fruiting blackberry. They grow much like the other erect cultivars described above; however, the canes that emerge in the spring will flower in midsummer and fruit in late summer or fall. The fall crop has its highest quality when it ripens in cool mild climate such as in California or the Pacific Northwest. 'Illini Hardy', a semi-erect prickly cultivar introduced by the University of Illinois, is cane hardy in zone 5, where blackberry production has traditionally been problematic, since canes often failed to survive the winter. Mexico and Chile Blackberry production in Mexico expanded considerably in the early 21st century. 
In 2017, Mexico had 97% of the market share for fresh blackberries imported into the United States, while Chile had 61% of the market share for American imports of frozen blackberries. While once based on the cultivar 'Brazos', an old erect blackberry cultivar developed in Texas in 1959, the Mexican industry is now dominated by the Brazilian 'Tupy' released in the 1990s. The 'Tupy' has the erect blackberry 'Comanche', and a "wild Uruguayan blackberry" as parents. Since there are no native blackberries in Uruguay, the suspicion is that the widely grown 'Boysenberry' is the male parent. To produce these blackberries in regions of Mexico where there is no winter chilling to stimulate flower bud development, chemical defoliation and application of growth regulators are used to bring the plants into bloom. Diseases and pests Because blackberries belong to the same genus as raspberries, they share the same diseases, including anthracnose (a type of canker), which can cause the berry to have uneven ripening. Sap flow may also be slowed. They also share the same remedies, including the Bordeaux mixture, a combination of lime, water and copper(II) sulfate. The rows between blackberry plants must be free of weeds, blackberry suckers and grasses, which may lead to pests or diseases. Fruit growers are selective when planting blackberry bushes because wild blackberries may be infected, and gardeners are recommended to purchase only certified disease-free plants. The spotted-wing drosophila, Drosophila suzukii, is a serious pest of blackberries. Unlike its vinegar fly relatives, which are primarily attracted to rotting or fermented fruit, D. suzukii attacks fresh, ripe fruit by laying eggs under the soft skin. The larvae hatch and grow in the fruit, destroying the fruit's commercial value. Another pest is Amphorophora rubi, known as the blackberry aphid, which eats not just blackberries but raspberries as well. 
Byturus tomentosus (raspberry beetle), Lampronia corticella (raspberry moth) and Anthonomus rubi (strawberry blossom weevil) are also known to infest blackberries. Uses Nutrients Raw blackberries are 88% water, 10% carbohydrates, 1% protein, and 0.5% fat. In a reference amount, raw cultivated blackberries supply 43 calories and are a rich source (20% or more of the Daily Value, DV) of dietary fiber, manganese (31% DV), vitamin C (25% DV), and vitamin K (19% DV). Seed composition Blackberries contain numerous large seeds that are not always preferred by consumers. The seeds contain oil rich in omega-3 (alpha-linolenic acid) and omega-6 (linoleic acid) fats as well as protein, dietary fiber, carotenoids, ellagitannins, and ellagic acid. Culinary use The ripe fruit is commonly used in desserts, jams, jelly, wine and liqueurs. It may be mixed with other berries and fruits for pies and crumbles. Phytochemical research Blackberries contain numerous phytochemicals including polyphenols, flavonoids, anthocyanins, salicylic acid, ellagic acid, and fiber. Anthocyanins in blackberries are responsible for their rich dark color. One report placed blackberries at the top of more than 1,000 polyphenol-rich foods consumed in the United States, but this concept of a health benefit from consuming dark-colored foods like blackberries remains scientifically unverified and not accepted for health claims on food labels. Historical uses One of the earliest known instances of blackberry consumption comes from the remains of the Haraldskær Woman, the naturally preserved bog body of a Danish woman dating from approximately 2,500 years ago. Forensic evidence found blackberries in her stomach contents, among other foods. The use of blackberries to make wines and cordials was documented in the London Pharmacopoeia in 1696. In the culinary world, blackberries have a long history of use alongside other fruits to make pies, jellies and jams.
Blackberry plants were used for traditional medicine by Greeks, other European peoples, and aboriginal Americans. A 1771 document described brewing blackberry leaves, stem, and bark for stomach ulcers. Blackberry fruit, leaves, and stems have been used to dye fabrics and hair. Native Americans have even been known to use the stems to make rope. The shrubs have also been used for barriers around buildings, crops and livestock. The wild plants have sharp, thick prickles, which offered some protection against enemies and large animals. In culture Folklore in the United Kingdom and Ireland tells that blackberries should not be picked after Old Michaelmas Day (11 October) as the devil (or a Púca) has made them unfit to eat by stepping, spitting or fouling on them. There is some value in this legend as autumn's wetter and cooler weather often allows the fruit to become infected by various molds such as Botryotinia which give the fruit an unpleasant look and may be toxic. According to some traditions, a blackberry's deep purple color represents Jesus's blood and the crown of thorns was made of brambles, although other thorny plants, such as Crataegus (hawthorn) and Euphorbia milii (crown of thorns plant), have been proposed as the material for the crown.
Biology and health sciences
Rosales
null
72381
https://en.wikipedia.org/wiki/Stag%20beetle
Stag beetle
Stag beetles comprise the family Lucanidae. The family contains about 1,200 species of beetles in four subfamilies. Some species grow to over , but most to about . Overview The English name is derived from the large and distinctive mandibles found on the males of most species, which resemble the antlers of stags. A well-known species in much of Europe is Lucanus cervus, referred to in some European countries (including the United Kingdom) as the stag beetle; it is the largest terrestrial insect in Europe. Pliny the Elder noted that Nigidius called the beetle lucanus after the Italian region of Lucania where they were used as amulets. The scientific name of Lucanus cervus adds cervus, deer. Male stag beetles are known for their oversize mandibles used to wrestle each other for favoured mating sites in a way that parallels the way stags fight over females. Fights may also be over food, such as tree sap and decaying fruits. Despite their often fearsome appearance, they are not normally aggressive to humans. During a battle between two males, the main objective is to dislodge the opponent's tarsal claws with the mandibles, thus disrupting its balance. Because their mandibles can exceed their own body size, stag beetles are generally slow, inefficient runners, and typically fly from one location to another. Female stag beetles are usually smaller than the males, with smaller mandibles that are much more powerful than the males'. As larvae, females are distinguished by their cream-coloured, fat ovaries visible through the skin around two-thirds of the way down their back. The larvae feed for several years on rotting wood, growing through three larval stages until eventually pupating inside a pupal cell constructed from surrounding wood pieces and soil particles. In the final larval stage, "L3", the surviving grubs of larger species, such as Prosopocoilus giraffa, may be the size of a human finger.
In England’s New Forest, it was once believed that the stag beetle, dubbed the "devil's imp", was sent to do some evil to the corn crops. The superstition led to stoning the insects on sight, as observed by a writer in the
Biology and health sciences
Beetles (Coleoptera)
Animals
72465
https://en.wikipedia.org/wiki/Tributary
Tributary
A tributary, or an affluent, is a stream or river that flows into a larger stream (main stem or "parent"), river, or a lake. A tributary does not flow directly into a sea or ocean. Tributaries, and the main stem river into which they flow, drain the surrounding drainage basin of its surface water and groundwater, leading the water out into an ocean. The Irtysh is a chief tributary of the Ob river and is also the longest tributary river in the world with a length of . The Madeira River is the largest tributary river by volume in the world with an average discharge of . A confluence, where two or more bodies of water meet, usually refers to the joining of tributaries. The opposite to a tributary is a distributary, a river or stream that branches off from and flows away from the main stream. Distributaries are most often found in river deltas. Terminology Right tributary, or right-bank tributary, and left tributary, or left-bank tributary, describe the orientation of the tributary relative to the flow of the main stem river. These terms are defined from the perspective of looking downstream, that is, facing the direction the water current of the main stem is going. In a navigational context, if one were floating on a raft or other vessel in the main stream, this would be the side the tributary enters from as one floats past; alternately, if one were floating down the tributary, the main stream meets it on the opposite bank of the tributary. This information may be used to avoid turbulent water by moving towards the opposite bank before approaching the confluence. An early tributary is a tributary that joins the main stem river closer to its source than its mouth, that is, before the river's midpoint; a late tributary joins the main stem further downstream, closer to its mouth than to its source, that is, after the midpoint. In the United States, where tributaries sometimes have the same name as the river into which they feed, they are called forks. 
These are typically designated by compass direction. For example, the American River in California receives flow from its North, Middle, and South forks. The Chicago River's North Branch has the East, West, and Middle Fork; the South Branch has its South Fork, and used to have a West Fork as well (now filled in). Forks are sometimes designated as right or left. Here, the handedness is from the point of view of an observer facing upstream. For instance, Steer Creek has a left tributary which is called Right Fork Steer Creek. These naming conventions are reflective of the circumstances of a particular river's identification and charting: people living along the banks of a river, with a name known to them, may then float down the river in exploration, and each tributary joining it as they pass by appears as a new river, to be given its own name, perhaps one already known to the people who live upon its banks. Conversely, explorers approaching a new land from the sea encounter its rivers at their mouths, where they name them on their charts, then, following a river upstream, encounter each tributary as a forking of the stream to the right and to the left, which then appear on their charts as such; or the streams are seen to diverge by the cardinal direction (north, south, east, or west) in which they proceed upstream, sometimes a third stream entering between two others is designated the middle fork; or the streams are distinguished by the relative height of one to the other, as one stream descending over a cataract into another becomes the upper fork, and the one it descends into, the lower; or by relative volume: the smaller stream designated the little fork, the larger either retaining its name unmodified, or receives the designation big. Ordering and enumeration Tributaries are sometimes listed starting with those nearest to the source of the river and ending with those nearest to the mouth of the river. 
The Strahler stream order examines the arrangement of tributaries in a hierarchy of first, second, third and higher orders, with first-order tributaries typically being the smallest. For example, a second-order tributary results from two or more first-order tributaries combining. Another method is to list tributaries from mouth to source, in the form of a tree data structure.
Physical sciences
Hydrology
Earth science
72467
https://en.wikipedia.org/wiki/Distributary
Distributary
A distributary, or distributary channel, is a stream channel that branches off from and flows away from a main stream channel. It is the opposite of a tributary, a stream that flows into another stream or river. Distributaries are a result of river bifurcation and are often found where a river approaches a lake or an ocean and divides into distributary networks; as such they are a common feature of river deltas. They can also occur inland, on alluvial fans, or where a tributary stream bifurcates as it nears its confluence with a larger stream. In some cases, a minor distributary can divert so much water from the main channel that it can later become the main route. Related terms Common terms to name individual river distributaries in English-speaking countries are arm and channel. These terms may refer to a distributary that does not rejoin the channel from which it has branched (e.g., the North, Middle, and South Arms of the Fraser River, or the West Channel of the Mackenzie River), or to one that does (e.g. Annacis Channel and Annieville Channel of the Fraser River, separated by Annacis Island). In Australia, the term anabranch is used to refer to a distributary that diverts from the main course of the river and rejoins it later. In North America such a branching river is called a braided river. North America In Louisiana, the Atchafalaya River is an important distributary of the Mississippi River. Because the Atchafalaya takes a steeper route to the Gulf of Mexico than does the Mississippi, over several decades the Atchafalaya has captured more and more of the Mississippi's flow, after the Mississippi meandered into the Red River of the South. The Old River Control Structure, a dam which regulates the outflow from the Mississippi into the Atchafalaya, was completed by the Army Corps of Engineers in 1963. The dam is intended to prevent the Atchafalaya from capturing the main flow of the Mississippi and stranding the ports of Baton Rouge and New Orleans.
In British Columbia, Canada, the Fraser River has numerous sloughs and side-channels which may be defined as distributaries. This river's final stretch has three main distributaries: the North Arm and the South Arm, and a few smaller ones adjoining them. Examples of inland distributaries: Teton River—a tributary of Henrys Fork in Idaho—splits into two distributary channels, the North Fork and South Fork, which join Henrys Fork miles apart. Parting of the Waters National Landmark within Wyoming's Teton Wilderness on the Continental Divide where North Two Ocean Creek splits into two distributaries, Pacific Creek and Atlantic Creek, which ultimately flow into their respective oceans. Kings River (California) has deposited a large alluvial fan at the transition from its canyon in the Sierra Nevada mountains to the flat Central Valley. Distributaries flow north into the Pacific Ocean via the San Joaquin River and south into an endorheic basin surrounding Tulare Lake. The Qu'Appelle River, in Saskatchewan and Manitoba, is a distributary of the South Saskatchewan River. Its flow is controlled by the Qu'Appelle River Dam. This dam forms the southern arm of Lake Diefenbaker. South America The Casiquiare canal is an inland distributary of the upper Orinoco, which flows southward into the Rio Negro, forming a unique natural canal between the Orinoco and Amazon river systems. It is the largest river on the planet that links two major river systems. Europe The IJssel, the Waal and the Nederrijn (Lower Rhine) are the three principal distributaries of the Rhine. These are formed by two separate bifurcations within the Rhine–Meuse–Scheldt delta. The Akhtuba River is a major distributary of the Volga. The bifurcation occurs close to, but before, the Volga Delta. The Tärendö River in northern Sweden is an inland distributary, far from the mouth of the river. It begins at the Torne River and ends at the Kalix River. 
The Little Danube in Slovakia branches off from the Danube near Bratislava, and flows into the Vah before rejoining the main river near Komárno. The area in the middle is the largest freshwater island in Europe. The Abbey River, Limerick, in Ireland is a distributary arm of the River Shannon. It rejoins the Shannon to form an island upon which King John's Castle is built. Asia Eastern Asia The Huai River in China splits into three streams. The main stream passes through the Sanhe Sluice, goes out of the Sanhe river, and enters the Yangtze River through Baoying Lake and Gaoyou Lake. On the east bank of Hongze Lake, another stream goes out of Gaoliangjian Gate and enters the Yellow Sea at the port of Bidan through Subei Guan'gai Zongqu, the main irrigation channel of Northern Jiangsu); its total length is 168 kilometers. The third stream leaves the Erhe lock on the northeast bank of Hongze Lake, passes the Huaishuhe River to the north of Lianyungang city, and flows into Haizhou Bay through the Hongkou. Southeast Asia The Tha Chin River and Noi River are distributaries of the Chao Phraya River in Thailand, splitting off from the latter about 200 kilometers upstream from the Bay of Bangkok. The Brantas River in East Java, Indonesia, branches off into two distributaries, Mas River, also known as Surabaya River, and Porong River. The Hong River in Northern Vietnam has notable distributaries such as the Day River, Ninh Co River and the Luoc River. All of these rivers empty into the Gulf of Tonkin. Indian Subcontinent Kollidam River is a distributary of the Kaveri River. Himalayan rivers including Ganges, Brahmaputra and Indus plus many tributaries form inland distributaries over vast alluvial fans as they transition from the mountain region to the flat Indo-Gangetic Plain. These areas are highly flood-prone, for example the 2008 Bihar flood on the Kosi River. Padma River is the main distributary of the Ganges in Bangladesh. 
Hooghly River is a Ganges distributary that flows through India, whereas most of the Ganges-Brahmaputra complex enters the sea through Bangladesh. Nara River is a distributary of the Indus River. Africa The Nile River has two distributaries, the Rosetta and the Damietta branches. According to Pliny the Elder it had in ancient times seven distributaries (east to west): the Pelusiac, the Tanitic, the Mendesian, the Phatnitic, the Sebennytic, the Bolbitine, and the Canopic. See History of the Nile Delta. The Okavango River ends in many distributaries in a large inland delta called the Okavango Delta. It is an example of distributaries that do not flow into any other body of water. Oceania Australia A number of the rivers that flow inland from Australia's Great Dividing Range form distributaries, most of which flow only intermittently during times of high river levels and end in shallow lakes or simply peter out in the deserts. Yarriambiack Creek, which flows from the Wimmera River into Lake Coorong, and Tyrrell Creek, which flows from the Avoca River into Lake Tyrrell, are two distributaries in Victoria. The Narran River flows from the Balonne River in Queensland into Narran Lake in New South Wales. Papua New Guinea Many of Papua New Guinea's major rivers flow into the Gulf of Papua through marshy, low-lying country, allowing for wide, many-branched deltas. These include the Fly River, which splits into three major and several minor rivers close to its mouth. The Bamu River splits into several channels close to its mouth, among them the Bebea, Bina, Dibiri, and Aramia. The Kikori River also splits into a multitude of channels as it crosses the plains close to the Gulf of Papua. The Purari River splits into three major channels as it approaches its mouth. New Zealand New Zealand's second-longest river, the Clutha River, splits into two arms, the Matau and the Koua, some 10 kilometres from the South Island's Pacific Coast. A large island, Inch Clutha, lies between the two arms. 
Many of the rivers crossing the Canterbury Plains in the central South Island are braided rivers, and several of these split into separate branches before reaching the coast. Notable among these is the Rangitata River, the two arms of which are separated by the low-lying Rangitata Island.
Physical sciences
Hydrology
Earth science
72536
https://en.wikipedia.org/wiki/Thermal%20conduction
Thermal conduction
Thermal conduction is the diffusion of thermal energy (heat) within one material or between materials in contact. The higher temperature object has molecules with more kinetic energy; collisions between molecules distribute this kinetic energy until an object has the same kinetic energy throughout. Thermal conductivity, frequently represented by k, is a property that relates the rate of heat loss per unit area of a material to its rate of change of temperature. Essentially, it is a value that accounts for any property of the material that could change the way it conducts heat. Heat spontaneously flows along a temperature gradient (i.e. from a hotter body to a colder body). For example, heat is conducted from the hotplate of an electric stove to the bottom of a saucepan in contact with it. In the absence of an opposing external driving energy source, within a body or between bodies, temperature differences decay over time, and thermal equilibrium is approached, temperature becoming more uniform. Every process involving heat transfer takes place by only three methods: Conduction is heat transfer through stationary matter by physical contact. (The matter is stationary on a macroscopic scale—we know there is thermal motion of the atoms and molecules at any temperature above absolute zero.) Heat transferred between the electric burner of a stove and the bottom of a pan is transferred by conduction. Convection is the heat transfer by the macroscopic movement of a fluid. This type of transfer takes place in a forced-air furnace and in weather systems, for example. Heat transfer by radiation occurs when microwaves, infrared radiation, visible light, or another form of electromagnetic radiation is emitted or absorbed. An obvious example is the warming of the Earth by the Sun. A less obvious example is thermal radiation from the human body. Overview A region with greater thermal energy (heat) corresponds with greater molecular agitation. 
Thus when a hot object touches a cooler surface, the highly agitated molecules from the hot object bump the calm molecules of the cooler surface, transferring the microscopic kinetic energy and causing the colder part or object to heat up. Mathematically, thermal conduction works just like diffusion: the rate of conduction increases as the temperature difference or the cross-sectional area goes up, and as the distance gets shorter: Q = kAΔT/d, where Q is the thermal conduction or power (the heat transferred per unit time over some distance between the two temperatures), k is the thermal conductivity of the material, A is the cross-sectional area of the object, ΔT is the difference in temperature from one side to the other, and d is the distance over which the heat is transferred. Conduction is the main mode of heat transfer for solid materials because the strong inter-molecular forces allow the vibrations of particles to be easily transmitted, in comparison to liquids and gases. Liquids have weaker inter-molecular forces and more space between the particles, which makes the vibrations of particles harder to transmit. Gases have even more space, and therefore infrequent particle collisions. This makes liquids and gases poor conductors of heat. Thermal contact conductance is the study of heat conduction between solid bodies in contact. A temperature drop is often observed at the interface between the two surfaces in contact. This phenomenon is said to be a result of a thermal contact resistance existing between the contacting surfaces. Interfacial thermal resistance is a measure of an interface's resistance to thermal flow. This thermal resistance differs from contact resistance, as it exists even at atomically perfect interfaces. Understanding the thermal resistance at the interface between two materials is of primary significance in the study of their thermal properties. Interfaces often contribute significantly to the observed properties of the materials. 
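As a numeric sketch of the rate equation above, the following computes the conducted power through a slab; the material and geometry (a glass window pane) are assumed illustrative values, not figures from the text:

```python
# Conductive heat flow Q = k * A * dT / d (Fourier's law, 1-D slab form).
def conduction_rate(k, area, delta_t, thickness):
    """Heat transferred per unit time (W) through a slab."""
    return k * area * delta_t / thickness

# Assumed, illustrative values: a single-glazed window pane.
k_glass = 0.8        # thermal conductivity of glass, W/(m·K) (typical value)
area = 1.5           # m^2
delta_t = 20.0       # K, temperature difference across the pane
thickness = 0.004    # m

q = conduction_rate(k_glass, area, delta_t, thickness)
print(f"{q:.0f} W")  # power conducted through the pane
```

Doubling the temperature difference or the area doubles the rate, while doubling the thickness halves it, exactly as the proportionality states.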
The inter-molecular transfer of energy could be primarily by elastic impact, as in fluids, or by free-electron diffusion, as in metals, or phonon vibration, as in insulators. In insulators, the heat flux is carried almost entirely by phonon vibrations. Metals (e.g., copper, platinum, gold, etc.) are usually good conductors of thermal energy. This is due to the way that metals bond chemically: metallic bonds (as opposed to covalent or ionic bonds) have free-moving electrons that transfer thermal energy rapidly through the metal. The electron fluid of a conductive metallic solid conducts most of the heat flux through the solid. Phonon flux is still present but carries less of the energy. Electrons also conduct electric current through conductive solids, and the thermal and electrical conductivities of most metals have about the same ratio. A good electrical conductor, such as copper, also conducts heat well. Thermoelectricity is caused by the interaction of heat flux and electric current. Heat conduction within a solid is directly analogous to diffusion of particles within a fluid, in the situation where there are no fluid currents. In gases, heat transfer occurs through collisions of gas molecules with one another. In the absence of convection, which relates to a moving fluid or gas phase, thermal conduction through a gas phase is highly dependent on the composition and pressure of this phase, and in particular, the mean free path of gas molecules relative to the size of the gas gap, as given by the Knudsen number . To quantify the ease with which a particular medium conducts, engineers employ the thermal conductivity, also known as the conductivity constant or conduction coefficient, k. In thermal conductivity, k is defined as "the quantity of heat, Q, transmitted in time (t) through a thickness (L), in a direction normal to a surface of area (A), due to a temperature difference (ΔT) [...]". 
Thermal conductivity is a material property that is primarily dependent on the medium's phase, temperature, density, and molecular bonding. Thermal effusivity is a quantity derived from conductivity, which is a measure of a material's ability to exchange thermal energy with its surroundings. Steady-state conduction Steady-state conduction is the form of conduction that happens when the temperature difference(s) driving the conduction are constant, so that (after an equilibration time), the spatial distribution of temperatures (temperature field) in the conducting object does not change any further. Thus, all partial derivatives of temperature with respect to space may either be zero or have nonzero values, but all derivatives of temperature at any point with respect to time are uniformly zero. In steady-state conduction, the amount of heat entering any region of an object is equal to the amount of heat coming out (if this were not so, the temperature would be rising or falling, as thermal energy was tapped or trapped in a region). For example, a bar may be cold at one end and hot at the other, but after a state of steady-state conduction is reached, the spatial gradient of temperatures along the bar does not change any further, as time proceeds. Instead, the temperature remains constant at any given cross-section of the rod normal to the direction of heat transfer, and this temperature varies linearly in space in the case where there is no heat generation in the rod. In steady-state conduction, all the laws of direct current electrical conduction can be applied to "heat currents". In such cases, it is possible to take "thermal resistances" as the analog to electrical resistances. In such cases, temperature plays the role of voltage, and heat transferred per unit time (heat power) is the analog of electric current. Steady-state systems can be modeled by networks of such thermal resistances in series and parallel, in exact analogy to electrical networks of resistors. 
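The resistor analogy just described can be made concrete: each slab is a thermal resistor R = Δx/(kA), series resistances add, and the "Ohm's law" Q = ΔT/R gives the heat current. A minimal sketch, with layer materials and values assumed for illustration:

```python
# Thermal resistance analogy: R = thickness / (k * A); series resistances add.
def thermal_resistance(thickness, k, area):
    """Thermal resistance of a slab, in K/W."""
    return thickness / (k * area)

# Assumed two-layer wall (brick + insulation), 1 m^2 cross-section.
area = 1.0
r_brick = thermal_resistance(0.10, 0.7, area)   # K/W
r_insul = thermal_resistance(0.05, 0.04, area)  # K/W
r_total = r_brick + r_insul                     # series, like resistors

delta_t = 25.0                 # K across the whole wall
heat_flow = delta_t / r_total  # "Ohm's law" for heat: Q = dT / R
print(round(heat_flow, 2), "W")
```

Note how the thin insulation layer dominates the total resistance, just as the largest resistor dominates a series circuit.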
See purely resistive thermal circuits for an example of such a network. Transient conduction During any period in which temperatures change in time at any place within an object, the mode of thermal energy flow is termed transient conduction. Another term is "non-steady-state" conduction, referring to the time-dependence of temperature fields in an object. Non-steady-state situations appear after an imposed change in temperature at a boundary of an object. They may also occur with temperature changes inside an object, as a result of a new source or sink of heat suddenly introduced within an object, causing temperatures near the source or sink to change in time. When a new perturbation of temperature of this type happens, temperatures within the system change in time toward a new equilibrium with the new conditions, provided that these do not change. After equilibrium, heat flow into the system once again equals the heat flow out, and temperatures at each point inside the system no longer change. Once this happens, transient conduction is ended, although steady-state conduction may continue if heat flow continues. If changes in external temperature or in internal heat generation are too rapid for the equilibrium of temperatures in space to take place, then the system never reaches a state of unchanging temperature distribution in time, and the system remains in a transient state. An example of a new source of heat "turning on" within an object, causing transient conduction, is an engine starting in an automobile. In this case, the transient thermal conduction phase for the entire machine is over, and the steady-state phase appears, as soon as the engine reaches steady-state operating temperature. In this state of steady-state equilibrium, temperatures vary greatly from the engine cylinders to other parts of the automobile, but at no point in space within the automobile does temperature increase or decrease. 
After establishing this state, the transient conduction phase of heat transfer is over. New external conditions also cause this process: for example, the copper bar in the steady-state conduction example experiences transient conduction as soon as one end is subjected to a different temperature from the other. Over time, the field of temperatures inside the bar reaches a new steady state, in which a constant temperature gradient along the bar is finally set up, and this gradient then stays constant in time. Typically, such a new steady-state gradient is approached exponentially with time after a new temperature-or-heat source or sink has been introduced. When a "transient conduction" phase is over, heat flow may continue at high power, so long as temperatures do not change. An example of transient conduction that does not end with steady-state conduction, but rather no conduction, occurs when a hot copper ball is dropped into oil at a low temperature. Here, the temperature field within the object begins to change as a function of time, as the heat is removed from the metal, and the interest lies in analyzing this spatial change of temperature within the object over time until all gradients disappear entirely (the ball has reached the same temperature as the oil). Mathematically, this condition is also approached exponentially; in theory, it takes infinite time, but in practice, it is over, for all intents and purposes, in a much shorter period. At the end of this process with no heat sink but the internal parts of the ball (which are finite), there is no steady-state heat conduction to reach. Such a state never occurs in this situation, but rather the end of the process is when there is no heat conduction at all. The analysis of non-steady-state conduction systems is more complex than that of steady-state systems. 
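For the simplest transient case, such as the "ball in oil" example above, the exponential approach can be sketched with a lumped-capacitance (Newton cooling) model. The sphere size, material constants, and heat transfer coefficient below are assumed illustrative values, not figures from the text:

```python
import math

# Lumped-capacitance (Newton) cooling: T(t) = T_env + (T0 - T_env) * exp(-t/tau),
# with time constant tau = rho * c_p * V / (h * A).
def lumped_temperature(t, t0, t_env, h, area, rho, c_p, volume):
    tau = rho * c_p * volume / (h * area)
    return t_env + (t0 - t_env) * math.exp(-t / tau)

# Assumed example: a small copper sphere cooling in a surrounding fluid.
r = 0.01                                 # m, sphere radius
volume = 4.0 / 3.0 * math.pi * r**3      # m^3
area = 4.0 * math.pi * r**2              # m^2
rho, c_p, h = 8960.0, 385.0, 20.0        # copper density and specific heat; assumed h

t_mid = lumped_temperature(60.0, 100.0, 25.0, h, area, rho, c_p, volume)
print(f"T after 60 s: {t_mid:.1f} degrees C")  # decays toward 25 degrees C
```

The gradients never vanish exactly in finite time, but the exponential makes the remaining difference negligible after a few time constants, matching the "for all intents and purposes" wording above.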
If the conducting body has a simple shape, then exact analytical mathematical expressions and solutions may be possible (see heat equation for the analytical approach). However, because of complicated shapes with varying thermal conductivities within the shape (i.e., most complex objects, mechanisms or machines in engineering), the application of approximate theories is often required, and/or numerical analysis by computer. One popular graphical method involves the use of Heisler Charts. Occasionally, transient conduction problems may be considerably simplified if regions of the object being heated or cooled can be identified, for which thermal conductivity is very much greater than that for heat paths leading into the region. In this case, the region with high conductivity can often be treated in the lumped capacitance model, as a "lump" of material with a simple thermal capacitance consisting of its aggregate heat capacity. Such regions warm or cool, but show no significant temperature variation across their extent, during the process (as compared to the rest of the system). This is due to their far higher conductance. During transient conduction, therefore, the temperature across their conductive regions changes uniformly in space, and as a simple exponential in time. An example of such systems is those that follow Newton's law of cooling during transient cooling (or the reverse during heating). The equivalent thermal circuit consists of a simple capacitor in series with a resistor. In such cases, the remainder of the system with a high thermal resistance (comparatively low conductivity) plays the role of the resistor in the circuit. Relativistic conduction The theory of relativistic heat conduction is a model that is compatible with the theory of special relativity. 
For most of the last century, it was recognized that the Fourier equation is in contradiction with the theory of relativity because it admits an infinite speed of propagation of heat signals. For example, according to the Fourier equation, a pulse of heat at the origin would be felt at infinity instantaneously. The speed of information propagation is faster than the speed of light in vacuum, which is physically inadmissible within the framework of relativity. Quantum conduction Second sound is a quantum mechanical phenomenon in which heat transfer occurs by wave-like motion, rather than by the more usual mechanism of diffusion. Heat takes the place of pressure in normal sound waves. This leads to a very high thermal conductivity. It is known as "second sound" because the wave motion of heat is similar to the propagation of sound in air. Fourier's law The law of heat conduction, also known as Fourier's law (compare Fourier's heat equation), states that the rate of heat transfer through a material is proportional to the negative gradient in the temperature and to the area, at right angles to that gradient, through which the heat flows. We can state this law in two equivalent forms: the integral form, in which we look at the amount of energy flowing into or out of a body as a whole, and the differential form, in which we look at the flow rates or fluxes of energy locally. Newton's law of cooling is a discrete analogue of Fourier's law, while Ohm's law is the electrical analogue of Fourier's law and Fick's laws of diffusion are its chemical analogue. Differential form The differential form of Fourier's law of thermal conduction shows that the local heat flux density q is equal to the product of the thermal conductivity k and the negative local temperature gradient −∇T. The heat flux density is the amount of energy that flows through a unit area per unit time. 
q = −k∇T, where (including the SI units) q is the local heat flux density, in W/m², k is the material's conductivity, in W/(m·K), and ∇T is the temperature gradient, in K/m. The thermal conductivity k is often treated as a constant, though this is not always true. While the thermal conductivity of a material generally varies with temperature, the variation can be small over a significant range of temperatures for some common materials. In anisotropic materials, the thermal conductivity typically varies with orientation; in this case k is represented by a second-order tensor. In non-uniform materials, k varies with spatial location. For many simple applications, Fourier's law is used in its one-dimensional form, for example, in the x direction: q_x = −k dT/dx. In an isotropic medium, Fourier's law leads to the heat equation with a fundamental solution famously known as the heat kernel. Integral form By integrating the differential form over the material's total surface S, we arrive at the integral form of Fourier's law: ∂Q/∂t = −k ∮_S ∇T · dS, where (including the SI units) ∂Q/∂t is the thermal power transferred by conduction (in W), the time derivative of the transferred heat Q (in J), and dS is an oriented surface area element (in m²). The above differential equation, when integrated for a homogeneous material of 1-D geometry between two endpoints at constant temperature, gives the heat flow rate as ΔQ/Δt = −kA ΔT/Δx, where Δt is the time interval during which the amount of heat ΔQ flows through a cross-section of the material, A is the cross-sectional surface area, ΔT is the temperature difference between the ends, and Δx is the distance between the ends. One can define the (macroscopic) thermal resistance of the 1-D homogeneous material: R = Δx/(kA). With a simple 1-D steady heat conduction equation ΔQ/Δt = ΔT/R, this is analogous to Ohm's law for a simple electric resistance: I = V/R. This law forms the basis for the derivation of the heat equation. Conductance Writing U = k/Δx, where U is the conductance, in W/(m²·K). 
Fourier's law can also be stated as: ΔQ/Δt = UA(−ΔT). The reciprocal of conductance is resistance, R, given by: R = 1/U = Δx/k. Resistance is additive when several conducting layers lie between the hot and cool regions, because A and ΔQ/Δt are the same for all layers. In a multilayer partition, the total conductance is related to the conductance of its layers by: 1/U = 1/U1 + 1/U2 + 1/U3 + ..., or equivalently R = R1 + R2 + R3 + ... So, when dealing with a multilayer partition, the following formula is usually used: ΔQ/Δt = A(−ΔT)/(Δx1/k1 + Δx2/k2 + Δx3/k3 + ...). For heat conduction from one fluid to another through a barrier, it is sometimes important to consider the conductance of the thin film of fluid that remains stationary next to the barrier. This thin film of fluid is difficult to quantify because its characteristics depend upon complex conditions of turbulence and viscosity—but when dealing with thin high-conductance barriers it can sometimes be quite significant. Intensive-property representation The previous conductance equations, written in terms of extensive properties, can be reformulated in terms of intensive properties. Ideally, the formulae for conductance should produce a quantity with dimensions independent of distance, like Ohm's law for electrical resistance, R = V/I, and conductance, G = I/V. From the electrical formula R = ρx/A, where ρ is resistivity, x is length, and A is cross-sectional area, we have G = kA/x, where G is conductance, k is conductivity, x is length, and A is cross-sectional area. For heat, ΔQ/Δt = G(−ΔT), where G is the conductance, analogous to Ohm's law I = V/R, or I = VG. The reciprocal of conductance is resistance, R, given by: R = ΔT/(ΔQ/Δt), analogous to Ohm's law R = V/I. The rules for combining resistances and conductances (in series and parallel) are the same for both heat flow and electric current. Cylindrical shells Conduction through cylindrical shells (e.g. pipes) can be calculated from the internal radius, r1, the external radius, r2, the length, ℓ, and the temperature difference between the inner and outer wall, T2 − T1. 
The surface area of the cylinder is A = 2πrℓ. When Fourier's equation is applied: ΔQ/Δt = −kA dT/dr = −2πkrℓ dT/dr, and rearranged (separating variables and integrating from r1 to r2), then the rate of heat transfer is: ΔQ/Δt = 2πkℓ(T1 − T2)/ln(r2/r1), the thermal resistance is: R = ln(r2/r1)/(2πkℓ), and ΔQ/Δt = 2πkℓ r_m (T1 − T2)/(r2 − r1), where r_m = (r2 − r1)/ln(r2/r1). It is important to note that this is the log-mean radius. Spherical The conduction through a spherical shell with internal radius, r1, and external radius, r2, can be calculated in a similar manner as for a cylindrical shell. The surface area of the sphere is: A = 4πr². Solving in a similar manner as for a cylindrical shell (see above) produces: ΔQ/Δt = 4πk r1 r2 (T1 − T2)/(r2 − r1). Transient thermal conduction Interface heat transfer The heat transfer at an interface is considered a transient heat flow. To analyze this problem, the Biot number is important to understand how the system behaves. The Biot number is determined by: Bi = hL/k. The heat transfer coefficient h is introduced in this formula and is measured in W/(m²·K). If the system has a Biot number of less than 0.1, the material behaves according to Newtonian cooling, i.e. with negligible temperature gradient within the body. If the Biot number is greater than 0.1, there is a noticeable temperature gradient within the material, and a series solution is required to describe the temperature profile. The cooling equation is: T(t) = T_f + (T_i − T_f) e^(−hAt/(ρc_pV)). This leads to the dimensionless form of the temperature profile as a function of time: θ = (T − T_f)/(T_i − T_f) = e^(−hAt/(ρc_pV)). This equation shows that the temperature decreases exponentially over time, with the rate governed by the properties of the material and the heat transfer coefficient. The heat transfer coefficient, h, is measured in W/(m²·K), and represents the transfer of heat at an interface between two materials. This value is different at every interface and is an important concept in understanding heat flow at an interface. The series solution can be analyzed with a nomogram. A nomogram has a relative temperature as the coordinate and the Fourier number, which is calculated by Fo = αt/L². The Biot number increases as the Fourier number decreases. 
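The two dimensionless groups above can be computed directly. A sketch with assumed values (a thin steel plate cooling in stirred water; the coefficient, dimensions, and material constants are illustrative, not from the text), including the diffusivity α = k/(ρc_p):

```python
# Biot number Bi = h*L/k: surface heat transfer relative to internal conduction.
# Fourier number Fo = alpha*t/L^2: dimensionless time, with thermal diffusivity
# alpha = k / (rho * c_p).
def biot(h, length, k):
    return h * length / k

def fourier(alpha, t, length):
    return alpha * t / length**2

# Assumed example: a 5 mm steel plate quenched in stirred water.
h = 500.0                           # W/(m^2*K), assumed convection coefficient
L = 0.005                           # m, characteristic length
k, rho, c_p = 45.0, 7850.0, 490.0   # typical steel properties
alpha = k / (rho * c_p)             # m^2/s

bi = biot(h, L, k)
regime = "lumped (Newtonian cooling)" if bi < 0.1 else "series solution needed"
print(f"Bi = {bi:.3f} -> {regime}")
print(f"Fo after 10 s = {fourier(alpha, 10.0, L):.1f}")
```

With these assumed numbers the Biot number falls below 0.1, so the plate can be treated with the simple exponential (lumped) model rather than a series solution.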
There are five steps to determine a temperature profile in terms of time: (1) calculate the Biot number; (2) determine which relative depth matters, either x or L; (3) convert time to the Fourier number; (4) convert to relative temperature with the boundary conditions; (5) trace the curve for the specified Biot number on the nomogram to read the relative temperature at the required point. Applications Splat cooling Splat cooling is a method for quenching small droplets of molten materials by rapid contact with a cold surface. The particles undergo a characteristic cooling process, with the initial (t = 0) heat profile having its maximum at x = 0 and falling to the surrounding temperature at x = ∞ and x = −∞ as the boundary conditions. Splat cooling rapidly ends in a steady-state temperature, and is similar in form to the Gaussian diffusion equation. The temperature profile, with respect to the position and time of this type of cooling, varies with a Gaussian of width √(4αt): the initial disturbance spreads and decays as exp(−x²/(4αt))/√(4παt). Splat cooling is a fundamental concept that has been adapted for practical use in the form of thermal spraying. The thermal diffusivity coefficient, represented as α, can be written as α = k/(ρc_p). This varies according to the material. Metal quenching Metal quenching is a transient heat transfer process in terms of the time temperature transformation (TTT). It is possible to manipulate the cooling process to adjust the phase of a suitable material. For example, appropriate quenching of steel can convert a desirable proportion of its content of austenite to martensite, creating a very hard and strong product. To achieve this, it is necessary to quench at the "nose" (or eutectic) of the TTT diagram. Since materials differ in their Biot numbers, the time it takes for the material to quench, or the Fourier number, varies in practice. In steel, the quenching temperature range is generally from 600 °C to 200 °C. To control the quenching time and to select suitable quenching media, it is necessary to determine the Fourier number from the desired quenching time, the relative temperature drop, and the relevant Biot number. 
Usually, the correct figures are read from a standard nomogram. By calculating the heat transfer coefficient from this Biot number, one can find a liquid medium suitable for the application. Zeroth law of thermodynamics One statement of the so-called zeroth law of thermodynamics is directly focused on the idea of conduction of heat. Bailyn (1994) writes that "the zeroth law may be stated: All diathermal walls are equivalent". A diathermal wall is a physical connection between two bodies that allows the passage of heat between them. Bailyn is referring to diathermal walls that exclusively connect two bodies, especially conductive walls. This statement of the "zeroth law" belongs to an idealized theoretical discourse, and actual physical walls may have peculiarities that do not conform to its generality. For example, the material of the wall must not undergo a phase transition, such as evaporation or fusion, at the temperature at which it must conduct heat. But when only thermal equilibrium is considered and time is not urgent, so that the conductivity of the material does not matter too much, one suitable heat conductor is as good as another. Conversely, another aspect of the zeroth law is that, subject again to suitable restrictions, a given diathermal wall is indifferent to the nature of the heat bath to which it is connected. For example, the glass bulb of a thermometer acts as a diathermal wall whether exposed to a gas or a liquid, provided that they do not corrode or melt it. These differences are among the defining characteristics of heat transfer. In a sense, they are symmetries of heat transfer. Instruments Thermal conductivity analyzer The thermal conduction property of any gas under standard conditions of pressure and temperature is a fixed quantity. This property of a known reference gas or known reference gas mixtures can, therefore, be used for certain sensory applications, such as the thermal conductivity analyzer. 
The operation of this instrument is based in principle on a Wheatstone bridge containing four filaments whose resistances are matched. Whenever a certain gas is passed over such a network of filaments, their resistance changes due to the altered thermal conductivity of the filaments, thereby changing the net voltage output from the Wheatstone bridge. This voltage output is correlated with a database to identify the gas sample. Gas sensor The principle of thermal conductivity of gases can also be used to measure the concentration of a gas in a binary mixture of gases. Working: if the same gas is present around all the Wheatstone bridge filaments, then the same temperature is maintained in all the filaments and hence the same resistances are maintained, resulting in a balanced Wheatstone bridge. However, if a dissimilar gas sample (or gas mixture) is passed over one set of two filaments and the reference gas over the other set of two filaments, then the Wheatstone bridge becomes unbalanced, and the resulting net voltage output of the circuit is correlated with a database to identify the constituents of the sample gas. Using this technique many unknown gas samples can be identified by comparing their thermal conductivity with that of a reference gas of known thermal conductivity. The most commonly used reference gas is nitrogen, as the thermal conductivity of most common gases (except hydrogen and helium) is similar to that of nitrogen.
Physical sciences
Thermodynamics
Physics
72540
https://en.wikipedia.org/wiki/Newton%20%28unit%29
Newton (unit)
The newton (symbol: N) is the unit of force in the International System of Units (SI). Expressed in terms of SI base units, it is 1 kg⋅m/s², the force that accelerates a mass of one kilogram at one metre per second squared. The unit is named after Isaac Newton in recognition of his work on classical mechanics, specifically his second law of motion. Definition A newton is defined as 1 kg⋅m/s² (it is a named derived unit defined in terms of the SI base units). One newton is, therefore, the force needed to accelerate one kilogram of mass at the rate of one metre per second squared in the direction of the applied force. The units "metre per second squared" can be understood as measuring a rate of change in velocity per unit of time, i.e. an increase in velocity by one metre per second every second. In 1946, the General Conference on Weights and Measures (CGPM) Resolution 2 standardized the unit of force in the MKS system of units to be the amount needed to accelerate one kilogram of mass at the rate of one metre per second squared. In 1948, the 9th CGPM Resolution 7 adopted the name newton for this force. The MKS system then became the blueprint for today's SI system of units. The newton thus became the standard unit of force in the SI, or International System of Units. The connection to Newton comes from Newton's second law of motion, which states that the force exerted on an object is directly proportional to the acceleration hence acquired by that object, thus: F = ma, where m represents the mass of the object undergoing an acceleration a. When using the SI unit of mass, the kilogram (kg), and SI units for distance, metre (m), and time, second (s), we arrive at the SI definition of the newton: 1 kg⋅m/s². Examples At average gravity on Earth (conventionally, g = 9.80665 m/s²), a kilogram mass exerts a force of about 9.81 N. An average-sized apple with mass 200 g exerts about two newtons of force at Earth's surface, which we measure as the apple's weight on Earth. 
An average adult, taking the world average adult mass of 62 kg, exerts a force of about 608 N on Earth. Kilonewtons Large forces may be expressed in kilonewtons (kN), where 1 kN = 1000 N. For example, the tractive effort of a Class Y steam train locomotive and the thrust of an F100 jet engine are both around 130 kN. Climbing ropes are tested by assuming a human can withstand a fall that creates 12 kN of force. The ropes must not break when tested against 5 such falls. Conversion factors
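The worked figures above follow directly from F = mg with standard gravity; a quick sketch (the 62 kg and 200 g values come from the text, the rest is plain arithmetic):

```python
# Weight in newtons: F = m * g, using standard gravity.
G_STANDARD = 9.80665  # m/s^2, the conventional standard gravity value

def weight_newtons(mass_kg):
    return mass_kg * G_STANDARD

print(f"1 kg mass:   {weight_newtons(1.0):.2f} N")   # about 9.81 N
print(f"200 g apple: {weight_newtons(0.2):.2f} N")   # about 2 N
print(f"62 kg adult: {weight_newtons(62.0):.0f} N")  # about 608 N
print(f"130 kN = {130 * 1000} N")
```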
Physical sciences
Energy, power, force and pressure
72576
https://en.wikipedia.org/wiki/Axial%20precession
Axial precession
In astronomy, axial precession is a gravity-induced, slow, and continuous change in the orientation of an astronomical body's rotational axis. In the absence of precession, the astronomical body's orbit would show axial parallelism. In particular, axial precession can refer to the gradual shift in the orientation of Earth's axis of rotation in a cycle of approximately 26,000 years. This is similar to the precession of a spinning top, with the axis tracing out a pair of cones joined at their apices. The term "precession" typically refers only to this largest part of the motion; other changes in the alignment of Earth's axis—nutation and polar motion—are much smaller in magnitude. Earth's precession was historically called the precession of the equinoxes, because the equinoxes moved westward along the ecliptic relative to the fixed stars, opposite to the yearly motion of the Sun along the ecliptic. Historically, the discovery of the precession of the equinoxes is usually attributed in the West to the 2nd-century-BC astronomer Hipparchus. With improvements in the ability to calculate the gravitational force between planets during the first half of the nineteenth century, it was recognized that the ecliptic itself moved slightly, which was named planetary precession, as early as 1863, while the dominant component was named lunisolar precession. Their combination was named general precession, instead of precession of the equinoxes. Lunisolar precession is caused by the gravitational forces of the Moon and Sun on Earth's equatorial bulge, causing Earth's axis to move with respect to inertial space. Planetary precession (an advance) is due to the small angle between the gravitational force of the other planets on Earth and its orbital plane (the ecliptic), causing the plane of the ecliptic to shift slightly relative to inertial space. Lunisolar precession is about 500 times greater than planetary precession. 
In addition to the Moon and Sun, the other planets also cause a small movement of Earth's axis in inertial space, making the contrast in the terms lunisolar versus planetary misleading, so in 2006 the International Astronomical Union recommended that the dominant component be renamed the precession of the equator, and the minor component be renamed precession of the ecliptic, but their combination is still named general precession. Many references to the old terms exist in publications predating the change. Nomenclature The term "Precession" is derived from the Latin praecedere ("to precede, to come before or earlier"). The stars viewed from Earth are seen to proceed from east to west daily (at about 15 degrees per hour), due to the Earth's diurnal motion, and yearly (at about 1 degree per day), due to the Earth's revolution around the Sun. At the same time the stars can be observed to anticipate slightly such motion, at the rate of approximately 50 arc seconds per year (1 degree per 72 years), a phenomenon known as the "precession of the equinoxes". In describing this motion astronomers generally have shortened the term to simply "precession". In describing the cause of the motion physicists have also used the term "precession", which has led to some confusion between the observable phenomenon and its cause, which matters because in astronomy, some precessions are real and others are apparent. This issue is further obfuscated by the fact that many astronomers are physicists or astrophysicists. The term "precession" used in astronomy generally describes the observable precession of the equinox (the stars moving retrograde across the sky), whereas the term "precession" as used in physics, generally describes a mechanical process. Effects The precession of the Earth's axis has a number of observable effects. 
First, the positions of the south and north celestial poles appear to move in circles against the space-fixed backdrop of stars, completing one circuit in approximately 26,000 years. Thus, while today the star Polaris lies approximately at the north celestial pole, this will change over time, and other stars will become the "north star". In approximately 3,200 years, the star Gamma Cephei in the Cepheus constellation will succeed Polaris for this position. The south celestial pole currently lacks a bright star to mark its position, but over time precession also will cause bright stars to become South Stars. As the celestial poles shift, there is a corresponding gradual shift in the apparent orientation of the whole star field, as viewed from a particular position on Earth. Secondly, the position of the Earth in its orbit around the Sun at the solstices, equinoxes, or other time defined relative to the seasons, slowly changes. For example, suppose that the Earth's orbital position is marked at the summer solstice, when the Earth's axial tilt is pointing directly toward the Sun. One full orbit later, when the Sun has returned to the same apparent position relative to the background stars, the Earth's axial tilt is not now directly toward the Sun: because of the effects of precession, it is a little way "beyond" this. In other words, the solstice occurred a little earlier in the orbit. Thus, the tropical year, measuring the cycle of seasons (for example, the time from solstice to solstice, or equinox to equinox), is about 20 minutes shorter than the sidereal year, which is measured by the Sun's apparent position relative to the stars. After about 26 000 years the difference amounts to a full year, so the positions of the seasons relative to the orbit are "back where they started". 
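The arithmetic behind the 20-minute difference can be checked directly. This sketch uses rough modern year lengths (assumed here, not quoted in the text) to show that the yearly shortfall accumulates to about one full year over a 26,000-year cycle:

```python
# Check the claim: the sidereal year exceeds the tropical year by roughly
# 20 minutes, and over one precession cycle the difference sums to ~1 year.
# The year lengths below are approximate modern values (an assumption).
sidereal_year_days = 365.25636   # Sun returns to the same fixed star
tropical_year_days = 365.24219   # equinox to equinox

diff_minutes = (sidereal_year_days - tropical_year_days) * 24 * 60

cycle_years = 26_000
accumulated_days = cycle_years * (sidereal_year_days - tropical_year_days)
```

With these inputs, `diff_minutes` comes out near 20 and `accumulated_days` near one full year, as the passage states.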
(Other effects also slowly change the shape and orientation of the Earth's orbit, and these, in combination with precession, create various cycles of differing periods; see also Milankovitch cycles. The magnitude of the Earth's tilt, as opposed to merely its orientation, also changes slowly over time, but this effect is not attributed directly to precession.) For identical reasons, the apparent position of the Sun relative to the backdrop of the stars at some seasonally fixed time slowly regresses a full 360° through all twelve traditional constellations of the zodiac, at the rate of about 50.3 seconds of arc per year, or 1 degree every 71.6 years. At present, the rate of precession corresponds to a period of 25,772 years, so the tropical year is shorter than the sidereal year by about 1,224.5 seconds. The rate itself varies somewhat with time (see Values below), so one cannot say that in exactly 25,772 years the Earth's axis will be back to where it is now. For further details, see Changing pole stars and Polar shift and equinoxes shift, below. History Hellenistic world Hipparchus The discovery of precession usually is attributed to Hipparchus (190–120 BC) of Rhodes or Nicaea, a Greek astronomer. According to Ptolemy's Almagest, Hipparchus measured the longitude of Spica and other bright stars. Comparing his measurements with data from his predecessors, Timocharis (320–260 BC) and Aristillus (~280 BC), he concluded that Spica had moved 2° relative to the autumnal equinox. He also compared the lengths of the tropical year (the time it takes the Sun to return to an equinox) and the sidereal year (the time it takes the Sun to return to a fixed star), and found a slight discrepancy. Hipparchus concluded that the equinoxes were moving ("precessing") through the zodiac, and that the rate of precession was not less than 1° in a century, in other words, completing a full cycle in no more than 36,000 years.
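The figures quoted here (50.3″/year, 1° per 71.6 years, a 25,772-year period, a 1,224.5-second shortfall) are mutually consistent, which a short computation confirms; the sidereal year length used is an assumed modern value:

```python
# Cross-check the equivalent statements of the precession rate.
RATE = 50.28796195              # arcseconds of precession per year
ARCSEC_PER_CIRCLE = 360 * 3600  # arcseconds in a full circle

years_per_degree = 3600 / RATE            # ~71.6 years per degree
period_years = ARCSEC_PER_CIRCLE / RATE   # ~25,772-year full cycle

# The tropical year falls short of the sidereal year by the time the Sun
# needs to cover the annual precession angle along the ecliptic.
sidereal_year_seconds = 365.25636 * 86400   # assumed modern value
shortfall_seconds = sidereal_year_seconds * RATE / ARCSEC_PER_CIRCLE
```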
Virtually all of the writings of Hipparchus are lost, including his work on precession. They are mentioned by Ptolemy, who explains precession as the rotation of the celestial sphere around a motionless Earth. It is reasonable to presume that Hipparchus, similarly to Ptolemy, thought of precession in geocentric terms as a motion of the heavens, rather than of the Earth. Ptolemy The first astronomer known to have continued Hipparchus's work on precession is Ptolemy in the second century AD. Ptolemy measured the longitudes of Regulus, Spica, and other bright stars with a variation of Hipparchus's lunar method that did not require eclipses. Before sunset, he measured the longitudinal arc separating the Moon from the Sun. Then, after sunset, he measured the arc from the Moon to the star. He used Hipparchus's model to calculate the Sun's longitude, and made corrections for the Moon's motion and its parallax. Ptolemy compared his own observations with those made by Hipparchus, Menelaus of Alexandria, Timocharis, and Agrippa. He found that between Hipparchus's time and his own (about 265 years), the stars had moved 2°40', or 1° in 100 years (36" per year; the rate accepted today is about 50" per year or 1° in 72 years). It is possible, however, that Ptolemy simply trusted Hipparchus' figure instead of making his own measurements. He also confirmed that precession affected all fixed stars, not just those near the ecliptic, and his cycle had the same period of 36,000 years as that of Hipparchus. Other authors Most ancient authors did not mention precession and, perhaps, did not know of it. For instance, Proclus rejected precession, while Theon of Alexandria, a commentator on Ptolemy in the fourth century, accepted Ptolemy's explanation. Theon also reports an alternate theory: "According to certain opinions ancient astrologers believe that from a certain epoch the solstitial signs have a motion of 8° in the order of the signs, after which they go back the same amount. ..." 
(Dreyer 1958, p. 204) Instead of proceeding through the entire sequence of the zodiac, the equinoxes "trepidated" back and forth over an arc of 8°. The theory of trepidation is presented by Theon as an alternative to precession. Alternative discovery theories Babylonians Various assertions have been made that other cultures discovered precession independently of Hipparchus. According to Al-Battani, the Chaldean astronomers had distinguished the tropical and sidereal year so that by approximately 330 BC, they would have been in a position to describe precession, if inaccurately, but such claims generally are regarded as unsupported. Maya Archaeologist Susan Milbrath has speculated that the Mesoamerican Long Count calendar of "30,000 years involving the Pleiades...may have been an effort to calculate the precession of the equinox." This view is held by few other professional scholars of Maya civilization. Ancient Egyptians Similarly, it is claimed the precession of the equinoxes was known in Ancient Egypt, prior to the time of Hipparchus (the Ptolemaic period). These claims remain controversial. Ancient Egyptians kept accurate calendars and recorded dates on temple walls, so it would be a simple matter for them to plot the "rough" precession rate. The Dendera Zodiac, a star-map inside the Hathor temple at Dendera, allegedly records the precession of the equinoxes. In any case, if the ancient Egyptians knew of precession, their knowledge is not recorded as such in any of their surviving astronomical texts. Michael Rice, a popular writer on Ancient Egypt, has written that Ancient Egyptians must have observed the precession, and suggested that this awareness had profound effects on their culture. Rice noted that Egyptians re-oriented temples in response to precession of associated stars. India Before 1200, India had two theories of trepidation, one with a rate and another without a rate, and several related models of precession.
Each had minor changes or corrections by various commentators. The dominant of the three was the trepidation described by the most respected Indian astronomical treatise, the Surya Siddhanta (3:9–12), composed and then revised during the next few centuries. It used a sidereal epoch, or ayanamsa, that is still used by all Indian calendars, varying over the ecliptic longitude of 19°11′ to 23°51′, depending on the group consulted. This epoch causes the roughly 30 Indian calendar years to begin 23–28 days after the modern March equinox. The March equinox of the Surya Siddhanta librated 27° in both directions from the sidereal epoch. Thus the equinox moved 54° in one direction and then back 54° in the other direction. This cycle took 7200 years to complete at a rate of 54″/year. The equinox coincided with the epoch at the beginning of the Kali Yuga in −3101 and again 3,600 years later in 499. The direction changed from prograde to retrograde midway between these years at −1301 when it reached its maximum deviation of 27°, and would have remained retrograde, the same direction as modern precession, for 3600 years until 2299. Another trepidation was described by Varāhamihira. His trepidation consisted of an arc of 46°40′ in one direction and a return to the starting point. Half of this arc, 23°20′, was identified with the Sun's maximum declination on either side of the equator at the solstices. But no period was specified, thus no annual rate can be ascertained. Several authors have described precession to be near 200,000 revolutions in a Kalpa of 4,320,000,000 years, which would be a rate of 200,000 × 360 × 3,600″ / 4,320,000,000 = 60″/year. They probably deviated from an even 200,000 revolutions to make the accumulated precession zero near 500. Visnucandra mentions 189,411 revolutions in a Kalpa or 56.8″/year. Bhaskara I mentions [1]94,110 revolutions in a Kalpa or 58.2″/year. Bhāskara II mentions 199,699 revolutions in a Kalpa or 59.9″/year.
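The conversion from "revolutions per Kalpa" to arcseconds per year used above is simple proportionality, as this sketch shows:

```python
# Convert "revolutions per Kalpa" into arcseconds of precession per year.
KALPA_YEARS = 4_320_000_000     # years in a Kalpa
ARCSEC_PER_REV = 360 * 3600     # arcseconds in one full revolution

def kalpa_rate(revolutions):
    """Precession rate in arcsec/year implied by N revolutions per Kalpa."""
    return revolutions * ARCSEC_PER_REV / KALPA_YEARS

rate_even = kalpa_rate(200_000)         # an even 200,000 revolutions
rate_visnucandra = kalpa_rate(189_411)  # Visnucandra
rate_bhaskara_i = kalpa_rate(194_110)   # Bhaskara I
rate_bhaskara_ii = kalpa_rate(199_699)  # Bhaskara II
```

The results reproduce the 60, 56.8, 58.2 and 59.9 ″/year figures quoted in the text.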
Chinese astronomy Yu Xi (fourth century AD) was the first Chinese astronomer to mention precession. He estimated the rate of precession as 1° in 50 years. Middle Ages and Renaissance In medieval Islamic astronomy, precession was known based on Ptolemy's Almagest, and by observations that refined the value. Al-Battani, in his work Zij Al-Sabi, mentions Hipparchus's calculation of precession and Ptolemy's value of 1 degree per 100 solar years, and says that he measured precession himself and found it to be one degree per 66 solar years. Subsequently, Al-Sufi, in his Book of Fixed Stars, mentions the same value, that Ptolemy's figure for precession is 1 degree per 100 solar years. He then quotes a different value from Zij Al Mumtahan, which was done during Al-Ma'mun's reign, of 1 degree for every 66 solar years. He also quotes the aforementioned Zij Al-Sabi of Al-Battani as adjusting coordinates for stars by 11 degrees and 10 minutes of arc to account for the difference between Al-Battani's time and Ptolemy's. Later, the Zij-i Ilkhani, compiled at the Maragheh observatory, sets the precession of the equinoxes at 51 arc seconds per annum, which is very close to the modern value of 50.2 arc seconds. In the Middle Ages, Islamic and Latin Christian astronomers treated "trepidation" as a motion of the fixed stars to be added to precession. This theory is commonly attributed to the Arab astronomer Thabit ibn Qurra, but the attribution has been contested in modern times. Nicolaus Copernicus published a different account of trepidation in De revolutionibus orbium coelestium (1543). This work makes the first definite reference to precession as the result of a motion of the Earth's axis. Copernicus characterized precession as the third motion of the Earth. Modern period Over a century later, Isaac Newton in Philosophiae Naturalis Principia Mathematica (1687) explained precession as a consequence of gravitation.
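The historical estimates above can be compared by converting each to arcseconds per year and to the full-cycle period it implies:

```python
# Convert historical precession estimates to a common unit (arcsec/year)
# and to the full-circle period each implies.
ARCSEC_PER_DEG = 3600

estimates = {
    "Yu Xi (1 deg / 50 yr)": ARCSEC_PER_DEG / 50,
    "Ptolemy (1 deg / 100 yr)": ARCSEC_PER_DEG / 100,
    "Al-Battani (1 deg / 66 yr)": ARCSEC_PER_DEG / 66,
    "Zij-i Ilkhani (51 arcsec / yr)": 51.0,
    "modern (approx.)": 50.3,
}

# Years for a full 360-degree cycle at each rate.
periods = {name: 360 * ARCSEC_PER_DEG / rate for name, rate in estimates.items()}
```

Al-Battani's 1°/66 years works out to about 54.5″/year, and the Maragheh value of 51″/year implies a cycle of roughly 25,400 years, already close to the modern 25,772.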
However, Newton's original precession equations did not work, and were revised considerably by Jean le Rond d'Alembert and subsequent scientists. Hipparchus's discovery Hipparchus gave an account of his discovery in On the Displacement of the Solsticial and Equinoctial Points (described in Almagest III.1 and VII.2). He measured the ecliptic longitude of the star Spica during lunar eclipses and found that it was about 6° west of the autumnal equinox. By comparing his own measurements with those of Timocharis of Alexandria (a contemporary of Euclid, who worked with Aristillus early in the 3rd century BC), he found that Spica's longitude had decreased by about 2° in the meantime (exact years are not mentioned in Almagest). Also in VII.2, Ptolemy gives more precise observations of two stars, including Spica, and concludes that in each case a 2° 40' change occurred between 128 BC and AD 139. Hence, 1° per century or one full cycle in 36,000 years, that is, the precessional period of Hipparchus as reported by Ptolemy; cf. page 328 in Toomer's translation of Almagest, 1998 edition. He also noticed this motion in other stars. He speculated that only the stars near the zodiac shifted over time. Ptolemy called this his "first hypothesis" (Almagest VII.1), but did not report any later hypothesis Hipparchus might have devised. Hipparchus apparently limited his speculations, because he had only a few older observations, which were not very reliable. Because the equinoctial points are not marked in the sky, Hipparchus needed the Moon as a reference point; he used a lunar eclipse to measure the position of a star. Hipparchus already had developed a way to calculate the longitude of the Sun at any moment. A lunar eclipse happens during Full moon, when the Moon is at opposition, precisely 180° from the Sun. Hipparchus is thought to have measured the longitudinal arc separating Spica from the Moon. 
To this value, he added the calculated longitude of the Sun, plus 180° for the longitude of the Moon. He did the same procedure with Timocharis' data. Observations such as these eclipses, incidentally, are the main source of data about when Hipparchus worked, since other biographical information about him is minimal. The lunar eclipses he observed, for instance, took place on 21 April 146 BC, and 21 March 135 BC. Hipparchus also studied precession in On the Length of the Year. Two kinds of year are relevant to understanding his work. The tropical year is the length of time that the Sun, as viewed from the Earth, takes to return to the same position along the ecliptic (its path among the stars on the celestial sphere). The sidereal year is the length of time that the Sun takes to return to the same position with respect to the stars of the celestial sphere. Precession causes the stars to change their longitude slightly each year, so the sidereal year is longer than the tropical year. Using observations of the equinoxes and solstices, Hipparchus found that the length of the tropical year was 365+1/4−1/300 days, or 365.24667 days (Evans 1998, p. 209). Comparing this with the length of the sidereal year, he calculated that the rate of precession was not less than 1° in a century. From this information, it is possible to calculate that his value for the sidereal year was 365+1/4+1/144 days. By giving a minimum rate, he may have been allowing for errors in observation. To approximate his tropical year, Hipparchus created his own lunisolar calendar by modifying those of Meton and Callippus in On Intercalary Months and Days (now lost), as described by Ptolemy in the Almagest III.1. The Babylonian calendar used a cycle of 235 lunar months in 19 years since 499 BC (with only three exceptions before 380 BC), but it did not use a specified number of days. 
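Hipparchus's two year lengths, as given above, imply a precession rate of just under 1° per century, consistent with his stated minimum:

```python
# Hipparchus's year lengths, as quoted in the text (in days).
tropical = 365 + 1/4 - 1/300   # 365.24667 days
sidereal = 365 + 1/4 + 1/144   # 365.25694 days

# The yearly shortfall, expressed as degrees of precession per year.
deg_per_year = (sidereal - tropical) / sidereal * 360
years_per_degree = 1 / deg_per_year   # just under 100 years per degree
```

Since `years_per_degree` is slightly less than 100, the implied rate is slightly more than 1° per century, matching "not less than 1° in a century".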
The Metonic cycle (432 BC) assigned 6,940 days to these 19 years producing an average year of 365+1/4+1/76 or 365.26316 days. The Callippic cycle (330 BC) dropped one day from four Metonic cycles (76 years) for an average year of 365+1/4 or 365.25 days. Hipparchus dropped one more day from four Callippic cycles (304 years), creating the Hipparchic cycle with an average year of 365+1/4−1/304 or 365.24671 days, which was close to his tropical year of 365+1/4−1/300 or 365.24667 days. Hipparchus's mathematical signatures are found in the Antikythera Mechanism, an ancient astronomical computer of the second century BC. The mechanism is based on a solar year, the Metonic cycle (the period after which the Moon reappears in the same place in the sky with the same phase; a full Moon recurs at the same position in the sky approximately every 19 years), the Callippic cycle (which is four Metonic cycles and more accurate), the Saros cycle, and the Exeligmos cycle (three Saros cycles, for accurate eclipse prediction). Study of the Antikythera Mechanism showed that the ancients used very accurate calendars based on all the aspects of solar and lunar motion in the sky. In fact, the Lunar Mechanism which is part of the Antikythera Mechanism depicts the motion of the Moon and its phase, for a given time, using a train of four gears with a pin and slot device which gives a variable lunar velocity that is very close to Kepler's second law. That is, it takes into account the fast motion of the Moon at perigee and slower motion at apogee. Changing pole stars A consequence of the precession is a changing pole star. Currently Polaris is extremely well suited to mark the position of the north celestial pole, as Polaris is a moderately bright star with a visual magnitude of 2.1 (variable), and is located about one degree from the pole, with no stars of similar brightness too close.
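The average year lengths of the three calendar cycles follow from simple day counts, which exact rational arithmetic confirms:

```python
from fractions import Fraction

# Average year length of each cycle, as exact fractions of a day.
metonic = Fraction(6940, 19)                        # 6,940 days in 19 years
callippic = Fraction(4 * 6940 - 1, 76)              # drop 1 day from 4 Metonic cycles
hipparchic = Fraction(4 * (4 * 6940 - 1) - 1, 304)  # drop 1 more from 4 Callippic cycles
```

`callippic` reduces to exactly 365+1/4 days, and `hipparchic` to 365+1/4−1/304, matching the figures in the text.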
The previous pole star was Kochab (Beta Ursae Minoris, β UMi, β Ursae Minoris), the brightest star in the bowl of the "Little Dipper", located 16 degrees from Polaris. It held that role from 1500 BC to AD 500. It was not quite as accurate in its day as Polaris is today. Today, Kochab and its neighbor Pherkad are referred to as the "Guardians of the Pole" (meaning Polaris). On the other hand, Thuban in the constellation Draco, which was the pole star in 3000 BC, is much less conspicuous at magnitude 3.67 (one-fifth as bright as Polaris); today it is invisible in light-polluted urban skies. When Polaris becomes the north star again around 27,800, it will then be farther away from the pole than it is now due to its proper motion, while in 23,600 BC it came closer to the pole. It is more difficult to find the south celestial pole in the sky at this moment, as that area is a particularly bland portion of the sky. The nominal south pole star is Sigma Octantis, which with magnitude 5.5 is barely visible to the naked eye even under ideal conditions. That will change from the 80th to the 90th centuries, however, when the south celestial pole travels through the False Cross. This situation also is seen on a star map. The orientation of the south pole is moving toward the Southern Cross constellation. For the last 2,000 years or so, the Southern Cross has pointed to the south celestial pole. As a consequence, the constellation is difficult to view from subtropical northern latitudes, unlike in the time of the ancient Greeks. The Southern Cross can be viewed from as far north as Miami (about 25° N), but only during the winter/early spring. Polar shift and equinoxes shift The images at right attempt to explain the relation between the precession of the Earth's axis and the shift in the equinoxes. 
These images show the position of the Earth's axis on the celestial sphere, a fictitious sphere which places the stars according to their position as seen from Earth, regardless of their actual distance. The first image shows the celestial sphere from the outside, with the constellations in mirror image. The second image shows the perspective of a near-Earth position as seen through a very wide angle lens (from which the apparent distortion arises). The rotation axis of the Earth describes, over a period of 25,700 years, a small circle among the stars near the top of the diagram, centered on the ecliptic north pole and with an angular radius of about 23.4°, an angle known as the obliquity of the ecliptic. The direction of precession is opposite to the daily rotation of the Earth on its axis. One axis shown in the diagram was the Earth's rotation axis 5,000 years ago, when it pointed to the star Thuban. The yellow axis, pointing to Polaris, marks the axis now. The equinoxes occur where the celestial equator intersects the ecliptic (red line), that is, where the Earth's axis is perpendicular to the line connecting the centers of the Sun and Earth. The term "equinox" here refers to a point on the celestial sphere so defined, rather than the moment in time when the Sun is overhead at the Equator (though the two meanings are related). When the axis precesses from one orientation to another, the equatorial plane of the Earth (indicated by the circular grid around the equator) moves. The celestial equator is just the Earth's equator projected onto the celestial sphere, so it moves as the Earth's equatorial plane moves, and the intersection with the ecliptic moves with it. The positions of the poles and equator on Earth do not change, only the orientation of the Earth against the fixed stars. As seen from the older grid, 5,000 years ago, the March equinox was close to the star Aldebaran in Taurus.
Now, as seen from the yellow grid, it has shifted to somewhere in the constellation of Pisces. Still pictures like these are only first approximations, as they do not take into account the variable speed of the precession, the variable obliquity of the ecliptic, the planetary precession (which is a slow rotation of the ecliptic plane itself, presently around an axis located on the plane, with longitude 174.8764°) and the proper motions of the stars. The precessional eras of each constellation, often known as "Great Months", are given, approximately, in the table below: Cause The precession of the equinoxes is caused by the gravitational forces of the Sun and the Moon, and to a lesser extent other bodies, on the Earth. It was first explained by Isaac Newton. Axial precession is similar to the precession of a spinning top. In both cases, the applied force is due to gravity. For a spinning top, this force tends to be almost parallel to the rotation axis initially and increases as the top slows down. For a gyroscope on a stand it can approach 90 degrees. For the Earth, however, the applied forces of the Sun and the Moon are closer to perpendicular to the axis of rotation. The Earth is not a perfect sphere but an oblate spheroid, with an equatorial diameter about 43 kilometers larger than its polar diameter. Because of the Earth's axial tilt, during most of the year the half of this bulge that is closest to the Sun is off-center, either to the north or to the south, and the far half is off-center on the opposite side. The gravitational pull on the closer half is stronger, since gravity decreases with the square of distance, so this creates a small torque on the Earth as the Sun pulls harder on one side of the Earth than the other. The axis of this torque is roughly perpendicular to the axis of the Earth's rotation so the axis of rotation precesses. If the Earth were a perfect sphere, there would be no precession.
This average torque is perpendicular to the direction in which the rotation axis is tilted away from the ecliptic pole, so that it does not change the axial tilt itself. The magnitude of the torque from the Sun (or the Moon) varies with the angle between the Earth's spin axis direction and that of the gravitational attraction. It approaches zero when they are perpendicular. For example, this happens at the equinoxes in the case of the interaction with the Sun. This can be seen to be the case since the near and far points are aligned with the gravitational attraction, so there is no torque due to the difference in gravitational attraction. Although the above explanation involved the Sun, the same explanation holds true for any object moving around the Earth, along or close to the ecliptic, notably, the Moon. The combined action of the Sun and the Moon is called the lunisolar precession. In addition to the steady progressive motion (resulting in a full circle in about 25,700 years) the Sun and Moon also cause small periodic variations, due to their changing positions. These oscillations, in both precessional speed and axial tilt, are known as the nutation. The most important term has a period of 18.6 years and an amplitude of 9.2 arcseconds. In addition to lunisolar precession, the actions of the other planets of the Solar System cause the whole ecliptic to rotate slowly around an axis which has an ecliptic longitude of about 174° measured on the instantaneous ecliptic. This so-called planetary precession shift amounts to a rotation of the ecliptic plane of 0.47 seconds of arc per year (more than a hundred times smaller than lunisolar precession). The sum of the two precessions is known as the general precession.
Equations The tidal force on Earth due to a perturbing body (Sun, Moon or planet) is expressed by Newton's law of universal gravitation, whereby the gravitational force of the perturbing body on the side of Earth nearest is said to be greater than the gravitational force on the far side by an amount proportional to the difference in the cubes of the distances between the near and far sides. If the gravitational force of the perturbing body acting on the mass of the Earth as a point mass at the center of Earth (which provides the centripetal force causing the orbital motion) is subtracted from the gravitational force of the perturbing body everywhere on the surface of Earth, what remains may be regarded as the tidal force. This gives the paradoxical notion of a force acting away from the satellite but in reality it is simply a lesser force toward that body due to the gradient in the gravitational field. For precession, this tidal force can be grouped into two forces which only act on the equatorial bulge outside of a mean spherical radius. This couple can be decomposed into two pairs of components, one pair parallel to Earth's equatorial plane toward and away from the perturbing body which cancel each other out, and another pair parallel to Earth's rotational axis, both toward the ecliptic plane. The latter pair of forces creates the following torque vector on Earth's equatorial bulge: T = (3GM/r³)(C − A) sin δ cos δ (sin α, −cos α, 0), where GM, standard gravitational parameter of the perturbing body r, geocentric distance to the perturbing body C, moment of inertia around Earth's axis of rotation A, moment of inertia around any equatorial diameter of Earth C − A, moment of inertia of Earth's equatorial bulge (C > A) δ, declination of the perturbing body (north or south of equator) α, right ascension of the perturbing body (east from March equinox).
The three unit vectors of the torque at the center of the Earth (top to bottom) are x on a line within the ecliptic plane (the intersection of Earth's equatorial plane with the ecliptic plane) directed toward the March equinox, y on a line in the ecliptic plane directed toward the summer solstice (90° east of x), and z on a line directed toward the north pole of the ecliptic. The value of the three sinusoidal terms in the direction of x for the Sun is a sine squared waveform varying from zero at the equinoxes (0°, 180°) to 0.36495 at the solstices (90°, 270°). The value in the direction of y for the Sun is a sine wave varying from zero at the four equinoxes and solstices to ±0.19364 (slightly more than half of the sine squared peak) halfway between each equinox and solstice with peaks slightly skewed toward the equinoxes (43.37°(−), 136.63°(+), 223.37°(−), 316.63°(+)). Both solar waveforms have about the same peak-to-peak amplitude and the same period, half of a revolution or half of a year. The value in the direction of z is zero. The average torque of the sine wave in the direction of y is zero for the Sun or Moon, so this component of the torque does not affect precession. The average torque of the sine squared waveform in the direction of x for the Sun or Moon is: Tx = (3GM/2)(C − A) sin ε cos ε / (a³(1 − e²)^(3/2)), where a, semimajor axis of Earth's (Sun's) orbit or Moon's orbit e, eccentricity of Earth's (Sun's) orbit or Moon's orbit and 1/2 accounts for the average of the sine squared waveform, 1/(a³(1 − e²)^(3/2)) accounts for the average distance cubed of the Sun or Moon from Earth over the entire elliptical orbit, and ε (the angle between the equatorial plane and the ecliptic plane) is the maximum value of δ for the Sun and the average maximum value for the Moon over an entire 18.6 year cycle. Precession is: dψ/dt = Tx/(Cω sin ε), where ω is Earth's angular velocity and Cω is Earth's angular momentum.
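The first-order solar precession rate implied by the quantities defined above can be evaluated numerically. The parameter values in this sketch are standard J2000-era figures supplied here for illustration; they are assumptions, not values given in this section:

```python
import math

# Illustrative J2000-era values (assumed here, not taken from the text).
GM_SUN = 1.32712440018e20     # m^3/s^2, heliocentric gravitational parameter
A_ORBIT = 1.49598023e11       # m, semimajor axis of Earth's orbit
ECC = 0.0167086               # eccentricity of Earth's orbit
EPS = math.radians(23.43928)  # obliquity of the ecliptic
DYN_ELLIPTICITY = 0.003273763 # (C - A)/C, Earth's dynamical ellipticity
OMEGA = 7.292115e-5           # rad/s, Earth's rotation rate

# First-order solar precession rate in rad/s: the sin(eps) in the average
# torque cancels against the sin(eps) in dpsi/dt = Tx / (C*omega*sin(eps)).
dpsi_dt = (1.5 * GM_SUN / (A_ORBIT**3 * (1 - ECC**2) ** 1.5)
           * DYN_ELLIPTICITY * math.cos(EPS) / OMEGA)

# Convert rad/s to arcseconds per Julian year.
ARCSEC_PER_RAD = 1_296_000 / (2 * math.pi)
SECONDS_PER_YEAR = 3.15576e7
rate_arcsec_per_year = dpsi_dt * ARCSEC_PER_RAD * SECONDS_PER_YEAR
```

With these inputs the solar contribution comes out near 16″/a, in line with the dψS/dt figure quoted below.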
Thus the first order component of precession due to the Sun is: dψS/dt = [3GM/(2a³(1 − e²)^(3/2))]S [(C − A) cos ε/(Cω)]E whereas that due to the Moon is: dψL/dt = [3GM(1 − (3/2) sin² i)/(2a³(1 − e²)^(3/2))]L [(C − A) cos ε/(Cω)]E where i is the angle between the plane of the Moon's orbit and the ecliptic plane. In these two equations, the Sun's parameters are within square brackets labeled S, the Moon's parameters are within square brackets labeled L, and the Earth's parameters are within square brackets labeled E. The term (1 − (3/2) sin² i) accounts for the inclination of the Moon's orbit relative to the ecliptic. The term (C − A)/C is Earth's dynamical ellipticity or flattening, which is adjusted to the observed precession because Earth's internal structure is not known with sufficient detail. If Earth were homogeneous the term would equal its third eccentricity squared, (a² − c²)/(a² + c²), where a is the equatorial radius and c is the polar radius. Applicable parameters for J2000.0, rounded to seven significant digits (excluding leading 1), yield: dψS/dt = 2.450183 × 10⁻¹² rad/s dψL/dt = 5.334529 × 10⁻¹² rad/s both of which must be converted to ″/a (arcseconds/annum) by the number of arcseconds in 2π radians (1.296 × 10⁶″ per 2π radians) and the number of seconds in one annum (a Julian year) (3.15576 × 10⁷ s/a): dψS/dt = 15.948788″/a   vs   15.948870″/a from Williams dψL/dt = 34.723638″/a   vs   34.457698″/a from Williams. The solar equation is a good representation of precession due to the Sun because Earth's orbit is close to an ellipse, being only slightly perturbed by the other planets. The lunar equation is not as good a representation of precession due to the Moon because the Moon's orbit is greatly distorted by the Sun and neither the radius nor the eccentricity is constant over the year. Values Simon Newcomb's calculation at the end of the 19th century for general precession (p) in longitude gave a value of 5,025.64 arcseconds per tropical century, and was the generally accepted value until artificial satellites delivered more accurate observations and electronic computers allowed more elaborate models to be calculated.
Jay Henry Lieske developed an updated theory in 1976, where p equals 5,029.0966 arcseconds (or 1.3969713 degrees) per Julian century. Modern techniques such as VLBI and LLR allowed further refinements, and the International Astronomical Union adopted a new constant value in 2000, and new computation methods and polynomial expressions in 2003 and 2006; the accumulated precession is: pA = 5,028.796195T + 1.1054348T² + higher order terms, in arcseconds, with T, the time in Julian centuries (that is, 36,525 days) since the epoch of 2000. The rate of precession is the derivative of that: p = 5,028.796195 + 2.2108696T + higher order terms. The constant term of this speed (5,028.796195 arcseconds per century in the above equation) corresponds to one full precession circle in 25,771.57534 years (one full circle of 360 degrees divided by 50.28796195 arcseconds per year), although some other sources put the value at 25,771.4 years, leaving a small uncertainty. The precession rate is not a constant, but is (at the moment) slowly increasing over time, as indicated by the linear (and higher order) terms in T. In any case it must be stressed that this formula is only valid over a limited time period. It is a polynomial expression centered on the J2000 datum, empirically fitted to observational data, not on a deterministic model of the Solar System. It is clear that if T gets large enough (far in the future or far in the past), the T² term will dominate and p will go to very large values. In reality, more elaborate calculations on the numerical model of the Solar System show that the precessional rate has a period of about 41,000 years, the same as the obliquity of the ecliptic. That is, p = A + BT + CT² + … is an approximation of p = a + b sin (2πT/P), where P is the 41,000-year period.
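The arithmetic connecting the polynomial's constant term to the precession period can be sketched directly (coefficients are the ones quoted above; only the constant and linear terms are kept):

```python
# Rate of precession from the quoted polynomial, and the cycle length
# implied by its constant term.
def precession_rate(T):
    """Arcseconds per Julian century; T in Julian centuries since J2000.0
    (constant and linear terms only)."""
    return 5028.796195 + 2.2108696 * T

arcsec_per_year = precession_rate(0) / 100        # 50.28796195 arcsec/a at J2000.0
full_circle_years = 360 * 3600 / arcsec_per_year  # 1,296,000 arcsec in a circle

print(round(full_circle_years, 2))  # -> 25771.58, i.e. the 25,771.57534-year cycle
```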
Theoretical models may calculate the constants (coefficients) corresponding to the higher powers of T, but since it is impossible for a polynomial to match a periodic function over all numbers, the difference in all such approximations will grow without bound as T increases. Sufficient accuracy can be obtained over a limited time span by fitting a high enough order polynomial to observation data, rather than a necessarily imperfect dynamic numerical model. For present flight trajectory calculations of artificial satellites and spacecraft, the polynomial method gives better accuracy. In that respect, the International Astronomical Union has chosen the best-developed available theory. For up to a few centuries into the past and future, none of the formulas used diverge very much. For up to a few thousand years in the past and the future, most agree to some accuracy. For eras farther out, discrepancies become too large – the exact rate and period of precession may not be computed using these polynomials even for a single whole precession period. The precession of Earth's axis is a very slow effect, but at the level of accuracy at which astronomers work, it does need to be taken into account on a daily basis. Although the precession and the tilt of Earth's axis (the obliquity of the ecliptic) are calculated from the same theory and are thus related one to the other, the two movements act independently of each other, moving in opposite directions. Precession rate exhibits a secular decrease due to tidal dissipation from 59"/a to 45"/a (a = annum = Julian year) during the 500 million year period centered on the present. 
After short-term fluctuations (tens of thousands of years) are averaged out, the long-term trend can be approximated by the following polynomials for negative and positive time from the present in ″/a, where T is in billions of Julian years (Ga):
p = 50.475838 − 26.368583T + 21.890862T² (negative time)
p = 50.475838 − 27.000654T + 15.603265T² (positive time)
This gives an average cycle length now of 25,676 years. Precession will be greater than p by the small amount of +0.135052″/a between and . The jump to this excess over p will occur in only beginning now because the secular decrease in precession is beginning to cross a resonance in Earth's orbit caused by the other planets. According to W. R. Ward, in about 1,500 million years, when the distance of the Moon, which is continuously increasing from tidal effects, has increased from the current 60.3 to approximately 66.5 Earth radii, resonances from planetary effects will push the precession period to 49,000 years at first, and then, when the Moon reaches 68 Earth radii in about 2,000 million years, to 69,000 years. This will be associated with wild swings in the obliquity of the ecliptic as well. Ward, however, used the abnormally large modern value for tidal dissipation. Using the 620-million-year average provided by tidal rhythmites of about half the modern value, these resonances will not be reached until about 3,000 and 4,000 million years, respectively. However, due to the gradually increasing luminosity of the Sun, the oceans of the Earth will have vaporized before that time (about 2,100 million years from now).
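The two long-term polynomials can be evaluated to recover the figures quoted above; the sketch assumes, following the text's ordering, that the first polynomial applies to negative time (the past) and the second to positive time (the future):

```python
# Long-term average precession rate in arcsec per Julian year; T is in
# billions of Julian years from the present (negative = past).
def p_rate(T):
    if T < 0:
        return 50.475838 - 26.368583 * T + 21.890862 * T**2
    return 50.475838 - 27.000654 * T + 15.603265 * T**2

cycle_now = 360 * 3600 / p_rate(0)  # both branches agree at T = 0
print(round(cycle_now))             # -> 25676 years, the current average cycle

# The quoted secular decrease, roughly 59 to 45 arcsec/a over the
# 500-million-year window centered on the present:
print(round(p_rate(-0.25), 1), round(p_rate(0.25), 1))
```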
Physical sciences
Celestial mechanics
Astronomy
72585
https://en.wikipedia.org/wiki/Weathering
Weathering
Weathering is the deterioration of rocks, soils and minerals (as well as wood and artificial materials) through contact with water, atmospheric gases, sunlight, and biological organisms. It occurs in situ (on-site, with little or no movement), and so is distinct from erosion, which involves the transport of rocks and minerals by agents such as water, ice, snow, wind, waves and gravity. Weathering processes are either physical or chemical. The former involves the breakdown of rocks and soils through such mechanical effects as heat, water, ice and wind. The latter covers the reactions of water, atmospheric gases and biologically produced chemicals with rocks and soils. Water is the principal agent behind both kinds, though atmospheric oxygen and carbon dioxide and the activities of biological organisms are also important. Biological chemical weathering is also called biological weathering. The materials left after the rock breaks down combine with organic material to create soil. Many of Earth's landforms and landscapes are the result of weathering, erosion and redeposition. Weathering is a crucial part of the rock cycle; sedimentary rock, the product of weathered rock, covers 66% of the Earth's continents and much of the ocean floor. Physical Physical weathering, also called mechanical weathering or disaggregation, is the class of processes that causes the disintegration of rocks without chemical change. Physical weathering involves the breakdown of rocks into smaller fragments through processes such as expansion and contraction, mainly due to temperature changes. Two types of physical breakdown are freeze-thaw weathering and thermal fracturing. Pressure release can also cause weathering without temperature change. It is usually much less important than chemical weathering, but can be significant in subarctic or alpine environments. Furthermore, chemical and physical weathering often go hand in hand.
For example, cracks extended by physical weathering will increase the surface area exposed to chemical action, thus amplifying the rate of disintegration. Frost weathering is the most important form of physical weathering. Next in importance is wedging by plant roots, which sometimes enter cracks in rocks and pry them apart. The burrowing of worms or other animals may also help disintegrate rock, as can "plucking" by lichens. Frost Frost weathering is the collective name for those forms of physical weathering that are caused by the formation of ice within rock outcrops. It was long believed that the most important of these is frost wedging, which is the widening of cracks or joints in rocks resulting from the expansion of porewater when it freezes. A growing body of theoretical and experimental work suggests that ice segregation, whereby supercooled water migrates to lenses of ice forming within the rock, is the more important mechanism. When water freezes, its volume increases by 9.2%. This expansion can theoretically generate pressures greater than , though a more realistic upper limit is . This is still much greater than the tensile strength of granite, which is about . This makes frost wedging, in which pore water freezes and its volumetric expansion fractures the enclosing rock, appear to be a plausible mechanism for frost weathering. However, ice will simply expand out of a straight, open fracture before it can generate significant pressure. Thus, frost wedging can only take place in small tortuous fractures. The rock must also be almost completely saturated with water, or the ice will simply expand into the air spaces in the unsaturated rock without generating much pressure. These conditions are unusual enough that frost wedging is unlikely to be the dominant process of frost weathering.
Frost wedging is most effective where there are daily cycles of melting and freezing of water-saturated rock, so it is unlikely to be significant in the tropics, in polar regions or in arid climates. Ice segregation is a less well characterized mechanism of physical weathering. It takes place because ice grains always have a surface layer, often just a few molecules thick, that resembles liquid water more than solid ice, even at temperatures well below the freezing point. This premelted liquid layer has unusual properties, including a strong tendency to draw in water by capillary action from warmer parts of the rock. This results in growth of the ice grain that puts considerable pressure on the surrounding rock, up to ten times greater than is likely with frost wedging. This mechanism is most effective in rock whose temperature averages just below the freezing point. Ice segregation results in growth of ice needles and ice lenses within fractures in the rock and parallel to the rock surface, which gradually pry the rock apart. Thermal stress Thermal stress weathering results from the expansion and contraction of rock due to temperature changes. Thermal stress weathering is most effective when the heated portion of the rock is buttressed by surrounding rock, so that it is free to expand in only one direction. Thermal stress weathering comprises two main types, thermal shock and thermal fatigue. Thermal shock takes place when the stresses are so great that the rock cracks immediately, but this is uncommon. More typical is thermal fatigue, in which the stresses are not great enough to cause immediate rock failure, but repeated cycles of stress and release gradually weaken the rocks. Block disintegration, when rock joints weaken from temperature fluctuations and the rock splits into rectangular blocks, can be attributed to thermal fatigue.
Thermal stress weathering is an important mechanism in deserts, where there is a large diurnal temperature range, hot in the day and cold at night. As a result, thermal stress weathering is sometimes called insolation weathering, but this is misleading. Thermal stress weathering can be caused by any large change of temperature, and not just intense solar heating. It is likely as important in cold climates as in hot, arid climates. Wildfires can also be a significant cause of rapid thermal stress weathering. The importance of thermal stress weathering has long been discounted by geologists, based on experiments in the early 20th century that seemed to show that its effects were unimportant. These experiments have since been criticized as unrealistic, since the rock samples were small, were polished (which reduces nucleation of fractures), and were not buttressed. These small samples were thus able to expand freely in all directions when heated in experimental ovens, which failed to produce the kinds of stress likely in natural settings. The experiments were also more sensitive to thermal shock than thermal fatigue, but thermal fatigue is likely the more important mechanism in nature. Geomorphologists have begun to reemphasize the importance of thermal stress weathering, particularly in cold climates. Pressure release Pressure release or unloading is a form of physical weathering seen when deeply buried rock is exhumed. Intrusive igneous rocks, such as granite, are formed deep beneath the Earth's surface. They are under tremendous pressure because of the overlying rock material. When erosion removes the overlying rock material, these intrusive rocks are exposed and the pressure on them is released. The outer parts of the rocks then tend to expand. The expansion sets up stresses which cause fractures parallel to the rock surface to form. Over time, sheets of rock break away from the exposed rocks along the fractures, a process known as exfoliation. 
Exfoliation due to pressure release is also known as sheeting. As with thermal weathering, pressure release is most effective in buttressed rock. Here the differential stress directed toward the unbuttressed surface can be as high as , easily enough to shatter rock. This mechanism is also responsible for spalling in mines and quarries, and for the formation of joints in rock outcrops. Retreat of an overlying glacier can also lead to exfoliation due to pressure release. This can be enhanced by other physical wearing mechanisms. Salt-crystal growth Salt crystallization (also known as salt weathering, salt wedging or haloclasty) causes disintegration of rocks when saline solutions seep into cracks and joints in the rocks and evaporate, leaving salt crystals behind. As with ice segregation, the surfaces of the salt grains draw in additional dissolved salts through capillary action, causing the growth of salt lenses that exert high pressure on the surrounding rock. Sodium and magnesium salts are the most effective at producing salt weathering. Salt weathering can also take place when pyrite in sedimentary rock is chemically weathered to iron(II) sulfate and gypsum, which then crystallize as salt lenses. Salt crystallization can take place wherever salts are concentrated by evaporation. It is thus most common in arid climates where strong heating causes strong evaporation and along coasts. Salt weathering is likely important in the formation of tafoni, a class of cavernous rock weathering structures. Biomechanical relationship Living organisms may contribute to mechanical weathering, as well as chemical weathering (see § Biological weathering below). Lichens and mosses grow on essentially bare rock surfaces and create a more humid chemical microenvironment. The attachment of these organisms to the rock surface enhances physical as well as chemical breakdown of the surface microlayer of the rock. 
Lichens have been observed to pry mineral grains loose from bare shale with their hyphae (rootlike attachment structures), a process described as plucking, and to pull the fragments into their body, where the fragments then undergo a process of chemical weathering not unlike digestion. On a larger scale, seedlings sprouting in a crevice and plant roots exert physical pressure as well as providing a pathway for water and chemical infiltration. Chemical Most rock forms at elevated temperature and pressure, and the minerals making up the rock are often chemically unstable in the relatively cool, wet, and oxidizing conditions typical of the Earth's surface. Chemical weathering takes place when water, oxygen, carbon dioxide, and other chemical substances react with rock to change its composition. These reactions convert some of the original primary minerals in the rock to secondary minerals, remove other substances as solutes, and leave the most stable minerals as a chemically unchanged resistate. In effect, chemical weathering changes the original set of minerals in the rock into a new set of minerals that is in closer equilibrium with surface conditions. True equilibrium is rarely reached, because weathering is a slow process, and leaching carries away solutes produced by weathering reactions before they can accumulate to equilibrium levels. This is particularly true in tropical environments. Water is the principal agent of chemical weathering, converting many primary minerals to clay minerals or hydrated oxides via reactions collectively described as hydrolysis. Oxygen is also important, acting to oxidize many minerals, as is carbon dioxide, whose weathering reactions are described as carbonation. The process of mountain block uplift is important in exposing new rock strata to the atmosphere and moisture, enabling important chemical weathering to occur; significant amounts of Ca2+ and other ions are released into surface waters.
Dissolution Dissolution (also called simple solution or congruent dissolution) is the process in which a mineral dissolves completely without producing any new solid substance. Rainwater easily dissolves soluble minerals, such as halite or gypsum, but can also dissolve highly resistant minerals such as quartz, given sufficient time. Water breaks the bonds between atoms in the crystal; the overall reaction for dissolution of quartz is SiO2 + 2 H2O → H4SiO4. The dissolved quartz takes the form of silicic acid. A particularly important form of dissolution is carbonate dissolution, in which atmospheric carbon dioxide enhances solution weathering. Carbonate dissolution affects rocks containing calcium carbonate, such as limestone and chalk. It takes place when rainwater combines with carbon dioxide to form carbonic acid, a weak acid, which dissolves calcium carbonate (limestone) and forms soluble calcium bicarbonate. Despite slower reaction kinetics, this process is thermodynamically favored at low temperature, because colder water holds more dissolved carbon dioxide gas (due to the retrograde solubility of gases). Carbonate dissolution is therefore an important feature of glacial weathering. Carbonate dissolution involves the following steps:
CO2 + H2O → H2CO3 (carbon dioxide + water → carbonic acid)
H2CO3 + CaCO3 → Ca(HCO3)2 (carbonic acid + calcium carbonate → calcium bicarbonate)
Carbonate dissolution on the surface of well-jointed limestone produces a dissected limestone pavement. This process is most effective along the joints, widening and deepening them. In unpolluted environments, the pH of rainwater due to dissolved carbon dioxide is around 5.6. Acid rain occurs when gases such as sulfur dioxide and nitrogen oxides are present in the atmosphere. These oxides react in the rain water to produce stronger acids and can lower the pH to 4.5 or even 3.0.
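The ~5.6 pH of unpolluted rain can be estimated from the carbonic-acid equilibrium described above. The sketch below uses assumed textbook constants not given in the text (Henry's law constant ~0.034 mol/L/atm at 25 °C, first dissociation constant Ka1 ~4.45×10⁻⁷, pre-industrial CO2 partial pressure ~3.5×10⁻⁴ atm), so it is an illustration rather than a definitive calculation:

```python
import math

# Estimate the pH of rainwater in equilibrium with atmospheric CO2.
KH = 0.034      # mol/L/atm, CO2 solubility in water at 25 C (assumed value)
KA1 = 4.45e-7   # first dissociation constant of carbonic acid (assumed value)
PCO2 = 3.5e-4   # atm, approximate pre-industrial CO2 partial pressure (assumed)

co2_aq = KH * PCO2                    # dissolved CO2 (effectively H2CO3)
h_plus = math.sqrt(KA1 * co2_aq)      # [H+] from H2CO3 <-> H+ + HCO3-
ph = -math.log10(h_plus)
print(round(ph, 1))                   # close to the quoted pH of about 5.6
```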
Sulfur dioxide, SO2, comes from volcanic eruptions or from fossil fuels, and can become sulfuric acid within rainwater, which can cause solution weathering to the rocks on which it falls. Hydrolysis and carbonation Hydrolysis (also called incongruent dissolution) is a form of chemical weathering in which only part of a mineral is taken into solution. The rest of the mineral is transformed into a new solid material, such as a clay mineral. For example, forsterite (magnesium olivine) is hydrolyzed into solid brucite and dissolved silicic acid:
Mg2SiO4 + 4 H2O ⇌ 2 Mg(OH)2 + H4SiO4 (forsterite + water ⇌ brucite + silicic acid)
Most hydrolysis during weathering of minerals is acid hydrolysis, in which protons (hydrogen ions), which are present in acidic water, attack chemical bonds in mineral crystals. The bonds between different cations and oxygen ions in minerals differ in strength, and the weakest will be attacked first. The result is that minerals in igneous rock weather in roughly the same order in which they were originally formed (Bowen's Reaction Series). Relative bond strength is shown in the following table: This table is only a rough guide to order of weathering. Some minerals, such as illite, are unusually stable, while silica is unusually unstable given the strength of the silicon–oxygen bond. Carbon dioxide that dissolves in water to form carbonic acid is the most important source of protons, but organic acids are also important natural sources of acidity. Acid hydrolysis from dissolved carbon dioxide is sometimes described as carbonation, and can result in weathering of the primary minerals to secondary carbonate minerals. For example, weathering of forsterite can produce magnesite instead of brucite via the reaction:
Mg2SiO4 + 2 CO2 + 2 H2O ⇌ 2 MgCO3 + H4SiO4 (forsterite + carbon dioxide + water ⇌ magnesite + silicic acid in solution)
Carbonic acid is consumed by silicate weathering, resulting in more alkaline solutions because of the bicarbonate.
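The atom balance of the two forsterite reactions above can be verified mechanically. The parser below is a minimal sketch, handling element symbols with optional counts and one level of parenthesised groups, which is enough for these species:

```python
from collections import Counter
import re

def atoms(formula, mult=1):
    """Count atoms in a simple formula; expands groups like (OH)2."""
    while "(" in formula:
        formula = re.sub(r"\(([^()]+)\)(\d+)",
                         lambda m: m.group(1) * int(m.group(2)), formula)
    counts = Counter()
    for sym, n in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[sym] += (int(n) if n else 1) * mult
    return counts

def side(terms):
    """Total atom counts for one side, given (coefficient, formula) pairs."""
    total = Counter()
    for coeff, formula in terms:
        total.update(atoms(formula, coeff))
    return total

# Mg2SiO4 + 4 H2O = 2 Mg(OH)2 + H4SiO4   (hydrolysis to brucite)
hydrolysis_ok = side([(1, "Mg2SiO4"), (4, "H2O")]) == side([(2, "Mg(OH)2"), (1, "H4SiO4")])
# Mg2SiO4 + 2 CO2 + 2 H2O = 2 MgCO3 + H4SiO4   (carbonation to magnesite)
carbonation_ok = side([(1, "Mg2SiO4"), (2, "CO2"), (2, "H2O")]) == side([(2, "MgCO3"), (1, "H4SiO4")])
print(hydrolysis_ok, carbonation_ok)  # -> True True
```

Both comparisons hold, confirming the stoichiometry of the carbonation reaction as quoted.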
This is an important reaction in controlling the amount of CO2 in the atmosphere and can affect climate. Aluminosilicates containing highly soluble cations, such as sodium or potassium ions, will release the cations as dissolved bicarbonates during acid hydrolysis:
2 KAlSi3O8 + 2 H2CO3 + 9 H2O ⇌ Al2Si2O5(OH)4 + 4 H4SiO4 + 2 K+ + 2 HCO3− (orthoclase (aluminosilicate feldspar) + carbonic acid + water ⇌ kaolinite (a clay mineral) + silicic acid in solution + potassium and bicarbonate ions in solution)
Oxidation Within the weathering environment, chemical oxidation of a variety of metals occurs. The most commonly observed is the oxidation of Fe2+ (iron) by oxygen and water to form Fe3+ oxides and hydroxides such as goethite, limonite, and hematite. This gives the affected rocks a reddish-brown coloration on the surface which crumbles easily and weakens the rock. Many other metallic ores and minerals oxidize and hydrate to produce colored deposits, as does sulfur during the weathering of sulfide minerals such as chalcopyrite (CuFeS2), which oxidizes to copper hydroxide and iron oxides. Hydration Mineral hydration is a form of chemical weathering that involves the rigid attachment of water molecules or H+ and OH- ions to the atoms and molecules of a mineral. No significant dissolution takes place. For example, iron oxides are converted to iron hydroxides and the hydration of anhydrite forms gypsum. Bulk hydration of minerals is secondary in importance to dissolution, hydrolysis, and oxidation, but hydration of the crystal surface is the crucial first step in hydrolysis. A fresh surface of a mineral crystal exposes ions whose electrical charge attracts water molecules. Some of these molecules break into H+ that bonds to exposed anions (usually oxygen) and OH- that bonds to exposed cations. This further disrupts the surface, making it susceptible to various hydrolysis reactions. Additional protons replace cations exposed on the surface, freeing the cations as solutes.
As cations are removed, silicon-oxygen and silicon-aluminium bonds become more susceptible to hydrolysis, freeing silicic acid and aluminium hydroxides to be leached away or to form clay minerals. Laboratory experiments show that weathering of feldspar crystals begins at dislocations or other defects on the surface of the crystal, and that the weathering layer is only a few atoms thick. Diffusion within the mineral grain does not appear to be significant. Biological Mineral weathering can also be initiated or accelerated by soil microorganisms. Soil organisms make up about 10 mg/cm3 of typical soils, and laboratory experiments have demonstrated that albite and muscovite weather twice as fast in live versus sterile soil. Lichens on rocks are among the most effective biological agents of chemical weathering. For example, an experimental study on hornblende granite in New Jersey, US, demonstrated a 3- to 4-fold increase in weathering rate under lichen-covered surfaces compared to recently exposed bare rock surfaces. The most common forms of biological weathering result from the release of chelating compounds (such as certain organic acids and siderophores) and of carbon dioxide and organic acids by plants. Roots can build up the carbon dioxide level to 30% of all soil gases, aided by adsorption of CO2 on clay minerals and the very slow diffusion rate of CO2 out of the soil. The CO2 and organic acids help break down aluminium- and iron-containing compounds in the soils beneath them. Roots have a negative electrical charge balanced by protons in the soil next to the roots, and these can be exchanged for essential nutrient cations such as potassium. Decaying remains of dead plants in soil may form organic acids which, when dissolved in water, cause chemical weathering. Chelating compounds, mostly low molecular weight organic acids, are capable of removing metal ions from bare rock surfaces, with aluminium and silicon being particularly susceptible.
The ability to break down bare rock allows lichens to be among the first colonizers of dry land. The accumulation of chelating compounds can easily affect surrounding rocks and soils, and may lead to podsolisation of soils. The symbiotic mycorrhizal fungi associated with tree root systems can release inorganic nutrients from minerals such as apatite or biotite and transfer these nutrients to the trees, thus contributing to tree nutrition. It has also recently been shown that bacterial communities can affect mineral stability, leading to the release of inorganic nutrients. A large range of bacterial strains or communities from diverse genera have been reported to be able to colonize mineral surfaces or to weather minerals, and for some of them a plant-growth-promoting effect has been demonstrated. The demonstrated or hypothesised mechanisms used by bacteria to weather minerals include several oxidoreduction and dissolution reactions as well as the production of weathering agents, such as protons, organic acids and chelating molecules. Ocean floor Weathering of basaltic oceanic crust differs in important respects from weathering in the atmosphere. Weathering is relatively slow, with basalt becoming less dense at a rate of about 15% per 100 million years. The basalt becomes hydrated, and is enriched in total and ferric iron, magnesium, and sodium at the expense of silica, titanium, aluminum, ferrous iron, and calcium. Buildings Buildings made of any stone, brick or concrete are susceptible to the same weathering agents as any exposed rock surface. Statues, monuments and ornamental stonework can likewise be badly damaged by natural weathering processes. This is accelerated in areas severely affected by acid rain. Accelerated building weathering may be a threat to the environment and occupant safety.
Design strategies can moderate the impact of environmental effects, such as the use of pressure-moderated rain screening, ensuring that the HVAC system is able to effectively control humidity accumulation, and selecting concrete mixes with reduced water content to minimize the impact of freeze-thaw cycles. Soil Granitic rock, the most abundant crystalline rock exposed at the Earth's surface, begins weathering with the destruction of hornblende. Biotite then weathers to vermiculite, and finally oligoclase and microcline are destroyed. All are converted into a mixture of clay minerals and iron oxides. The resulting soil is depleted in calcium, sodium, and ferrous iron compared with the bedrock, and magnesium is reduced by 40% and silicon by 15%. At the same time, the soil is enriched in aluminium and potassium by at least 50%; by titanium, whose abundance triples; and by ferric iron, whose abundance increases by an order of magnitude compared with the bedrock. Basaltic rock is more easily weathered than granitic rock due to its formation at higher temperatures and drier conditions. The fine grain size and presence of volcanic glass also hasten weathering. In tropical settings, it rapidly weathers to clay minerals, aluminium hydroxides, and titanium-enriched iron oxides. Because most basalt is relatively poor in potassium, the basalt weathers directly to potassium-poor montmorillonite, then to kaolinite. Where leaching is continuous and intense, as in rain forests, the final weathering product is bauxite, the principal ore of aluminium. Where rainfall is intense but seasonal, as in monsoon climates, the final weathering product is iron- and titanium-rich laterite. Conversion of kaolinite to bauxite occurs only with intense leaching, as ordinary river water is in equilibrium with kaolinite. Soil formation requires between 100 and 1,000 years, a brief interval in geologic time. As a result, some formations show numerous paleosol (fossil soil) beds.
For example, the Willwood Formation of Wyoming contains over 1,000 paleosol layers in a section representing 3.5 million years of geologic time. Paleosols have been identified in formations as old as Archean (over 2.5 billion years in age). They are difficult to recognize in the geologic record. Indications that a sedimentary bed is a paleosol include a gradational lower boundary and sharp upper boundary, the presence of much clay, poor sorting with few sedimentary structures, rip-up clasts in overlying beds, and desiccation cracks containing material from higher beds. The degree of weathering of soil can be expressed as the chemical index of alteration, defined as 100 Al2O3/(Al2O3 + CaO + Na2O + K2O), in molecular proportions. This varies from 47 for unweathered upper crust rock to 100 for fully weathered material. Wood, paint and plastic Wood can be physically and chemically weathered by hydrolysis and other processes relevant to minerals and is highly susceptible to ultraviolet radiation from sunlight. This induces photochemical reactions that degrade its surface. These also significantly weather paint and plastics. Gallery
Physical sciences
Geomorphology
72660
https://en.wikipedia.org/wiki/Commensalism
Commensalism
Commensalism is a long-term biological interaction (symbiosis) in which members of one species gain benefits while those of the other species neither benefit nor are harmed. This is in contrast with mutualism, in which both organisms benefit from each other; amensalism, where one is harmed while the other is unaffected; and parasitism, where one is harmed and the other benefits. The commensal (the species that benefits from the association) may obtain nutrients, shelter, support, or locomotion from the host species, which is substantially unaffected. The commensal relation is often between a larger host and a smaller commensal; the host organism is unmodified, whereas the commensal species may show great structural adaptation consistent with its habits, as in the remoras that ride attached to sharks and other fishes. Remoras feed on their hosts' fecal matter, while pilot fish feed on the leftovers of their hosts' meals. Numerous birds perch on bodies of large mammal herbivores or feed on the insects turned up by grazing mammals. Etymology The word "commensalism" is derived from the word "commensal", meaning "eating at the same table" in human social interaction, which in turn comes through French from the Medieval Latin commensalis, meaning "sharing a table", from the prefix com-, meaning "together", and mensa, meaning "table" or "meal". Commensality, at the Universities of Oxford and Cambridge, refers to professors eating at the same table as students (as they live in the same "college"). Pierre-Joseph van Beneden introduced the term "commensalism" in 1876. Examples of commensal relationships The commensal pathway was traversed by animals that fed on refuse around human habitats or by animals that preyed on other animals drawn to human camps. Those animals established a commensal relationship with humans in which the animals benefited but the humans received little benefit or harm. 
Those animals that were most capable of taking advantage of the resources associated with human camps would have been the 'tamer' individuals: less aggressive, with shorter fight-or-flight distances. Later, these animals developed closer social or economic bonds with humans and led to a domestic relationship. The leap from a synanthropic population to a domestic one could only have taken place after the animals had progressed from anthropophily to habituation to commensalism and partnership, at which point the establishment of a reciprocal relationship between animal and human would have laid the foundation for domestication, including captivity and then human-controlled breeding. From this perspective, animal domestication is a coevolutionary process in which a population responds to selective pressure while adapting to a novel niche that includes another species with evolving behaviors. Dogs The dog was the first domesticated animal, and was domesticated and widely established across Eurasia before the end of the Pleistocene, well before the cultivation of crops or the domestication of other animals. The dog is often hypothesised to be a classic example of a domestic animal that likely traveled a commensal pathway into domestication. Archaeological evidence, such as the Bonn-Oberkassel dog dating to ~14,000 BP, supports the hypothesis that dog domestication preceded the emergence of agriculture and began close to the Last Glacial Maximum when hunter-gatherers preyed on megafauna. The wolves more likely drawn to human camps were the less-aggressive, subdominant pack members with lowered flight response and higher stress thresholds; being less wary around humans, they were better candidates for domestication. Proto-dogs might have taken advantage of carcasses left on site by early hunters, assisted in the capture of prey, or provided defense from large competing predators at kills.
However, the extent to which proto-domestic wolves could have become dependent on this way of life prior to domestication and without human provisioning is unclear and highly debated. In contrast, cats may have become fully dependent on a commensal lifestyle before being domesticated by preying on other commensal animals, such as rats and mice, without any human provisioning. Debate over the extent to which some wolves were commensal with humans prior to domestication stems from debate over the level of human intentionality in the domestication process, which remains untested. The earliest sign of domestication in dogs was the neotenization of skull morphology and the shortening of snout length that results in tooth crowding, reduction in tooth size, and a reduction in the number of teeth, which has been attributed to the strong selection for reduced aggression. This process may have begun during the initial commensal stage of dog domestication, even before humans began to be active partners in the process. A mitochondrial, microsatellite, and Y-chromosome assessment of two wolf populations in North America combined with satellite telemetry data revealed significant genetic and morphological differences between one population that migrated with and preyed upon caribou and another territorial ecotype population that remained in a boreal coniferous forest. Although these two populations spend a period of the year in the same place, and though there was evidence of gene flow between them, the difference in prey-habitat specialization has been sufficient to maintain genetic and even coloration divergence. A different study has identified the remains of a population of extinct Pleistocene Beringian wolves with unique mitochondrial signatures. The skull shape, tooth wear, and isotopic signatures suggested these remains were derived from a population of specialist megafauna hunters and scavengers that became extinct while less specialized wolf ecotypes survived. 
Analogous to the modern wolf ecotype that has evolved to track and prey upon caribou, a Pleistocene wolf population could have begun following mobile hunter-gatherers, thus slowly acquiring genetic and phenotypic differences that would have allowed them to more successfully adapt to the human habitat. Aspergillus and Staphylococcus Numerous genera of bacteria and fungi live on and in the human body as part of its natural flora. The fungal genus Aspergillus is capable of living under considerable environmental stress, and thus is capable of colonising the upper gastrointestinal tract, where relatively few examples of the body's gut flora can survive due to the highly acidic or alkaline conditions produced by gastric acid and digestive juices. While Aspergillus normally produces no symptoms, in individuals who are immunocompromised or suffering from existing conditions such as tuberculosis, a condition called aspergillosis can occur, in which populations of Aspergillus grow out of control. Staphylococcus aureus, a common bacterial species, is best known for its numerous pathogenic strains, which can cause a range of illnesses and conditions. However, many strains of S. aureus are metabiotic commensals, and are present on roughly 20 to 30% of the human population as part of the skin flora. S. aureus also benefits from the variable ambient conditions created by the body's mucous membranes, and as such can be found in the oral and nasal cavities, as well as inside the ear canal. Other Staphylococcus species, including S. warneri, S. lugdunensis, and S. epidermidis, also engage in commensalism for similar purposes. Nitrosomonas spp. and Nitrobacter spp. Commensalistic relationships between microorganisms include situations in which the waste product of one microorganism is a substrate for another species. One good example is nitrification: the oxidation of ammonium ions to nitrate. Nitrification occurs in two steps: first, bacteria such as Nitrosomonas spp. 
and certain crenarchaeotes oxidize ammonium to nitrite; and second, nitrite is oxidized to nitrate by Nitrobacter spp. and similar bacteria. Nitrobacter spp. benefit from their association with Nitrosomonas spp. because they use nitrite to obtain energy for growth. Commensalistic associations also occur when one microbial group modifies the environment to make it better suited for another organism. The synthesis of acidic waste products during fermentation stimulates the proliferation of more acid-tolerant microorganisms, which may be only a minor part of the microbial community at neutral pH. A good example is the succession of microorganisms during milk spoilage. Biofilm formation provides another example. The colonization of a newly exposed surface by one type of microorganism (an initial colonizer) makes it possible for other microorganisms to attach to the microbially modified surface. Octocorals and brittle stars In deep-sea benthic environments, there is an associative relationship between octocorals and brittle stars. Because currents flow upward along seamount ridges, the tops of these ridges host colonies of suspension-feeding corals and sponges, along with brittle stars that grip tightly to them and lift themselves off the sea floor. A specific documented commensal relationship is between the ophiuran Ophiocreas oedipus Lyman and the primnoid octocoral Metallogorgia melanotrichos. Historically, commensalism has been recognized as the usual type of association between brittle stars and octocorals. In this association, the ophiurans benefit directly from being elevated, which facilitates their suspension feeding, while the octocorals do not seem to benefit or be harmed by this relationship. Recent studies in the Gulf of Mexico have suggested that there are actually some benefits to the octocorals, such as receiving a cleaning action by the brittle star as it slowly moves around the coral. 
In some cases, a close relationship occurs between cohabiting species, with the interaction beginning from their juvenile stages. Arguments Whether the relationship between humans and some types of gut flora is commensal or mutualistic remains unresolved. Some biologists argue that any close interaction between two organisms is unlikely to be completely neutral for either party, and that relationships identified as commensal are likely mutualistic or parasitic in a subtle way that has not been detected. For example, epiphytes are "nutritional pirates" that may intercept substantial amounts of nutrients that would otherwise go to the host plant. Large numbers of epiphytes can also cause tree limbs to break or shade the host plant and reduce its rate of photosynthesis. Similarly, phoretic mites may hinder their host by making flight more difficult, which may affect its aerial hunting ability or cause it to expend extra energy while carrying these passengers. Types Like all ecological interactions, commensalisms vary in strength and duration from intimate, long-lived symbioses to brief, weak interactions through intermediaries. Phoresy Phoresy is the attachment of one animal to another exclusively for transport. It occurs mainly among arthropods; examples include mites on insects (such as beetles, flies, or bees), pseudoscorpions on mammals or beetles, and millipedes on birds. Phoresy can be either obligate or facultative (induced by environmental conditions). Inquilinism Inquilinism is the use of a second organism for permanent housing. Examples are epiphytic plants (such as many orchids) that grow on trees, or birds that live in holes in trees. Metabiosis Metabiosis is a more indirect dependency, in which one organism creates or prepares a suitable environment for a second. Examples include maggots, which develop on and infest corpses, and hermit crabs, which use gastropod shells to protect their bodies. 
Facilitation Facilitation or probiosis describes species interactions that benefit at least one of the participants and cause harm to neither. Necromeny Necromeny is an association in which one animal remains with another until the latter dies, whereupon the former feeds on the corpse. Examples include some nematodes and some mites.
Biology and health sciences
Ecology
Biology
72672
https://en.wikipedia.org/wiki/Exocrine%20gland
Exocrine gland
Exocrine glands are glands that secrete substances onto an epithelial surface by way of a duct. Examples of exocrine glands include sweat, salivary, mammary, ceruminous, lacrimal, sebaceous, prostate, and mucous glands. Exocrine glands are one of two types of glands in the human body, the other being endocrine glands, which secrete their products directly into the bloodstream. The liver and pancreas are both exocrine and endocrine glands; they are exocrine glands because they secrete products—bile and pancreatic juice—into the gastrointestinal tract through a series of ducts, and endocrine because they secrete other substances directly into the bloodstream. Exocrine sweat glands are part of the integumentary system; they have eccrine and apocrine types. Classification Structure Exocrine glands contain a glandular portion and a duct portion, the structures of which can be used to classify the gland. The duct portion may be branched (called compound) or unbranched (called simple). The glandular portion may be tubular or acinar, or may be a mix of the two (called tubuloacinar). If the glandular portion branches, then the gland is called a branched gland. Method of secretion Depending on how their products are secreted, exocrine glands are categorized as merocrine, apocrine, or holocrine. Merocrine – the cells of the gland secrete their substances by exocytosis into a duct; for example, pancreatic acinar cells, eccrine sweat glands, salivary glands, goblet cells, intestinal glands, tear glands, etc. Apocrine – the apical portion of the cell, containing the secretory product, buds off. Examples are the sweat glands of the armpits, pubic region, skin around the anus, lips, and nipples; mammary glands, etc. Holocrine – the entire cell disintegrates to excrete its substance; for example, sebaceous glands of the skin and nose, meibomian glands, glands of Zeis, etc. Product secreted Serous cells secrete proteins, often enzymes. 
Examples include gastric chief cells and Paneth cells. Mucous cells secrete mucus; examples include Brunner's glands, esophageal glands, and pyloric glands. Seromucous (mixed) glands secrete both protein and mucus. Examples include the salivary glands: the parotid gland (25% of saliva secretion) is predominantly serous, the sublingual gland (5% of saliva secretion) is a mainly mucous gland, and the submandibular gland (70% of saliva secretion) is a mixed, mainly serous gland. Sebaceous glands secrete sebum, a lipid product. These glands are also known as oil glands, e.g. Fordyce spots and meibomian glands.
Biology and health sciences
Exocrine system
Biology
72720
https://en.wikipedia.org/wiki/Beaked%20whale
Beaked whale
Beaked whales (systematic name Ziphiidae) are a family of cetaceans noted as being one of the least-known groups of mammals because of their deep-sea habitat, reclusive behavior and apparent low abundance. Only three or four of the 24 existing species are reasonably well-known. Baird's beaked whales and Cuvier's beaked whales were subject to commercial exploitation off the coast of Japan, while the northern bottlenose whale was extensively hunted in the northern part of the North Atlantic in the late 19th and early 20th centuries. Reports emerged in late 2020 of the possible discovery of a new beaked whale species off the coast of Mexico, the taxonomy of which had not been determined. Physical characteristics Beaked whales are moderate in size, ranging from and weighing from . Their key distinguishing feature is the presence of a 'beak', somewhat similar to that of many dolphins. Other distinctive features include a pair of converging grooves under the throat, and the absence of a notch in the tail fluke. Although Shepherd's beaked whale is an exception, most species have only one or two pairs of teeth, and even these do not erupt in females (other than in the genus Berardius). Beaked whale species are often sexually dimorphic: one or the other sex is significantly larger. The adult males often possess a large bulging forehead, in some species to an extreme degree. However, aside from dentition and size, very few morphological differences exist between male and female beaked whales. Individual species may be very difficult to identify in the wild, since many species appear similar. The observer must rely on size, shape, and placement of teeth and often subtle differences in size, color, forehead shape, and beak length. In collected specimens, the expansion of the premaxillary process in the skull can be a key feature for identification. 
The blubber of these whales is almost entirely (94%) composed of wax ester instead of the more usual triacylglycerols, a unique characteristic of this family. Dentition Beaked whales are unique among toothed whales in that most species have only one pair of teeth. The teeth are tusk-like, but are only visible in males, which are presumed to use them in combat over access to females. In females, the teeth do not develop and remain hidden in the gum tissues. In December 2008, researchers from the Marine Mammal Institute at Oregon State University completed a DNA tree of 13 of 15 known species of Mesoplodon beaked whales (excluding the spade-toothed whale, which was then known only from a skeletal specimen and a few stranded specimens). Among the results of this study was the conclusion that the male's teeth are actually a secondary sexual characteristic, similar to the antlers of male deer. Each species' teeth have a characteristically unique shape. In some cases, these teeth even hinder feeding; in the strap-toothed whale, for example, the teeth curve over the upper jaw, effectively limiting the gape to a few centimeters. Females are presumed to select mates based on the shape of the teeth, because the different species are otherwise quite similar in appearance. Taxonomy As of 2024, the Society for Marine Mammalogy Committee on Taxonomy recognizes 24 extant (living) species of beaked whales in six genera. Several species have only been formally described in the last two decades, most recently in 2021. The beaked whales are the second-largest family of cetaceans after the oceanic dolphins (Delphinidae). Beaked whales were one of the first extant clades to diverge from the ancestral lineage. The earliest known beaked whale fossils date to the Miocene, about 15 million years ago. 
A 2016 study split the beaked whales into the basal extinct Messapicetus clade (lineage) and the crown Ziphiidae which include all of the living members of the family as well as other extinct forms. Both clades share some key characteristics of the family including thick skull bones and the trend toward loss of teeth. In 2020, a molecular study further resolved the relationships among the crown Ziphiidae and placed Shepherd's beaked whale, the only living species with a full set of erupted teeth, between Berardiinae, whose extant forms have four erupted teeth, and Ziphiinae, whose extant form has two erupted teeth. Order Artiodactyla Infraorder Cetacea Parvorder Odontoceti: toothed whales Family Ziphiidae Incertae sedis Genus †Anoplonassa Genus †Caviziphius Genus †Cetorhynchus Genus †Eboroziphius Genus †Pelycorhamphus Messapicetus clade Genus †Aporotus Genus †Beneziphius Genus †Chavinziphius Genus †Chimuziphius Genus †Choneziphius Genus †Dagonodum Genus †Globicetus Genus †Imocetus Genus †Messapicetus Genus †Ninoziphius Genus †Notoziphius Genus †Tusciziphius Genus †Ziphirostrum Unnamed clade Genus †Nazcacetus Subfamily Berardiinae Genus †Archaeoziphius Genus Berardius B. arnuxii, Arnoux's beaked whale B. bairdii, Baird's beaked whale Berardius minimus, Sato's beaked whale †B. kobayashii Genus †Microberardius Unnamed clade Genus Tasmacetus T. shepherdi, Shepherd's beaked whale Subfamily Ziphiinae Genus †Izikoziphius Genus Ziphius Z. cavirostris, Cuvier's beaked whale †Z. compressus Subfamily Hyperoodontinae Genus †Africanacetus Genus †Belemnoziphius Genus Hyperoodon, bottlenose whales H. ampullatus, northern bottlenose whale H. planifrons, southern bottlenose whale Genus †Ihlengesi Genus Indopacetus I. pacificus, tropical bottlenose whale Genus †Khoikhoicetus Genus Mesoplodon, mesoplodont whales M. bidens, Sowerby's beaked whale M. bowdoini, Andrews' beaked whale M. carlhubbsi, Hubbs' beaked whale M. densirostris, Blainville's beaked whale M. 
eueu, Ramari's beaked whale M. europaeus, Gervais's beaked whale M. ginkgodens, ginkgo-toothed beaked whale M. grayi, Gray's beaked whale M. hectori, Hector's beaked whale M. layardii, strap-toothed whale M. mirus, True's beaked whale M. peruvianus, pygmy beaked whale M. perrini, Perrin's beaked whale M. stejnegeri, Stejneger's beaked whale M. traversii, spade-toothed whale M. hotaula, Deraniyagala's beaked whale †M. longirostris †M. posti †M. slangkopi †M. tumidirostris Genus †Nenga Genus †Pterocetus Genus †Xhosacetus Etymology The name Ziphiidae was coined from the genus Ziphius by J. E. Gray in 1865 to move the beaked whales from the family Delphinidae into a new family. Gray noted in 1866, however, that Hyperoodontidae should have priority for the new beaked whale family name owing to earlier usage (in 1846), but Gray preferred Ziphiidae due to an apparent confusion between the upper and lower jaw (or the terminology) in the naming of Hyperoodon. Hyperoodontidae was preferred in a 1968 phylogeny, which stated that Gray's objection did not qualify as an exception under the International Code of Zoological Nomenclature (ICZN). Hyperoodontidae is indeed currently marked as the valid name by the Integrated Taxonomic Information System (ITIS) which states no successful petition for Ziphiidae had been made to the ICZN as of 2023. In contrast, Smithsonian researchers J.G. Mead and Robert Brownell Jr. argued in 1993 that due to being the "name of choice for over 100 years", Ziphiidae should be given exception under the ICZN Article 23.12. In addition, several authorities, including the Society for Marine Mammalogy Committee on Taxonomy and IUCN Red List of Threatened Species among others continue to use Ziphiidae. A further, unrelated confusion has arisen, as noted on the ITIS, due to the propagation of an incorrect citation of "Gray, 1850" for Ziphiidae. Evolutionary history As many as 26 genera antedate humans. 
These include ancestors of giant beaked whales (Berardius), such as Microberardius, and ancestors of Cuvier's beaked whale (Ziphius); they had many relatives, such as Caviziphius, Archaeoziphius, and Izikoziphius. They were probably preyed upon by predatory whales and sharks, including Otodus megalodon. Recently, a large fossil ziphiid sample was discovered off the South African coast, suggesting that extant ziphiid diversity may be just a remnant of a greater past diversity. After studying numerous fossil skulls from off the shores of Iberia and South Africa, researchers discovered the absence of functional maxillary teeth in all South African fossil ziphiids, which is evidence that suction feeding had already developed in several beaked whale lineages during the Miocene. Researchers also found fossil ziphiids with robust skulls, signaling that tusks were used for male-male interactions (as has been speculated for extant beaked whales). Ecology Diving Beaked whales are deep divers with extreme dive profiles. They regularly dive deeper than to echolocate for food, and these deep dives are often followed by multiple shallower dives of less than 500 m. This pattern is not always followed, however. Animals have been observed spending more than an hour at or near the surface breathing. Beaked whales are often seen surfacing synchronously, but asynchronous surfacing has also been observed. In March 2014, a study by Cascadia Research revealed that Cuvier's beaked whales were recorded diving to at least 2992 m, a mammalian depth record. Another study, published in 2020, reported a Cuvier's beaked whale making a dive that lasted 222 minutes, another mammalian record. Deep-diving mammals face a number of challenges related to extended breath-holding and hydrostatic pressure. Cetaceans and pinnipeds that prolong apnea must optimize the size and use of their oxygen stores, and they must deal with the accumulation of lactic acid due to anaerobic metabolism. 
Beaked whales have several anatomical adaptations to deep diving: large spleens, livers, and body shape. Most cetaceans have small spleens. However, beaked whales have much larger spleens than delphinids, and may have larger livers, as well. These anatomical traits, which are important for filtering blood, could be adaptations to deep diving. Another notable anatomical adaptation among beaked whales is a slight depression in the body wall that allows them to hold their pectoral flippers tightly against their bodies for increased streamlining. However, they are not invulnerable to the effects of diving so deep and so often. Cascadia Research shows that the deeper the whales dive, the less often they dive per day, cutting their efforts by at least 40%. The challenges of deep diving are also overcome by the unique diving physiology of beaked whales. Oxygen storage during dives is mostly achieved by blood hemoglobin and muscle myoglobin. While the whale is diving, its heart rate slows and blood flow changes. This physiological dive response ensures oxygen-sensitive tissues maintain a supply of oxygen, while those tissues tolerant to hypoxia receive less blood flow. Additionally, lung collapse obviates the exchange of lung gas with blood, likely minimizing the uptake of nitrogen by tissues. Feeding The throats of all beaked whales have a bilaterally paired set of grooves that are associated with their unique feeding mechanism, suction feeding. Instead of capturing prey with their teeth, beaked whales suck it into their oral cavity. Suction is aided by the throat grooves, which stretch and expand to accommodate food. Their tongues can move very freely. By suddenly retracting the tongue and distending the gular (throat) floor, pressure immediately drops within the mouth, sucking the prey in with the water. Dietary information is available from stomach contents analyses of stranded beaked whales and from whaling operations. 
Their preferred diet is primarily deep-water squid, but also benthic and benthopelagic fish and some crustaceans, mostly taken near the sea floor. In a recent study, gouge marks in the sea floor were interpreted to be a result of feeding activities by beaked whales. To understand the hunting and foraging behavior of beaked whales, researchers used sound and orientation recording devices on two species: Cuvier's beaked whale (Ziphius cavirostris) and Blainville's beaked whale (Mesoplodon densirostris). These whales hunt by echolocation in deep water (where the majority of their prey is located) between about and usually catch about 30 prey per dive. Cuvier's beaked whales must forage on average at for 58 minutes and Blainville's beaked whales typically forage at deep for an average of 47 minutes. Range and habitat The family Ziphiidae is one of the most widespread families of cetaceans, ranging from the ice edges at both the north and south poles, to the equator in all the oceans. Specific ranges vary greatly by species, though beaked whales typically inhabit offshore waters that are at least 300 m deep. Beaked whales are known to congregate in deep waters off the edge of continental shelves, and bottom features, such as seamounts, canyons, escarpments, and oceanic islands, including the Azores and the Canary Islands, and even off the coasts of Hawaii. Life history Very little is known about the life history of beaked whales. The oldest recorded age is 84 years for a male Baird's beaked whale and 54 years for a female. For all other beaked whale species studied, the oldest recorded age is between 27 and 39 years. Sexual maturity is reached between seven and 15 years of age in Baird's beaked whales and northern bottlenose whales. Gestation varies greatly between species, lasting 17 months for Baird's beaked whales and 12 months for the northern bottlenose whale. No data is available on their reproductive rates. 
Determining group size for beaked whales is difficult, due to their inconspicuous surfacing behavior. Groups of beaked whales, defined as all individuals found in the same location at the same time, have been reported as ranging from one to 100 individuals. Nevertheless, some populations' group size has been estimated from repeated observations. For example, northern and southern bottlenose whales (H. ampullatus and H. planifrons), Cuvier's beaked whales, and Blainville's beaked whales (Mesoplodon densirostris) have a reported maximum group size of 20 individuals, with the average ranging from 2.5 to 3.5 individuals. Berardius species and Longman's beaked whales (Indopacetus pacificus) are found in larger groups of up to 100 individuals. Not much information is available about group composition of beaked whales. Only four species have been studied in great detail: northern bottlenose whale, Blainville's beaked whale, Baird's beaked whale, and Cuvier's beaked whale. Female northern bottlenose whales appear to form a loose network of social partners with no obvious long-term associations. In contrast to females, some male northern bottlenose whales have been repeatedly recorded together over several years, and possibly form long-term associations. Studies of Blainville's beaked whales have revealed groups usually consist of a number of females, calves, and/or juvenile animals occasionally accompanied by single males. Drawing on similarities with other mammal species, it has been suggested that this species may engage in female-defense polygyny. Baird's beaked whales are known to occur in multiple male groups, and in large groups consisting of adult animals of both sexes. There has been some evidence from the Commander Islands of multi-year stability in these groups. Arnoux's beaked whales have also been observed to form large pods of up to 47 individuals in the Southern Ocean off the coast of Kemp Land, Antarctica. 
While males may form short-term associations in the Cuvier's beaked whale, there do not appear to be long-term bonds in this species, and relatively high rates of fission and fusion within and among groups have been observed. Conservation For many years, most beaked whale species were insulated from anthropogenic impacts because of their remote habitat. However, several issues are now of concern: Studies of stranded beaked whales show rising levels of toxic chemicals in their blubber. As top predators, beaked whales, like raptors, are particularly vulnerable to the build-up of biocontaminants. They can ingest plastic (which can be lethal). They more frequently become trapped in trawl nets, due to the expansion of deepwater fisheries. Decompression sickness A major conservation concern for beaked whales (family Ziphiidae) is that they appear to be vulnerable to modern sonar operations, a concern that arises from recent strandings that temporally and physically coincide with naval sonar exercises. Mid-frequency active sonar (MFAS), developed in the 1950s for submarine detection, is thought to induce panic when experienced by whales at depth. This raises their heart rates, forcing them to attempt to rapidly ascend toward the surface in search of air. This artificially induced rapid ascent can cause decompression sickness. Post mortem examinations of whales stranded in concurrence with naval exercises have reported the presence of hemorrhaging near the ears or gas and fat emboli, which could have a deleterious impact on beaked whales that is analogous to decompression sickness in humans. Gas and fat emboli have been shown to cause nervous and cardiovascular system dysfunction, respiratory distress, pain, and disorientation in both humans and animals. In the inner ear, gas embolism can cause hemorrhages, leading to disorientation or vestibular dysfunction. 
Breath-holding divers, like beaked whales, can develop decompression-related problems (the "bends") when they return to the surface after deep dives. This is a possible hypothesis for the mass strandings of pelagic beaked whales associated with sonar-related activities. To illustrate, a diving beaked whale may be surfacing from a deep dive and must pass vertically through varying received sound levels. Since the whale has limited remaining oxygen supplies at the end of a long dive, it probably has limited abilities to display any normal sound avoidance behavior. Instead, the whale must continue to swim toward the surface to replenish its oxygen stores. Avoiding sonar inevitably requires a change in behavior or surfacing pattern. Therefore, sonar in close proximity to groups of beaked whales has the potential to cause hemorrhaging or to disorient the animal, eventually leading to a stranding. Current research reveals two species of beaked whales are most affected by sonar: Cuvier's (Z. cavirostris) and Blainville's (M. densirostris) beaked whales. These animals have been reported as stranding in correlation with military exercises in Greece, the Bahamas, Madeira, and the Canary Islands. The livers of these animals had the most damage. In 2019, a review of evidence on the mass strandings of beaked whale linked to naval exercises where sonar was used was published. It concluded that the effects of mid-frequency active sonar are strongest on Cuvier's beaked whales but vary among individuals or populations, and the strength of their response may depend on whether the individuals had prior exposure to sonar. The report considered that the most plausible explanation of the symptoms of decompression sickness such as gas embolism found in stranded whales to be the whales' response to sonar. 
It noted that no more mass strandings had occurred in the Canary Islands once naval exercises where sonar was used were banned there, and recommended that the ban be extended to other areas where mass strandings continue to occur. Four species are classified by the IUCN as "lower risk, conservation dependent": Arnoux's and Baird's beaked whales, and the northern and southern bottlenose whales. The status of the remaining species is unknown, preventing classification. Captivity Beaked whales have only rarely been kept in captivity, and generally for very short time periods. Most animals taken into captivity have been captured after live-stranding, may have been in poor health, and died shortly thereafter. The longest time period for a beaked whale living in captivity was 25 days, the record held by a whale named Alexander, one of two presumed Hubbs' beaked whale calves that stranded on August 24, 1989, on a beach in San Francisco, California, USA. Both whales were taken to Marine World Africa USA, in Vallejo, California. The animals were kept in a 9.7-meter-diameter by 2.7-meter-deep pool. The second whale, named Nicholas, died after 15 days in captivity. Alexander, the smaller of the two whales, died of pneumonia, while the cause of death for Nicholas was not determined. A small number of other beaked whales have been kept in captivity. Notably, a Cuvier's beaked whale captured on 2 February 1992 and held at Sea World of Florida was released after nine days about 30 miles offshore into the Atlantic Ocean. Perhaps the only successful release of a beaked whale, the animal was freeze-branded for future identification before release. A rare True's beaked whale, later named Hope, the only member of its species known to be held in captivity, was taken after live-stranding on 2 January 1973. 
It was held for about two days in a backyard swimming pool which had been pumped full of seawater before being transferred to the Coney Island Aquarium, where it died approximately two days later. A juvenile female Cuvier's beaked whale was found stranded on a kelp bed off Santa Catalina Island on 23 February 1956. She was taken to Marineland of the Pacific, where she was named Martha Washington. On 16 June 1969, a Blainville's beaked whale live-stranded in St. Augustine. The whale, thought to be a male, was then transported to Marineland of Florida. It is unknown what happened to the whale, but it was still alive on 18 June 1969.
Biology and health sciences
Toothed whale
Animals
72726
https://en.wikipedia.org/wiki/Thrust%20fault
Thrust fault
A thrust fault is a break in the Earth's crust across which older rocks are pushed above younger rocks. Thrust geometry and nomenclature Reverse faults A thrust fault is a type of reverse fault that has a dip of 45 degrees or less. If the angle of the fault plane is lower (often less than 15 degrees from the horizontal) and the displacement of the overlying block is large (often in the kilometer range), the fault is called an overthrust or overthrust fault. Erosion can remove part of the overlying block, creating a fenster (or window), in which the underlying block is exposed over a relatively small area. When erosion removes most of the overlying block, leaving island-like remnants resting on the lower block, the remnants are called klippen (singular klippe). Blind thrust faults If the fault plane terminates before it reaches the Earth's surface, it is called a blind thrust fault. Because of the lack of surface evidence, blind thrust faults are difficult to detect until they rupture. The destructive 1994 earthquake in Northridge, Los Angeles, California, was caused by a previously undiscovered blind thrust fault. Because of their low dip, thrusts are also difficult to recognize in mapping, where lithological offsets are generally subtle and stratigraphic repetition is difficult to detect, especially in peneplain areas. Fault-bend folds Thrust faults, particularly those involved in thin-skinned deformation, have a so-called ramp-flat geometry. Thrusts mainly propagate along zones of weakness within a sedimentary sequence, such as mudstone or halite layers; these parts of the thrust are called decollements. If the effectiveness of the decollement becomes reduced, the thrust will tend to cut up-section to a higher stratigraphic level until it reaches another effective decollement, where it can continue as a bedding-parallel flat. The part of the thrust linking the two flats is known as a ramp and typically forms at an angle of about 15°–30° to the bedding.
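The dip-angle thresholds above (a reverse fault is a thrust fault when it dips 45 degrees or less, and an overthrust when it dips below about 15 degrees with kilometer-scale displacement) can be expressed as a small classifier. This is only an illustrative sketch: the function name and cutoff values follow the figures quoted in the text, and real classification also depends on field context.

```python
def classify_reverse_fault(dip_degrees, displacement_km=0.0):
    """Rough classification of a reverse fault by dip angle.

    Thresholds follow the text: a thrust fault dips 45 degrees or less;
    an overthrust dips less than ~15 degrees with displacement in the
    kilometer range.
    """
    if not 0 <= dip_degrees <= 90:
        raise ValueError("dip must be between 0 and 90 degrees")
    if dip_degrees < 15 and displacement_km >= 1.0:
        return "overthrust"
    if dip_degrees <= 45:
        return "thrust fault"
    return "reverse fault (high-angle)"

print(classify_reverse_fault(10, displacement_km=5))  # overthrust
print(classify_reverse_fault(30))                     # thrust fault
print(classify_reverse_fault(60))                     # reverse fault (high-angle)
```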
Continued displacement on a thrust over a ramp produces a characteristic fold geometry known as a ramp anticline or, more generally, as a fault-bend fold. Fault-propagation folds Fault-propagation folds form at the tip of a thrust fault where propagation along the decollement has ceased, but displacement on the thrust behind the fault tip continues. The formation of an asymmetric anticline-syncline fold pair accommodates the continuing displacement. As displacement continues, the thrust tip starts to propagate along the axis of the syncline. Such structures are also known as tip-line folds. Eventually, the propagating thrust tip may reach another effective decollement layer, and a composite fold structure will develop with the characteristics of both fault-bend and fault-propagation folds. Thrust duplex Duplexes occur where two decollement levels are close to each other within a sedimentary sequence, such as the top and base of a relatively strong sandstone layer bounded by two relatively weak mudstone layers. When a thrust that has propagated along the lower detachment, known as the floor thrust, cuts up to the upper detachment, known as the roof thrust, it forms a ramp within the stronger layer. With continued displacement on the thrust, higher stresses are developed in the footwall of the ramp due to the bend on the fault. This may cause renewed propagation along the floor thrust until it again cuts up to join the roof thrust. Further displacement then takes place via the newly created ramp. This process may repeat many times, forming a series of fault-bounded thrust slices known as imbricates or horses, each with the geometry of a fault-bend fold of small displacement. The final result is typically a lozenge-shaped duplex. Most duplexes have only small displacements on the bounding faults between the horses, which dip away from the foreland.
Occasionally, the displacement on the individual horses is more significant, such that each horse lies more or less vertically above the other; this is known as an antiformal stack or imbricate stack. If the individual displacements are still greater, the horses have a foreland dip. Duplexing is a very efficient mechanism of accommodating shortening of the crust by thickening the section rather than by folding and deformation. Tectonic environment Large overthrust faults occur in areas that have undergone great compressional forces. These conditions exist in the orogenic belts that result either from collisions between two continental plates or from subduction zone accretion. The resultant compressional forces produce mountain ranges. The Himalayas, the Alps, and the Appalachians are prominent examples of compressional orogenies with numerous overthrust faults. Thrust faults occur in the foreland basin, marginal to orogenic belts. Here, compression does not result in appreciable mountain building; shortening is instead accommodated mostly by folding and the stacking of thrusts, and thrust faults generally cause a thickening of the stratigraphic section. When thrusts develop in orogens formed on previously rifted margins, inversion of the buried paleo-rifts can induce the nucleation of thrust ramps. Foreland basin thrusts also usually follow the ramp-flat geometry, with thrusts propagating within units along very low-angle "flats" (at 1–5 degrees) and then moving up-section along steeper ramps (at 5–20 degrees) where they offset stratigraphic units. Thrusts have also been detected in cratonic settings, where "far-foreland" deformation has advanced into intracontinental areas. Thrusts and duplexes are also found in accretionary wedges at the ocean-trench margin of subduction zones, where oceanic sediments are scraped off the subducted plate and accumulate.
Here, the accretionary wedge must thicken by up to 200%, and this is achieved by stacking thrust fault upon thrust fault in a mélange of disrupted rock, often with chaotic folding. Here, ramp-flat geometries are not usually observed because the compressional force is at a steep angle to the sedimentary layering. History Thrust faults were unrecognised until the work of Arnold Escher von der Linth, Albert Heim and Marcel Alexandre Bertrand, working on the Glarus Thrust in the Alps; Charles Lapworth, Ben Peach and John Horne, working on parts of the Moine Thrust in the Scottish Highlands; Alfred Elis Törnebohm in the Scandinavian Caledonides; and R. G. McConnell in the Canadian Rockies. The realisation that older strata could, via faulting, be found above younger strata was arrived at more or less independently by geologists in all these areas during the 1880s. In 1884, Geikie coined the term thrust-plane to describe this special set of faults. He wrote: By a system of reversed faults, a group of strata is made to cover a great breadth of ground and actually to overlie higher members of the same series. The most extraordinary dislocations, however, are those to which for distinction we have given the name of Thrust-planes. They are strictly reversed faults, but with so low a hade that the rocks on their upthrown side have been, as it were, pushed horizontally forward.
Physical sciences
Geology
null
72750
https://en.wikipedia.org/wiki/Prosthesis
Prosthesis
In medicine, a prosthesis (plural: prostheses), or a prosthetic implant, is an artificial device that replaces a missing body part, which may be lost through physical trauma, disease, or a condition present at birth (congenital disorder). Prostheses may restore the normal functions of the missing body part or may serve a cosmetic function. A person who has undergone an amputation is sometimes referred to as an amputee; however, this term may be offensive. Rehabilitation for someone with an amputation is primarily coordinated by a physiatrist as part of an interdisciplinary team consisting of physiatrists, prosthetists, nurses, physical therapists, and occupational therapists. Prostheses can be created by hand or with computer-aided design (CAD), a software interface that helps creators design and analyze the creation with computer-generated 2-D and 3-D graphics as well as analysis and optimization tools. Types A person's prosthesis should be designed and assembled according to the person's appearance and functional needs. For instance, a person may need a transradial prosthesis, but must choose between an aesthetic functional device, a myoelectric device, a body-powered device, or an activity-specific device. The person's future goals and economic capabilities may help them choose between one or more devices. Craniofacial prostheses include intra-oral and extra-oral prostheses. Extra-oral prostheses are further divided into hemifacial, auricular (ear), nasal, orbital and ocular. Intra-oral prostheses include dental prostheses, such as dentures, obturators, and dental implants. Prostheses of the neck include larynx substitutes and trachea and upper esophageal replacements. Somato prostheses of the torso include breast prostheses, which may be either single or bilateral, full breast devices or nipple prostheses.
Penile prostheses are used to treat erectile dysfunction, correct penile deformity, perform phalloplasty procedures in cisgender men, and build a new penis in female-to-male gender reassignment surgeries. Limb prostheses Limb prostheses include both upper- and lower-extremity prostheses. Upper-extremity prostheses are used at varying levels of amputation: forequarter, shoulder disarticulation, transhumeral prosthesis, elbow disarticulation, transradial prosthesis, wrist disarticulation, full hand, partial hand, finger, partial finger. A transradial prosthesis is an artificial limb that replaces an arm missing below the elbow. Upper limb prostheses can be categorized into three main categories: passive devices, body-powered devices, and externally powered (myoelectric) devices. Passive devices can be either passive hands, mainly used for cosmetic purposes, or passive tools, mainly used for specific activities (e.g. leisure or vocational). An extensive overview and classification of passive devices can be found in a literature review by Maat et al. A passive device can be static, meaning the device has no movable parts, or it can be adjustable, meaning its configuration can be adjusted (e.g. adjustable hand opening). Despite the absence of active grasping, passive devices are very useful in bimanual tasks that require fixation or support of an object, or for gesticulation in social interaction. According to scientific data, a third of upper limb amputees worldwide use a passive prosthetic hand. Body-powered or cable-operated limbs work by attaching a harness and cable around the opposite shoulder of the damaged arm. A recent body-powered approach has explored using the user's breathing to power and control the prosthetic hand, helping to eliminate the actuation cable and harness. The third category of available prosthetic devices comprises myoelectric arms.
This particular class of devices distinguishes itself from the previous ones by the inclusion of a battery system. The battery serves the dual purpose of providing energy for both actuation and sensing components. While actuation predominantly relies on motor or pneumatic systems, a variety of solutions have been explored for capturing muscle activity, including techniques such as electromyography, sonomyography, myokinetic interfaces, and others. These methods function by detecting the minute electrical currents generated by contracted muscles during upper arm movement, typically employing electrodes or other suitable tools. Subsequently, these acquired signals are converted into gripping patterns or postures that the artificial hand will then execute. In the prosthetics industry, a trans-radial prosthetic arm is often referred to as a "BE" or below-elbow prosthesis. Lower-extremity prostheses provide replacements at varying levels of amputation. These include hip disarticulation, transfemoral prosthesis, knee disarticulation, transtibial prosthesis, Syme's amputation, foot, partial foot, and toe. The two main subcategories of lower-extremity prosthetic devices are trans-tibial (any amputation transecting the tibia bone or a congenital anomaly resulting in a tibial deficiency) and trans-femoral (any amputation transecting the femur bone or a congenital anomaly resulting in a femoral deficiency). A transfemoral prosthesis is an artificial limb that replaces a leg missing above the knee. Transfemoral amputees can have a very difficult time regaining normal movement. In general, a transfemoral amputee must use approximately 80% more energy to walk than a person with two whole legs. This is due to the complexities in movement associated with the knee. In newer and more improved designs, hydraulics, carbon fiber, mechanical linkages, motors, computer microprocessors, and innovative combinations of these technologies are employed to give more control to the user.
In the prosthetics industry, a trans-femoral prosthetic leg is often referred to as an "AK" or above-the-knee prosthesis. A transtibial prosthesis is an artificial limb that replaces a leg missing below the knee. A transtibial amputee is usually able to regain normal movement more readily than someone with a transfemoral amputation, due in large part to retaining the knee, which allows for easier movement. Lower-extremity prosthetics describe artificially replaced limbs located at the hip level or lower. In the prosthetics industry, a trans-tibial prosthetic leg is often referred to as a "BK" or below-the-knee prosthesis. Prostheses are manufactured and fitted by clinical prosthetists. Prosthetists are healthcare professionals responsible for making, fitting, and adjusting prostheses; for lower limb prostheses, they will also assess gait and prosthetic alignment. Once a prosthesis has been fitted and adjusted by a prosthetist, a rehabilitation physiotherapist (called a physical therapist in the United States) will help teach a new prosthetic user to walk with a leg prosthesis. To do so, the physical therapist may provide verbal instructions and may also help guide the person using touch or tactile cues. This may be done in a clinic or at home. There is some research suggesting that such training in the home may be more successful if the treatment includes the use of a treadmill. Using a treadmill, along with the physical therapy treatment, helps the person to experience many of the challenges of walking with a prosthesis. In the United Kingdom, 75% of lower limb amputations are performed due to inadequate circulation (dysvascularity). This condition is often associated with many other medical conditions (co-morbidities), including diabetes and heart disease, that may make it a challenge to recover and use a prosthetic limb to regain mobility and independence.
For people who have inadequate circulation and have lost a lower limb, there is insufficient evidence, due to a lack of research, to inform their choice of prosthetic rehabilitation approaches. Lower-extremity prostheses are often categorized by the level of amputation or after the name of a surgeon: Transfemoral (above-knee) Transtibial (below-knee) Ankle disarticulation (more commonly known as Syme's amputation) Knee disarticulation (also see knee replacement) Hip disarticulation (also see hip replacement) Hemipelvectomy Partial foot amputations (Pirogoff, Talo-Navicular and Calcaneo-cuboid (Chopart), Tarso-metatarsal (Lisfranc), Trans-metatarsal, Metatarsal-phalangeal, Ray amputations, toe amputations). Van Nes rotationplasty Prosthetic raw materials Prosthetics are made of lightweight materials for the amputee's convenience. Some of these materials include: Plastics: Polyethylene Polypropylene Acrylics Polyurethane Wood (early prosthetics) Rubber (early prosthetics) Lightweight metals: Aluminum Composites: Carbon fiber reinforced polymers Wheeled prostheses have also been used extensively in the rehabilitation of injured domestic animals, including dogs, cats, pigs, rabbits, and turtles. History Prosthetics originate from the ancient Near East circa 3000 BCE, with the earliest evidence of prosthetics appearing in ancient Egypt and Iran. The earliest recorded mention of eye prosthetics is from the Egyptian story of the Eye of Horus, dated circa 3000 BC, which involves the left eye of Horus being plucked out and then restored by Thoth. Circa 3000–2800 BC, the earliest archaeological evidence of prosthetics is found in ancient Iran, where an eye prosthetic was found buried with a woman in Shahr-i Shōkhta. It was likely made of bitumen paste that was covered with a thin layer of gold. The Egyptians were also early pioneers of foot prosthetics, as shown by the wooden toe found on a body from the New Kingdom circa 1000 BC.
Another early textual mention is found in South Asia circa 1200 BC, involving the warrior queen Vishpala in the Rigveda. Roman bronze crowns have also been found, but their use could have been more aesthetic than medical. An early mention of a prosthetic comes from the Greek historian Herodotus, who tells the story of Hegesistratus, a Greek diviner who cut off his own foot to escape his Spartan captors and replaced it with a wooden one. Wood and metal prosthetics Pliny the Elder also recorded the tale of a Roman general, Marcus Sergius, whose right hand was cut off while campaigning and who had an iron hand made to hold his shield so that he could return to battle. A famous and quite refined historical prosthetic arm was that of Götz von Berlichingen, made at the beginning of the 16th century. The first confirmed use of a prosthetic device, however, is from 950 to 710 BC. In 2000, research pathologists discovered a mummy from this period buried in the Egyptian necropolis near ancient Thebes that possessed an artificial big toe. This toe, consisting of wood and leather, exhibited evidence of use. When it was reproduced by biomechanical engineers in 2011, researchers discovered that this ancient prosthetic enabled its wearer to walk both barefoot and in Egyptian-style sandals. Previously, the earliest discovered prosthetic was an artificial leg from Capua. Around the same time, François de la Noue is also reported to have had an iron hand, as is, in the 17th century, René-Robert Cavelier de La Salle. Henri de Tonti had a prosthetic hook for a hand. During the Middle Ages, prosthetics remained quite basic in form. Debilitated knights would be fitted with prosthetics so they could hold up a shield, grasp a lance or a sword, or stabilize a mounted warrior. Only the wealthy could afford anything that would assist in daily life. One notable prosthesis was that belonging to an Italian man, who scientists estimate replaced his amputated right hand with a knife.
Scientists investigating the skeleton, which was found in a Longobard cemetery in Povegliano Veronese, estimated that the man had lived sometime between the 6th and 8th centuries AD. Materials found near the man's body suggest that the knife prosthesis was attached with a leather strap, which he repeatedly tightened with his teeth. During the Renaissance, prosthetics developed with the use of iron, steel, copper, and wood. Functional prosthetics began to make an appearance in the 1500s. Technology progress before the 20th century An Italian surgeon recorded the existence of an amputee who had an arm that allowed him to remove his hat, open his purse, and sign his name. Improvement in amputation surgery and prosthetic design came at the hands of Ambroise Paré. Among his inventions was an above-knee device that was a kneeling peg leg and foot prosthesis with a fixed position, adjustable harness, and knee lock control. The functionality of his advancements showed how future prosthetics could develop. Other major improvements before the modern era: Pieter Verduyn – First non-locking below-knee (BK) prosthesis. James Potts – Prosthesis made of a wooden shank and socket, a steel knee joint and an articulated foot that was controlled by catgut tendons from the knee to the ankle. Came to be known as the "Anglesey Leg" or "Selpho Leg". Sir James Syme – A new method of ankle amputation that did not involve amputating at the thigh. Benjamin Palmer – Improved upon the Selpho leg. Added an anterior spring and concealed tendons to simulate natural-looking movement. Dubois Parmlee – Created a prosthetic with a suction socket, polycentric knee, and multi-articulated foot. Marcel Desoutter & Charles Desoutter – First aluminium prosthesis. Henry Heather Bigg, and his son Henry Robert Heather Bigg, won the Queen's command to provide "surgical appliances" to wounded soldiers after the Crimean War.
They developed arms that allowed a double arm amputee to crochet, and a hand that felt natural to others, based on ivory, felt and leather. At the end of World War II, the NAS (National Academy of Sciences) began to advocate better research and development of prosthetics. Through government funding, a research and development program was developed within the Army, Navy, Air Force, and the Veterans Administration. Lower extremity modern history After the Second World War, a team at the University of California, Berkeley, including James Foort and C.W. Radcliff, helped to develop the quadrilateral socket by developing a jig fitting system for amputations above the knee. Socket technology for lower extremity limbs saw a further revolution during the 1980s when John Sabolich C.P.O. invented the Contoured Adducted Trochanteric-Controlled Alignment Method (CATCAM) socket, later to evolve into the Sabolich Socket. He followed the direction of Ivan Long and Ossur Christensen as they developed alternatives to the quadrilateral socket, which in turn followed the open-ended plug socket, created from wood. The advancement was due to the difference in the socket-to-patient contact model. Prior to this, sockets were square in shape, with no specialized containment for muscular tissue. New designs thus help to lock in the bony anatomy, locking it into place and distributing the weight evenly over the existing limb as well as the musculature of the patient. Ischial containment is well known and used today by many prosthetists to help in patient care. Variations of the ischial containment socket thus exist, and each socket is tailored to the specific needs of the patient. Others who contributed to socket development and changes over the years include Tim Staats, Chris Hoyt, and Frank Gottschalk.
Gottschalk disputed the efficacy of the CAT-CAM socket, insisting the surgical procedure done by the amputation surgeon was most important in preparing the amputee for good use of a prosthesis with any type of socket design. The first microprocessor-controlled prosthetic knees became available in the early 1990s. The Intelligent Prosthesis was the first commercially available microprocessor-controlled prosthetic knee. It was released by Chas. A. Blatchford & Sons, Ltd., of Great Britain, in 1993 and made walking with the prosthesis feel and look more natural. An improved version was released in 1995 under the name Intelligent Prosthesis Plus. Blatchford released another prosthesis, the Adaptive Prosthesis, in 1998. The Adaptive Prosthesis utilized hydraulic controls, pneumatic controls, and a microprocessor to provide the amputee with a gait that was more responsive to changes in walking speed. Cost analysis reveals that a sophisticated above-knee prosthesis will cost about $1 million over 45 years, given only annual cost-of-living adjustments. In 2019, a project under AT2030 was launched in which bespoke sockets are made using a thermoplastic, rather than through a plaster cast. This is faster to do and significantly less expensive. The sockets were called Amparo Confidence sockets. Upper extremity modern history In 2005, DARPA started the Revolutionizing Prosthetics program. According to DARPA, the goal of the $100 million program was to "develop an advanced electromechanical prosthetic upper limb with near-natural control that would dramatically enhance independence and quality of life for amputees." In 2014, the LUKE Arm, developed by Dean Kamen and his team at DEKA Research and Development Corp., became the first prosthetic arm approved by the FDA that "translates signals from a person's muscles to perform complex tasks," according to the FDA. Johns Hopkins University and the U.S. Department of Veterans Affairs also participated in the program.
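The $1 million, 45-year cost figure above rests on compounding annual cost-of-living adjustments. A minimal sketch of that arithmetic follows; the unit price, adjustment rate, and replacement interval are illustrative assumptions, since the text does not state the underlying figures.

```python
def lifetime_cost(unit_cost, adjustment_rate, replacement_interval_years, horizon_years):
    """Sum the price of each replacement prosthesis over a time horizon,
    letting the unit price grow by a fixed cost-of-living rate each year.

    All parameter values are illustrative; the article does not give the
    underlying unit cost or adjustment rate behind its $1 million figure.
    """
    total = 0.0
    for year in range(0, horizon_years, replacement_interval_years):
        # price of the device bought in this year, after `year` adjustments
        total += unit_cost * (1 + adjustment_rate) ** year
    return total

# e.g. a hypothetical $70,000 device replaced every 4 years with 3% annual adjustments:
print(round(lifetime_cost(70_000, 0.03, 4, 45)))
```

The point of the sketch is that even modest annual adjustments compound: most of the lifetime total comes from the later, more expensive replacements.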
Design trends moving forward There are many steps in the evolution of prosthetic design trends. Many design trends point to lighter, more durable, and flexible materials like carbon fiber, silicone, and advanced polymers. These not only make the prosthetic limb lighter and more durable but also allow it to mimic the look and feel of natural skin, providing users with a more comfortable and natural experience. This new technology helps prosthetic users blend in with people with natural limbs, reducing the stigma of wearing a prosthetic. Another trend points towards using bionics and myoelectric components in prosthetic design. These limbs utilize sensors to detect electrical signals from the user's residual muscles. The signals are then converted into motions, allowing users to control their prosthetic limbs using their own muscle contractions. This has greatly improved the range and fluidity of movements available to amputees, making tasks like grasping objects or walking naturally much more feasible. Integration with AI is also at the forefront of prosthetic design. AI-enabled prosthetic limbs can learn and adapt to the user's habits and preferences over time, ensuring optimal functionality. By analyzing the user's gait, grip, and other movements, these smart limbs can make real-time adjustments, providing smoother and more natural motions. Patient procedure A prosthesis is a functional replacement for an amputated or congenitally malformed or missing limb. Prosthetists are responsible for the prescription, design, and management of a prosthetic device. In most cases, the prosthetist begins by taking a plaster cast of the patient's affected limb. Lightweight, high-strength thermoplastics are custom-formed to this model of the patient. Cutting-edge materials such as carbon fiber, titanium and Kevlar provide strength and durability while making the new prosthesis lighter.
More sophisticated prostheses are equipped with advanced electronics, providing additional stability and control. Current technology and manufacturing Over the years, there have been advancements in artificial limbs. New plastics and other materials, such as carbon fiber, have allowed artificial limbs to be stronger and lighter, limiting the amount of extra energy necessary to operate the limb. This is especially important for trans-femoral amputees. Additional materials have allowed artificial limbs to look much more realistic, which is important to trans-radial and transhumeral amputees because they are more likely to have the artificial limb exposed. In addition to new materials, the use of electronics has become very common in artificial limbs. Myoelectric limbs, which are controlled by converting muscle movements to electrical signals, have become much more common than cable-operated limbs. Myoelectric signals are picked up by electrodes; the signal is integrated, and once it exceeds a certain threshold, the prosthetic limb's control signal is triggered, which is why all myoelectric controls inherently lag. Conversely, cable control is immediate and physical, and thereby offers a degree of direct force feedback that myoelectric control does not. Computers are also used extensively in the manufacturing of limbs. Computer-aided design and computer-aided manufacturing are often used to assist in the design and manufacture of artificial limbs. Most modern artificial limbs are attached to the residual limb (stump) of the amputee by belts and cuffs or by suction. The residual limb either fits directly into a socket on the prosthetic or—more commonly today—a liner is used that is then fixed to the socket either by vacuum (suction sockets) or a pin lock. Liners are soft and can therefore create a far better suction fit than hard sockets.
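The integrate-and-threshold behavior described above, and the lag it implies, can be sketched in a few lines. This is a toy model, not any manufacturer's algorithm: the window length and threshold are illustrative, and real controllers add filtering and proportional control on top.

```python
def myoelectric_trigger(emg_samples, window, threshold):
    """Toy threshold-based myoelectric control: the rectified EMG signal
    is integrated over a sliding window, and the prosthesis control signal
    fires only once the integral exceeds a threshold. The integration
    window is the source of the inherent lag noted in the text.
    """
    integral = 0.0
    for i, sample in enumerate(emg_samples):
        integral += abs(sample)                       # rectify and accumulate
        if i >= window:
            integral -= abs(emg_samples[i - window])  # drop sample leaving the window
        if integral > threshold:
            return i  # sample index at which the control signal triggers
    return None  # muscle activity never exceeded the threshold

# Triggering requires sustained activity, so the trigger index lags the
# onset of muscle contraction by a few samples:
print(myoelectric_trigger([0, 0, 1, 1, 1, 1], window=3, threshold=2.5))
```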
Silicone liners can be obtained in standard sizes, mostly with a circular (round) cross section, but for any other residual limb shape, custom liners can be made. The socket is custom-made to fit the residual limb and to distribute the forces of the artificial limb across the area of the residual limb (rather than just one small spot), which helps reduce wear on the residual limb. Production of prosthetic socket The production of a prosthetic socket begins with capturing the geometry of the residual limb, a process called shape capture. The goal of this process is to create an accurate representation of the residual limb, which is critical to achieving a good socket fit. The custom socket is created by taking a plaster cast of the residual limb or, more commonly today, of the liner worn over the residual limb, and then making a mold from the plaster cast. The commonly used compound is called plaster of Paris. In recent years, various digital shape capture systems have been developed whose output can be input directly to a computer, allowing for a more sophisticated design. In general, the shape capturing process begins with the digital acquisition of three-dimensional (3D) geometric data from the amputee's residual limb. Data are acquired with a probe, laser scanner, structured light scanner, or a photographic-based 3D scanning system. After shape capture, the second phase of socket production is called rectification, the process of modifying the model of the residual limb by adding volume over bony prominences and potential pressure points and removing volume from load-bearing areas. This can be done manually by adding or removing plaster from the positive model, or virtually by manipulating the computerized model in software. Lastly, fabrication of the prosthetic socket begins once the model has been rectified and finalized.
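The rectification step above amounts to region-wise offsets applied to the captured limb model. A minimal numeric sketch of that idea follows; the region indices and millimeter offsets are invented for illustration, and real CAD rectification operates on full 3D meshes rather than a list of radii.

```python
def rectify(profile_mm, bony_regions, load_regions, relief_mm=3.0, compression_mm=2.0):
    """Toy version of socket rectification: starting from measured radii of
    the residual-limb model, add volume (relief) over bony prominences and
    potential pressure points, and remove volume (compression) over
    load-bearing areas. Region indices and offset values are illustrative.
    """
    rectified = list(profile_mm)
    for i in bony_regions:
        rectified[i] += relief_mm       # relieve pressure-sensitive spots
    for i in load_regions:
        rectified[i] -= compression_mm  # load the pressure-tolerant areas
    return rectified

print(rectify([100.0, 102.0, 98.0, 95.0], bony_regions=[1], load_regions=[3]))
# [100.0, 105.0, 98.0, 93.0]
```

Virtual rectification in CAD software and manual plaster rectification both follow this add-here, remove-there logic; only the medium differs.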
The prosthetist wraps the positive model with a semi-molten plastic sheet or with carbon fiber coated with epoxy resin to construct the prosthetic socket. A computerized model can instead be 3D printed using various materials with different flexibilities and mechanical strengths. Optimal socket fit between the residual limb and socket is critical to the function and usage of the entire prosthesis. If the fit between the residual limb and socket attachment is too loose, this will reduce the area of contact between the residual limb and the socket or liner, and increase pockets between the residual limb skin and the socket or liner. Pressure is then higher, which can be painful. Air pockets can allow sweat to accumulate, which can soften the skin. Ultimately, this is a frequent cause of itchy skin rashes. Over time, this can lead to breakdown of the skin. On the other hand, a very tight fit may excessively increase the interface pressures, which may also lead to skin breakdown after prolonged use. Artificial limbs are typically manufactured using the following steps: Measurement of the residual limb Measurement of the body to determine the size required for the artificial limb Fitting of a silicone liner Creation of a model of the liner worn over the residual limb Formation of a thermoplastic sheet around the model – This is then used to test the fit of the prosthetic Formation of the permanent socket Formation of the plastic parts of the artificial limb – Different methods are used, including vacuum forming and injection molding Creation of the metal parts of the artificial limb using die casting Assembly of the entire limb Body-powered arms Current technology allows body-powered arms to weigh around one-half to one-third of what a myoelectric arm does. Sockets Current body-powered arms contain sockets that are built from hard epoxy or carbon fiber.
These sockets or "interfaces" can be made more comfortable by lining them with a softer, compressible foam material that provides padding for the bony prominences. A self-suspending or supracondylar socket design is useful for those with short to mid-range below-elbow absence. Longer limbs may require the use of a locking roll-on type inner liner or more complex harnessing to help augment suspension.

Wrists

Wrist units are either screw-on connectors featuring the UNF 1/2-20 thread (USA) or quick-release connectors, of which there are different models.

Voluntary opening and voluntary closing

Two types of body-powered systems exist: voluntary opening ("pull to open") and voluntary closing ("pull to close"). Virtually all "split hook" prostheses operate with a voluntary opening system. More modern "prehensors", called GRIPS, utilize voluntary closing systems. The differences are significant. Users of voluntary opening systems rely on elastic bands or springs for gripping force, while users of voluntary closing systems rely on their own body power and energy to create gripping force. Voluntary closing users can generate prehension forces equivalent to those of the normal hand, up to or exceeding one hundred pounds. Voluntary closing GRIPS require constant tension to grip, like a human hand, and in that property they come closer to matching human hand performance. Voluntary opening split hook users are limited to the forces their rubber bands or springs can generate, usually below 20 pounds.

Feedback

An additional difference exists in the biofeedback created that allows the user to "feel" what is being held. Voluntary opening systems, once engaged, provide the holding force, so that they operate like a passive vice at the end of the arm. No gripping feedback is provided once the hook has closed around the object being held. Voluntary closing systems provide directly proportional control and biofeedback, so that the user can feel how much force they are applying.
In 1997, the Colombian Prof. Álvaro Ríos Poveda, a researcher in bionics in Latin America, developed an upper limb and hand prosthesis with sensory feedback. This technology allows amputee patients to handle prosthetic hand systems in a more natural way. A recent study showed that by stimulating the median and ulnar nerves according to the information provided by the artificial sensors of a hand prosthesis, physiologically appropriate (near-natural) sensory information could be provided to an amputee. This feedback enabled the participant to effectively modulate the grasping force of the prosthesis with no visual or auditory feedback. In February 2013, researchers from École Polytechnique Fédérale de Lausanne in Switzerland and the Scuola Superiore Sant'Anna in Italy implanted electrodes into an amputee's arm, which gave the patient sensory feedback and allowed real-time control of the prosthetic. With wires linked to nerves in his upper arm, the Danish patient was able to handle objects and instantly receive a sense of touch through the special artificial hand created by Silvestro Micera and researchers in both Switzerland and Italy. In July 2019, this technology was expanded even further by researchers from the University of Utah, led by Jacob George. The group implanted electrodes into the patient's arm to map out several sensory percepts, stimulated each electrode to determine how each percept was triggered, and then mapped the sensory information onto the prosthetic. This allowed the researchers to approximate the kind of information the patient would receive from their natural hand. Unfortunately, the arm is too expensive for the average user to acquire; however, George noted that insurance companies could cover the costs of the prosthetic.

Terminal devices

Terminal devices include a range of hooks, prehensors, hands, and other devices.
Hooks

Voluntary opening split hook systems are simple, convenient, light, robust, versatile and relatively affordable. A hook does not match a normal human hand for appearance or overall versatility, but its material tolerances can exceed those of the normal human hand for mechanical stress (one can even use a hook to slice open boxes or as a hammer, whereas the same is not possible with a normal hand), for thermal stability (one can use a hook to grip items from boiling water, to turn meat on a grill, or to hold a match until it has burned down completely) and for chemical hazards (a metal hook withstands acids or lye, and does not react to solvents as a prosthetic glove or human skin does).

Hands

Prosthetic hands are available in both voluntary opening and voluntary closing versions and, because of their more complex mechanics and cosmetic glove covering, require a relatively large activation force, which, depending on the type of harness used, may be uncomfortable. A recent study by the Delft University of Technology, the Netherlands, showed that the development of mechanical prosthetic hands has been neglected during the past decades. The study showed that the pinch force level of most current mechanical hands is too low for practical use. The best-tested hand was a prosthetic hand developed around 1945. In 2017, however, research on bionic hands was begun by Laura Hruby of the Medical University of Vienna. A few open-hardware, 3D-printable bionic hands have also become available. Some companies also produce robotic hands with an integrated forearm for fitting onto a patient's upper arm, and in 2020, at the Italian Institute of Technology (IIT), another robotic hand with integrated forearm (Soft Hand Pro) was developed.

Commercial providers and materials

Hosmer and Otto Bock are major commercial hook providers. Mechanical hands are sold by Hosmer and Otto Bock as well; the Becker Hand is still manufactured by the Becker family.
Prosthetic hands may be fitted with standard stock or custom-made cosmetic-looking silicone gloves, but regular work gloves may be worn as well. Other terminal devices include the V2P Prehensor, a versatile robust gripper that allows customers to modify aspects of it; Texas Assist Devices, with a whole assortment of tools; and TRS, which offers a range of terminal devices for sports. Cable harnesses can be built using aircraft steel cables, ball hinges, and self-lubricating cable sheaths. Some prosthetics have been designed specifically for use in salt water.

Lower-extremity prosthetics

Lower-extremity prosthetics are artificial replacements for limbs at the hip level or lower. Concerning all ages, Ephraim et al. (2003) found a worldwide estimate of all-cause lower-extremity amputations of 2.0–5.9 per 10,000 inhabitants. For birth prevalence rates of congenital limb deficiency they found an estimate between 3.5 and 7.1 cases per 10,000 births. The two main subcategories of lower-extremity prosthetic devices are trans-tibial (any amputation transecting the tibia or a congenital anomaly resulting in a tibial deficiency) and trans-femoral (any amputation transecting the femur or a congenital anomaly resulting in a femoral deficiency). In the prosthetic industry, a trans-tibial prosthetic leg is often referred to as a "BK" or below-the-knee prosthesis, while a trans-femoral prosthetic leg is often referred to as an "AK" or above-the-knee prosthesis. Other, less prevalent lower-extremity cases include the following:

Hip disarticulations – an amputation or congenital anomaly at or in close proximity to the hip joint. See hip replacement
Knee disarticulations – an amputation through the knee, disarticulating the femur from the tibia. See knee replacement
Symes – an ankle disarticulation that preserves the heel pad.
Socket

The socket serves as an interface between the residuum and the prosthesis, ideally allowing comfortable weight-bearing, movement control and proprioception. Socket problems, such as discomfort and skin breakdown, are rated among the most important issues faced by lower-limb amputees.

Shank and connectors

This part creates distance and support between the knee joint and the foot (in the case of an upper-leg prosthesis) or between the socket and the foot. The type of connectors used between the shank and the knee/foot determines whether the prosthesis is modular or not. Modular means that the angle and the displacement of the foot with respect to the socket can be changed after fitting. In developing countries, prostheses are mostly non-modular, in order to reduce cost. For children, modularity of angle and height is important because of their average growth of 1.9 cm annually.

Foot

Providing contact with the ground, the foot provides shock absorption and stability during stance. Additionally, it influences gait biomechanics through its shape and stiffness: the trajectory of the center of pressure (COP) and the angle of the ground reaction forces are determined by the shape and stiffness of the foot, and these need to match the subject's build in order to produce a normal gait pattern. Andrysek (2010) found 16 different types of feet, with greatly varying results concerning durability and biomechanics. The main problem found in current feet is durability, with endurance ranging from 16 to 32 months. These results are for adults and will probably be worse for children due to higher activity levels and scale effects. The evidence comparing different types of prosthetic feet and ankles is not strong enough to determine whether one mechanism is superior to another. When deciding on a device, the cost of the device, the person's functional need, and the availability of a particular device should be considered.
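The role of foot stiffness can be illustrated with a toy spring model: treating the compressible part of the foot as a linear spring, the energy stored under a given deflection is E = ½kx², part of which is returned at push-off. The stiffness, deflection, and return ratio below are made-up illustrative values, not data for any real foot.

```python
# Toy linear-spring model of a prosthetic foot's energy storage.
# All numbers are illustrative assumptions.

def stored_energy_j(stiffness_n_per_m, deflection_m):
    """Elastic energy E = 1/2 * k * x^2 stored during compression."""
    return 0.5 * stiffness_n_per_m * deflection_m ** 2

def returned_energy_j(stored_j, return_ratio=0.8):
    """Energy given back at push-off; 80% return is an assumed figure."""
    return stored_j * return_ratio

e = stored_energy_j(30_000, 0.02)   # 30 kN/m spring, 2 cm deflection
print(e, returned_energy_j(e))
```

A stiffer foot stores more energy for the same deflection but deflects less under the same load, which is one reason stiffness must be matched to the wearer's weight and activity level.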
Knee joint

In the case of a trans-femoral (above-knee) amputation, there is also a need for a complex connector providing articulation, allowing flexion during swing phase but not during stance. As its purpose is to replace the knee, the prosthetic knee joint is the most critical component of the prosthesis for trans-femoral amputees. A good prosthetic knee joint mimics the function of the normal knee: it provides structural support and stability during stance phase but can flex in a controllable manner during swing phase. This allows users a smooth and energy-efficient gait and minimizes the impact of amputation. The prosthetic knee is connected to the prosthetic foot by the shank, which is usually made of an aluminum or graphite tube. One of the most important aspects of a prosthetic knee joint is its stance-phase control mechanism. The function of stance-phase control is to prevent the leg from buckling when the limb is loaded during weight acceptance. This ensures the stability of the knee during the single-limb-support task of stance phase and provides a smooth transition to swing phase. Stance-phase control can be achieved in several ways, including mechanical locks, relative alignment of prosthetic components, weight-activated friction control, and polycentric mechanisms.

Microprocessor control

To mimic the knee's functionality during gait, microprocessor-controlled knee joints have been developed that control the flexion of the knee. Some examples are Otto Bock's C-Leg, introduced in 1997, Ossur's Rheo Knee, released in 2005, the Power Knee by Ossur, introduced in 2006, the Plié Knee from Freedom Innovations and DAW Industries' Self Learning Knee (SLK). The idea was originally developed by Kelly James, a Canadian engineer, at the University of Alberta. A microprocessor is used to interpret and analyze signals from knee-angle sensors and moment sensors.
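This sense-and-damp loop can be sketched in a few lines. The sketch below is not any vendor's algorithm: it assumes hypothetical sensor thresholds and valve settings, and simply classifies stance versus swing from knee angle and moment, then picks a damping level.

```python
# Minimal sketch of a microprocessor-knee control loop: classify the
# gait phase from knee-angle and moment sensors, then set a hydraulic
# valve opening that determines flexion resistance. Thresholds and
# valve values are illustrative assumptions.

def classify_phase(knee_angle_deg, knee_moment_nm):
    """A loaded, nearly straight knee implies stance; otherwise swing."""
    if knee_moment_nm > 5.0 and knee_angle_deg < 30.0:
        return "stance"
    return "swing"

def valve_opening(phase):
    """Small opening = high damping (resists buckling in stance);
    large opening = low damping (free swing)."""
    return 0.1 if phase == "stance" else 0.8

for angle, moment in [(5.0, 40.0), (45.0, 1.0)]:
    phase = classify_phase(angle, moment)
    print(phase, valve_opening(phase))
```

Real controllers run this loop many times per second and use more states than stance/swing, but the principle of mapping sensor readings to a damping command is the same.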
The microprocessor receives signals from its sensors to determine the type of motion being employed by the amputee. Most microprocessor-controlled knee joints are powered by a battery housed inside the prosthesis. The sensor signals computed by the microprocessor are used to control the resistance generated by hydraulic cylinders in the knee joint. Small valves control the amount of hydraulic fluid that can pass into and out of the cylinder, thus regulating the extension and compression of a piston connected to the upper section of the knee. The main advantage of a microprocessor-controlled prosthesis is a closer approximation to an amputee's natural gait. Some allow amputees to walk at near-normal speeds or to run. Variations in speed are also possible; they are registered by the sensors and communicated to the microprocessor, which adjusts accordingly. Microprocessor control also enables amputees to walk downstairs with a step-over-step approach, rather than the one-step-at-a-time approach used with mechanical knees. There is some research suggesting that people with microprocessor-controlled prostheses report greater satisfaction and improvement in functionality, residual limb health, and safety. People may be able to perform everyday activities at greater speeds, even while multitasking, and reduce their risk of falls. However, these devices have some significant drawbacks that impair their use: they can be susceptible to water damage, so great care must be taken to ensure that the prosthesis remains dry.

Myoelectric

A myoelectric prosthesis uses as its control information the electrical tension generated every time a muscle contracts. This tension can be captured from voluntarily contracted muscles by electrodes applied to the skin to control the movements of the prosthesis, such as elbow flexion/extension, wrist supination/pronation (rotation) or opening/closing of the fingers.
A prosthesis of this type utilizes the residual neuromuscular system of the human body to control the functions of an electrically powered prosthetic hand, wrist, elbow or foot. This is different from an electric switch prosthesis, which requires straps and/or cables actuated by body movements to operate switches that control the movements of the prosthesis. There is no clear evidence that myoelectric upper-extremity prostheses function better than body-powered prostheses. Advantages of a myoelectric upper-extremity prosthesis include a potentially more natural cosmetic appearance, possible advantages for light everyday activities, and possible benefit for people experiencing phantom limb pain. Compared to a body-powered prosthesis, a myoelectric prosthesis may not be as durable, may have a longer training time, may require more adjustments, may need more maintenance, and does not provide feedback to the user. Prof. Álvaro Ríos Poveda has been working for several years on a non-invasive and affordable solution to this feedback problem. He considers that: "Prosthetic limbs that can be controlled with thought hold great promise for the amputee, but without sensorial feedback from the signals returning to the brain, it can be difficult to achieve the level of control necessary to perform precise movements. When connecting the sense of touch from a mechanical hand directly to the brain, prosthetics can restore the function of the amputated limb in an almost natural-feeling way." He presented the first myoelectric prosthetic hand with sensory feedback at the XVIII World Congress on Medical Physics and Biomedical Engineering, held in Nice, France, in 1997. The USSR was the first to develop a myoelectric arm, in 1958; the first commercial myoelectric arm was produced in 1964 by the Central Prosthetic Research Institute of the USSR and distributed by the Hangar Limb Factory of the UK.
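The basic signal path of myoelectric control can be illustrated with a toy example: rectify and smooth the surface-EMG signal to obtain an activation envelope, then threshold the envelope into an open/close command. The window size and threshold below are assumptions for illustration, not values from any real device.

```python
# Hedged illustration of myoelectric control: EMG -> envelope -> command.

def emg_envelope(samples, window=4):
    """Moving average of the rectified (absolute-value) EMG signal."""
    env = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        chunk = [abs(s) for s in samples[lo:i + 1]]
        env.append(sum(chunk) / len(chunk))
    return env

def hand_command(envelope_value, threshold=0.3):
    """Map muscle activation level to a binary hand command."""
    return "close" if envelope_value > threshold else "open"

emg = [0.02, -0.05, 0.4, -0.6, 0.5, -0.45, 0.05, -0.02]
env = emg_envelope(emg)
print([hand_command(v) for v in env])
```

Practical systems use more elaborate features and proportional (not just binary) control, often across several electrode sites, but envelope extraction of this kind is the common first step.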
Myoelectric prostheses are expensive, require regular maintenance, and are sensitive to sweat and moisture, which affect sensor performance.

Robotic prostheses

Robots can be used to generate objective measures of a patient's impairment and therapy outcome, assist in diagnosis, customize therapies based on the patient's motor abilities, assure compliance with treatment regimens, and maintain the patient's records. Many studies have shown a significant improvement in upper-limb motor function after stroke when robotics are used for upper-limb rehabilitation. In order for a robotic prosthetic limb to work, it must have several components to integrate it into the body's function:

Biosensors detect signals from the user's nervous or muscular systems. They relay this information to a microcontroller located inside the device, and process feedback from the limb and actuator, e.g., position or force, which is sent to the controller. Examples include surface electrodes that detect electrical activity on the skin, needle electrodes implanted in muscle, and solid-state electrode arrays with nerves growing through them. One type of these biosensors is employed in myoelectric prostheses.
The controller is connected to the user's nerve and muscular systems and to the device itself. It sends intention commands from the user to the actuators of the device and interprets feedback from the mechanical sensors and biosensors to the user. The controller is also responsible for monitoring and controlling the movements of the device.
An actuator mimics the actions of a muscle in producing force and movement. Examples include a motor that aids or replaces original muscle tissue.

Targeted muscle reinnervation (TMR) is a technique in which motor nerves, which previously controlled muscles on an amputated limb, are surgically rerouted so that they reinnervate a small region of a large, intact muscle, such as the pectoralis major.
As a result, when a patient thinks about moving the thumb of their missing hand, a small area of muscle on their chest contracts instead. By placing sensors over the reinnervated muscle, these contractions can be made to control the movement of an appropriate part of the robotic prosthesis. A variant of this technique is called targeted sensory reinnervation (TSR). This procedure is similar to TMR, except that sensory nerves, rather than motor nerves, are surgically rerouted to skin on the chest. Recently, robotic limbs have improved in their ability to take signals from the human brain and translate them into motion in the artificial limb. DARPA, the Pentagon's research division, is working to make even more advancements in this area. Its goal is to create an artificial limb that ties directly into the nervous system.

Robotic arms

Advancements in the processors used in myoelectric arms have allowed developers to make gains in fine-tuned control of the prosthetic. The Boston Digital Arm is a recent artificial limb that has taken advantage of these more advanced processors. The arm allows movement in five axes and can be programmed for a more customized feel. Recently the I-LIMB Hand, invented in Edinburgh, Scotland, by David Gow, has become the first commercially available hand prosthesis with five individually powered digits. The hand also possesses a manually rotatable thumb, operated passively by the user, which allows the hand to grip in precision, power, and key grip modes. Another neural prosthetic is the Johns Hopkins University Applied Physics Laboratory Proto 1. Besides the Proto 1, the university also finished the Proto 2 in 2010. Early in 2013, Max Ortiz Catalan and Rickard Brånemark of the Chalmers University of Technology and Sahlgrenska University Hospital in Sweden succeeded in making the first mind-controlled robotic arm that can be permanently attached to the body (using osseointegration).
A very useful approach is arm rotation, which is common for unilateral amputees (an amputation affecting only one side of the body) and essential for bilateral amputees (those missing both arms or both legs) to carry out activities of daily living. This involves inserting a small permanent magnet into the distal end of the residual bone of subjects with upper-limb amputations. When a subject rotates the residual arm, the magnet rotates with the residual bone, causing a change in magnetic field distribution. EEG (electroencephalogram) signals, detected using small flat metal discs attached to the scalp and decoding the brain activity underlying physical movement, are used to control the robotic limbs. This allows the user to control the part directly.

Robotic transtibial prostheses

Research on robotic legs has advanced over time, allowing exact movement and control. Researchers at the Rehabilitation Institute of Chicago announced in September 2013 that they had developed a robotic leg that translates neural impulses from the user's thigh muscles into movement, the first prosthetic leg to do so. It is currently in testing. Hugh Herr, head of the biomechatronics group at MIT's Media Lab, developed a robotic transtibial leg (PowerFoot BiOM). The Icelandic company Össur has also created a robotic transtibial leg with a motorized ankle that moves through algorithms and sensors that automatically adjust the angle of the foot at different points in its wearer's stride. There are also brain-controlled bionic legs that allow an individual to move the limb with a wireless transmitter.

Prosthesis design

The main goal of a robotic prosthesis is to provide active actuation during gait to improve its biomechanics, including, among other things, stability, symmetry, and energy expenditure for amputees.
There are several powered prosthetic legs currently on the market, including fully powered legs, in which actuators directly drive the joints, and semi-active legs, which use small amounts of energy and a small actuator to change the mechanical properties of the leg but do not inject net positive energy into gait. Specific examples include the emPOWER from BionX, the Proprio Foot from Ossur, and the Elan Foot from Endolite. Various research groups have also experimented with robotic legs over the last decade. Central issues being researched include designing the behavior of the device during stance and swing phases, recognizing the current ambulation task, and various mechanical design problems such as robustness, weight, battery life/efficiency, and noise level. Scientists from Stanford University and Seoul National University have developed an artificial nerve system intended to help prosthetic limbs feel. This synthetic nerve system enables a prosthetic limb to sense braille, feel touch, and respond to the environment.

Use of recycled materials

Prosthetics are being made from recycled plastic bottles and lids around the world.

Direct bone attachment and osseointegration

Most prostheses are attached to the exterior of the body in a non-permanent way. Because the stump-and-socket method can cause significant pain, direct bone attachment has been explored extensively. Osseointegration is a method of attaching the artificial limb to the body by a prosthetic implant. This method is also sometimes referred to as exoprosthesis (attaching an artificial limb to the bone), or endo-exoprosthesis. Endoprostheses are prosthetic joint implants that remain wholly inside the body, such as knee and hip replacement implants. The method works by inserting a titanium bolt into the bone at the end of the stump. After several months the bone attaches itself to the titanium bolt, and an abutment is attached to the bolt.
The abutment extends out of the stump, and the (removable) artificial limb is then attached to the abutment. Some of the benefits of this method include the following:

Better muscle control of the prosthetic.
The ability to wear the prosthetic for an extended period of time; with the stump-and-socket method this is not possible.
The ability for transfemoral amputees to drive a car.

The main disadvantage of this method is that amputees with direct bone attachment cannot subject the limb to large impacts, such as those experienced during jogging, because of the potential for the bone to break.

Cosmesis

Cosmetic prostheses have long been used to disguise injuries and disfigurements. With advances in modern technology, cosmesis, the creation of lifelike limbs made from silicone or PVC, has become possible. Such prosthetics, including artificial hands, can now be designed to simulate the appearance of real hands, complete with freckles, veins, hair, fingerprints and even tattoos. Custom-made cosmeses are generally more expensive (costing thousands of U.S. dollars, depending on the level of detail), while standard cosmeses come premade in a variety of sizes, although they are often not as realistic as their custom-made counterparts. Another option is the custom-made silicone cover, which can be made to match a person's skin tone but not details such as freckles or wrinkles. Cosmeses are attached to the body in any number of ways, using an adhesive, suction, form-fitting, stretchable skin, or a skin sleeve.

Cognition

Unlike neuromotor prostheses, neurocognitive prostheses would sense or modulate neural function in order to physically reconstitute or augment cognitive processes such as executive function, attention, language, and memory.
No neurocognitive prostheses are currently available, but the development of implantable neurocognitive brain-computer interfaces has been proposed to help treat conditions such as stroke, traumatic brain injury, cerebral palsy, autism, and Alzheimer's disease. The recent field of assistive technology for cognition concerns the development of technologies to augment human cognition. Scheduling devices such as Neuropage remind users with memory impairments when to perform certain activities, such as visiting the doctor. Micro-prompting devices such as PEAT, AbleLink and Guide have been used to help users with memory and executive function problems perform activities of daily living.

Prosthetic enhancement

In addition to the standard artificial limb for everyday use, many amputees or congenital patients have special limbs and devices to aid participation in sports and recreational activities. Within science fiction and, more recently, within the scientific community, consideration has been given to using advanced prostheses to replace healthy body parts with artificial mechanisms and systems to improve function. The morality and desirability of such technologies are being debated by transhumanists, other ethicists, and others in general. Body parts such as legs, arms, hands, and feet can be replaced. The first experiment with a healthy individual appears to have been that of the British scientist Kevin Warwick. In 2002, an implant was interfaced directly into Warwick's nervous system. The electrode array, which contained around a hundred electrodes, was placed in the median nerve. The signals produced were detailed enough that a robot arm was able to mimic the actions of Warwick's own arm and provide a form of touch feedback, again via the implant. The DEKA company of Dean Kamen developed the "Luke arm", an advanced nerve-controlled prosthetic.
Clinical trials began in 2008, with FDA approval in 2014 and commercial manufacturing by the Universal Instruments Corporation expected in 2017. The price offered at retail by Mobius Bionics is expected to be around $100,000. Since April 2019, further research has produced improvements in the function and comfort of 3D-printed personalized wearable systems. Instead of manual integration after printing, integrating electronic sensors at the interface between a prosthetic and the wearer's tissue can gather information such as pressure across the wearer's tissue, which can help improve further iterations of these types of prosthetic.

Oscar Pistorius

In early 2008, Oscar Pistorius, the "Blade Runner" of South Africa, was briefly ruled ineligible to compete in the 2008 Summer Olympics because his transtibial prosthetic limbs were said to give him an unfair advantage over runners with intact ankles. One researcher found that his limbs used twenty-five percent less energy than those of a non-disabled runner moving at the same speed. This ruling was overturned on appeal, with the appellate court stating that the overall set of advantages and disadvantages of Pistorius' limbs had not been considered. Pistorius did not qualify for the South African team for the Olympics, but went on to sweep the 2008 Summer Paralympics, and was ruled eligible to qualify for any future Olympics. He qualified for the 2011 World Championships in South Korea and reached the semi-final, where he finished last by time; he was 14th in the first round, and his personal best at 400 m would have given him 5th place in the final. At the 2012 Summer Olympics in London, Pistorius became the first amputee runner to compete at an Olympic Games. He ran in the 400 metres semi-finals and the 4 × 400 metres relay final. He also competed in five events at the 2012 Summer Paralympics in London.

Design considerations

There are multiple factors to consider when designing a transtibial prosthesis.
Manufacturers must make choices about their priorities regarding these factors.

Performance

Nonetheless, there are certain elements of socket and foot mechanics that are invaluable for the athlete, and these are the focus of today's high-tech prosthetics companies:

Fit – athletic/active amputees, or those with bony residua, may require a carefully detailed socket fit; less-active patients may be comfortable with a 'total contact' fit and gel liner
Energy storage and return – storage of energy acquired through ground contact and utilization of that stored energy for propulsion
Energy absorption – minimizing the effect of high impact on the musculoskeletal system
Ground compliance – stability independent of terrain type and angle
Rotation – ease of changing direction
Weight – maximizing comfort, balance and speed
Suspension – how the socket will join and fit to the limb

Other

The buyer is also concerned with numerous other factors:

Cosmetics
Cost
Ease of use
Size availability

Design for Prosthetics

A key issue in prosthetic design is the idea of "designing for disabilities." This might sound like an approach in which people with disabilities participate in equitable design, but unfortunately that is often not the case. Designing for disabilities is problematic first because of the underlying framing of disability: it tells amputees that there is a right and a wrong way to move and walk, and that if amputees adapt to the surrounding environment by their own means, that is the wrong way. Moreover, many people designing for disabilities are not themselves disabled. "Design for disability" of this kind takes disability as its object, with non-disabled designers feeling they have properly learned about their job from their own simulation of the experience.
The simulation is misleading and does a disservice to disabled people, so the design that flows from it is highly problematic. Engaging in disability design should be… with, ideally, team members who have the relevant disability and are part of communities that matter to the research. Otherwise, people who do not know the day-to-day personal experiences end up designing products that fail to meet, or actively hinder, the needs of people with actual disabilities.

Cost and source freedom

High-cost

In the USA a typical prosthetic limb costs anywhere between $15,000 and $90,000, depending on the type of limb desired by the patient. With medical insurance, a patient will typically pay 10%–50% of the total cost of a prosthetic limb, while the insurance company covers the rest. The percentage the patient pays varies with the type of insurance plan, as well as the limb requested. In the United Kingdom, much of Europe, Australia and New Zealand, the entire cost of prosthetic limbs is met by state funding or statutory insurance. For example, in Australia prostheses are fully funded by state schemes in the case of amputation due to disease, and by workers' compensation or traffic injury insurance in the case of most traumatic amputations. The National Disability Insurance Scheme, rolled out nationally between 2017 and 2020, also pays for prostheses. Transradial (below-the-elbow) and transtibial (below-the-knee) prostheses typically cost between US $6,000 and $8,000, while transfemoral (above-the-knee) and transhumeral (above-the-elbow) prosthetics cost approximately twice as much, with a range of $10,000 to $15,000, and can sometimes reach $35,000. The cost of an artificial limb recurs, because a limb typically needs to be replaced every 3–4 years due to the wear and tear of everyday use.
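A back-of-envelope calculation makes the recurring-cost point concrete. The figures below use the ranges quoted above (a 10–50% patient share, a 3–4 year limb lifespan); the specific inputs are illustrative, and real coverage varies by plan.

```python
# Rough cost arithmetic using the ranges quoted in the text.
# Inputs are illustrative; real coverage varies by insurance plan.

def out_of_pocket(limb_cost, patient_share):
    """Patient's share of one limb's cost (share in [0.10, 0.50])."""
    return limb_cost * patient_share

def replacements(years, lifespan_years=3.5):
    """Limbs needed over a period, assuming a ~3-4 year lifespan."""
    return max(1, round(years / lifespan_years))

cost = 15_000   # low end of a US prosthetic limb cost
share = 0.10    # low end of the insured patient's share
print(out_of_pocket(cost, share))   # per-limb out-of-pocket cost
print(replacements(20))             # limbs needed over 20 years
```

Even at the low end of both ranges, the replacement cycle multiplies the lifetime cost several times over, which is consistent with the large lifetime treatment figures cited below.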
In addition, if the socket develops fit issues, it must be replaced within several months of the onset of pain. If height is an issue, components such as pylons can be changed. Not only does the patient need to pay for multiple prosthetic limbs, but they also need to pay for the physical and occupational therapy that comes with adapting to life with an artificial limb. Unlike the recurring cost of the limbs themselves, the patient will typically only pay $2,000 to $5,000 for therapy during the first year or two of living as an amputee. Once the patient is strong and comfortable with the new limb, therapy is generally no longer required. Over a lifetime, it is projected that a typical amputee will go through $1.4 million worth of treatment, including surgeries, prosthetics, and therapies.

Low-cost

Low-cost above-knee prostheses often provide only basic structural support with limited function. This function is often achieved with crude, non-articulating, unstable, or manually locking knee joints. A limited number of organizations, such as the International Committee of the Red Cross (ICRC), create devices for developing countries. The ICRC's device, manufactured by CR Equipments, is a single-axis, manually operated locking polymer prosthetic knee joint. Table. List of knee joint technologies based on the literature review. A plan for a low-cost artificial leg, designed by Sébastien Dubois, was featured at the 2007 International Design Exhibition and award show in Copenhagen, Denmark, where it won the Index: Award. The design would make it possible to produce an energy-return prosthetic leg, composed primarily of fiberglass, for US$8.00. Prior to the 1980s, foot prostheses merely restored basic walking capability. These early devices can be characterized as a simple artificial attachment connecting the residual limb to the ground.
The introduction of the Seattle Foot (Seattle Limb Systems) in 1981 revolutionized the field, bringing the concept of an Energy Storing Prosthetic Foot (ESPF) to the fore. Other companies soon followed suit, and before long there were multiple models of energy-storing prostheses on the market. Each model utilized some variation of a compressible heel: the heel is compressed during initial ground contact, storing energy which is then returned during the latter phase of ground contact to help propel the body forward. Since then, the foot prosthetics industry has been dominated by steady, small improvements in performance, comfort, and marketability. With 3D printers it is possible to manufacture a single product without metal molds, so costs can be drastically reduced. The Jaipur foot, an artificial limb from Jaipur, India, costs about US$40.

Open-source robotic prosthesis

There is currently an open-design prosthetics forum known as the "Open Prosthetics Project". The group employs collaborators and volunteers to advance prosthetics technology while attempting to lower the costs of these necessary devices. Open Bionics is a company that is developing open-source robotic prosthetic hands. It uses 3D printing to manufacture the devices and low-cost 3D scanners to fit them to the residual limb of a specific patient, with the aim of lowering the cost. Open Bionics' use of 3D printing also allows for more personalized designs, such as the "Hero Arm", which incorporates the user's favourite colours and textures and can even be styled to look like superheroes or characters from Star Wars. A review study of a wide range of printed prosthetic hands found that 3D printing technology holds promise for individualised prosthesis design and is cheaper than commercial prostheses available on the market, though more expensive than mass-production processes such as injection molding.
The same study also found that evidence on the functionality, durability and user acceptance of 3D-printed hand prostheses is still lacking.

Low-cost prosthetics for children

In the USA, an estimated 32,500 children (<21 years) have had a major paediatric amputation, with 5,525 new cases each year, of which 3,315 are congenital. Carr et al. (1998) investigated amputations caused by landmines among children (<14 years) in Afghanistan, Bosnia and Herzegovina, Cambodia and Mozambique, finding estimates of 4.7, 0.19, 1.11 and 0.67 per 1,000 children respectively. Mohan (1986) reported a total of 424,000 amputees in India (23,500 annually), of whom 10.3% had an onset of disability below the age of 14, amounting to a total of about 43,700 limb-deficient children in India alone. Few low-cost solutions have been created specifically for children. Examples of low-cost prosthetic devices include:

Pole and crutch
This hand-held pole with a leather support band or platform for the limb is one of the simplest and cheapest solutions available. It serves well as a short-term solution, but is prone to rapid contracture formation if the limb is not stretched daily through a series of range-of-motion (RoM) exercises.

Bamboo, PVC or plaster limbs
This likewise fairly simple solution comprises a plaster socket with a bamboo or PVC pipe at the bottom, optionally attached to a prosthetic foot. It prevents contractures because the knee is moved through its full RoM. The David Werner Collection, an online database for the assistance of disabled village children, provides manuals for producing these solutions.

Adjustable bicycle limb
This solution is built using a bicycle seat post turned upside down as a foot, providing flexibility and (length) adjustability. It is a very cheap solution, using locally available materials.

Sathi Limb
An endoskeletal modular lower limb from India that uses thermoplastic parts. Its main advantages are its low weight and adaptability.
Monolimb
Monolimbs are non-modular prostheses and thus require a more experienced prosthetist for correct fitting, because alignment can barely be changed after production. However, their durability is on average better than that of low-cost modular solutions.

Cultural and social theory perspectives

A number of theorists have explored the meaning and implications of prosthetic extension of the body. Elizabeth Grosz writes, "Creatures use tools, ornaments, and appliances to augment their bodily capacities. Are their bodies lacking something, which they need to replace with artificial or substitute organs?...Or conversely, should prostheses be understood, in terms of aesthetic reorganization and proliferation, as the consequence of an inventiveness that functions beyond and perhaps in defiance of pragmatic need?" Elaine Scarry argues that every artifact recreates and extends the body: chairs supplement the skeleton, tools append the hands, clothing augments the skin. In Scarry's thinking, "furniture and houses are neither more nor less interior to the human body than the food it absorbs, nor are they fundamentally different from such sophisticated prosthetics as artificial lungs, eyes and kidneys. The consumption of manufactured things turns the body inside out, opening it up to and as the culture of objects." Mark Wigley, a professor of architecture, continues this line of thinking about how architecture supplements our natural capabilities, arguing that "a blurring of identity is produced by all prostheses." Some of this work relies on Freud's earlier characterization of man's relation to objects as one of extension.

Negative social implications

Prosthetics play a vital role in how a person perceives themselves and how other people perceive them. The ability to conceal prosthesis use enabled study participants to ward off social stigmatization, which in turn enabled their social integration and reduced the emotional problems surrounding disability.
People who lose a limb must first deal with the emotional consequences of that loss. Regardless of the reason for amputation, whether traumatic injury or illness, emotional shock occurs; its magnitude depends on a variety of factors such as patient age, medical culture, and medical cause. In one study, participants' accounts of their amputations were loaded with drama: the first emotional response was one of despair, a severe sense of self-collapse, something almost unbearable. Emotional factors are just one part of the social implications. Many people who lose a limb experience considerable anxiety surrounding prosthetics and their limbs. For an extended period after surgery, patients interviewed in a National Library of Medicine study noticed the appearance and growth of anxiety, with many negative thoughts invading their minds. Projections about the future were grim, marked by sadness, helplessness, and even despair. Existential uncertainty, lack of control, and further anticipated losses in one's life due to amputation were the primary causes of anxiety, and consequently of rumination and insomnia. Losing a limb and receiving a prosthesis can also provoke anger and regret. The amputation of a limb is associated not only with physical loss and a change in body image but also with an abrupt severing of one's sense of continuity. For participants whose amputation resulted from physical trauma, the event is often experienced as a transgression and can lead to frustration and anger.

Ethical concerns

There are also many ethical concerns about how prosthetics are made and produced.
A wide range of ethical issues arise in connection with experiments and clinical usage of sensory prostheses: animal experimentation; informed consent, for instance in patients with locked-in syndrome that may be alleviated with a sensory prosthesis; and unrealistic expectations among research subjects testing new devices. How a prosthetic comes to be, and how the usability of the device is tested, are major concerns in the medical world. Although many positives come when a new prosthetic design is announced, how the device got to where it is leads some to question the ethics of prosthetics.

Debates

There are also debates within the amputee community about whether to wear prosthetics at all, sparked by the question of whether prosthetics help in day-to-day living or make it harder. Many people have adapted to the loss of a limb, making it work for them, and do not need a prosthesis in their lives. Not all amputees will wear a prosthesis. In a 2011 national survey of Australian amputees, Limbs 4 Life found that 7 percent of amputees do not wear a prosthesis, and in another Australian hospital study this number was closer to 20 percent. Many people report being uncomfortable in prostheses and not wanting to wear them, even reporting that wearing a prosthesis is more cumbersome than not having one at all. These debates are natural within the amputee community and help shed light on the issues its members face.

Notable users of prosthetic devices

Henry William Paget, 1st Marquess of Anglesey (1768–1854), whose leg was amputated at the Battle of Waterloo
Marie Moentmann (1900–74), child survivor of an industrial accident
Terry Fox (1958–81), Canadian athlete, humanitarian, and cancer research activist
Oscar Pistorius (born 1986), South African former professional sprinter
Harold Russell (1914–2002), WWII veteran, Academy Award-winning actor
https://en.wikipedia.org/wiki/Organic%20farming
Organic farming
Organic farming, also known as organic agriculture, ecological farming or biological farming, is an agricultural system that emphasizes the use of naturally occurring, non-synthetic inputs such as compost, manure, green manure, and bone meal, and places emphasis on techniques such as crop rotation, companion planting, and mixed cropping. Biological pest control methods such as the fostering of insect predators are also encouraged. Organic agriculture can be defined as "an integrated farming system that strives for sustainability, the enhancement of soil fertility and biological diversity while, with rare exceptions, prohibiting synthetic pesticides, antibiotics, synthetic fertilizers, genetically modified organisms, and growth hormones".<ref>H. Martin, ''Introduction to Organic Farming'', Ontario Ministry of Agriculture, Food and Rural Affairs.</ref> It originated early in the 20th century in reaction to rapidly changing farming practices. Certified organic agriculture today accounts for approximately 2 percent of global farmland, with over half of that total in Australia. Organic standards are designed to allow the use of naturally occurring substances while prohibiting or severely limiting synthetic substances. For instance, naturally occurring pesticides such as garlic extract, bicarbonate of soda, or pyrethrin (which is found naturally in the Chrysanthemum flower) are permitted, while synthetic fertilizers and pesticides such as glyphosate are prohibited. Synthetic substances that are allowed only in exceptional circumstances may include copper sulfate, elemental sulfur, and veterinary drugs. Genetically modified organisms, nanomaterials, human sewage sludge, plant growth regulators, hormones, and antibiotic use in livestock husbandry are prohibited. Broadly, organic agriculture is based on the principles of health, care for all living beings and the environment, ecology, and fairness.
Organic methods champion sustainability, self-sufficiency, autonomy and independence, health, animal welfare, food security, and food safety. Organic farming is often seen as part of the solution to the impacts of climate change. Organic agricultural methods are internationally regulated and legally enforced by transnational organizations such as the European Union and also by individual nations, based in large part on the standards set by the International Federation of Organic Agriculture Movements (IFOAM), an international umbrella organization for organic farming organizations established in 1972, with regional branches such as IFOAM Organics Europe and IFOAM Asia. Since 1990, the market for organic food and other products has grown rapidly, reaching $150 billion worldwide in 2022, of which more than $64 billion was earned in North America and EUR 53 billion in Europe. This demand has driven a similar increase in organically managed farmland, which grew by 26.6 percent from 2021 to 2022. As of 2022, organic farming is practiced in 188 countries, with approximately 2 percent of total world farmland farmed organically by 4.5 million farmers.

History

Agriculture was practiced for thousands of years without the use of artificial chemicals. Artificial fertilizers were first developed during the mid-19th century. These early fertilizers were cheap, powerful, and easy to transport in bulk. Similar advances occurred in chemical pesticides in the 1940s, leading to the decade being referred to as the "pesticide era". These new agricultural techniques, while beneficial in the short term, had serious longer-term side effects such as soil compaction, erosion, and declines in overall soil fertility, along with health concerns about toxic chemicals entering the food supply. In the late 1800s and early 1900s, soil biology scientists began to seek ways to remedy these side effects while still maintaining higher production.
In 1921 the founder and pioneer of the organic movement Albert Howard and his wife Gabrielle Howard, accomplished botanists, founded an Institute of Plant Industry to improve traditional farming methods in India. Among other things, they brought improved implements and improved animal husbandry methods from their scientific training; then by incorporating aspects of Indian traditional methods, developed protocols for the rotation of crops, erosion prevention techniques, and the systematic use of composts and manures. Stimulated by these experiences of traditional farming, when Albert Howard returned to Britain in the early 1930s he began to promulgate a system of organic agriculture. In 1924 Rudolf Steiner gave a series of eight lectures on agriculture with a focus on influences of the moon, planets, non-physical beings and elemental forces (Paull, John (2013), "Breslau (Wrocław): In the footsteps of Rudolf Steiner", Journal of Bio-Dynamics Tasmania, 110:10–15). They were held in response to a request by adherent farmers who noticed degraded soil conditions and a deterioration in the health and quality of crops and livestock resulting from the use of chemical fertilizers. The lectures were published in November 1924; the first English translation appeared in 1928 as The Agriculture Course. In July 1939, Ehrenfried Pfeiffer, the author of the standard work on biodynamic agriculture (Bio-Dynamic Farming and Gardening), came to the UK at the invitation of Walter James, 4th Baron Northbourne, as a presenter at the Betteshanger Summer School and Conference on Biodynamic Farming at Northbourne's farm in Kent. One of the chief purposes of the conference was to bring together the proponents of various approaches to organic agriculture in order that they might cooperate within a larger movement. Howard attended the conference, where he met Pfeiffer.
In the following year, Northbourne published his manifesto of organic farming, Look to the Land, in which he coined the term "organic farming". The Betteshanger conference has been described as the 'missing link' between biodynamic agriculture and other forms of organic farming. In 1940 Howard published his An Agricultural Testament. In this book he adopted Northbourne's terminology of "organic farming". Howard's work spread widely, and he became known as the "father of organic farming" for his work in applying scientific knowledge and principles to various traditional and natural methods. In the United States J. I. Rodale, who was keenly interested both in Howard's ideas and in biodynamics, founded in the 1940s both a working organic farm for trials and experimentation, The Rodale Institute, and Rodale, Inc. in Emmaus, Pennsylvania to teach and advocate organic methods to the wider public. These became important influences on the spread of organic agriculture. Further work was done by Lady Eve Balfour (the Haughley Experiment) in the United Kingdom, and many others across the world. The term "eco-agriculture" was coined in 1970 by Charles Walters, founder of Acres Magazine, to describe agriculture which does not use "man-made molecules of toxic rescue chemistry", effectively another name for organic agriculture. Increasing environmental awareness in the general population in modern times has transformed the originally supply-driven organic movement to a demand-driven one. Premium prices and some government subsidies attracted farmers. In the developing world, many producers farm according to traditional methods that are comparable to organic farming, but not certified, and that may not include the latest scientific advancements in organic agriculture. In other cases, farmers in the developing world have converted to modern organic methods for economic reasons. 
Terminology

The use of "organic" popularized by Howard and Rodale refers more narrowly to the use of organic matter derived from plant compost and animal manures to improve the humus content of soils, grounded in the work of early soil scientists who developed what was then called "humus farming". Biodynamic agriculturists, on the other hand, used the term "organic" to indicate that a farm should be viewed as a living organism; they based their work on Steiner's spiritually oriented alternative agriculture, which includes various esoteric concepts. Since the early 1940s the two camps have tended to merge.

Methods

Organic farming methods combine scientific knowledge of ecology and some modern technology with traditional farming practices based on naturally occurring biological processes. Organic farming methods are studied in the field of agroecology. While conventional agriculture uses synthetic pesticides and water-soluble synthetically purified fertilizers, organic farmers are restricted by regulations to using natural pesticides and fertilizers. An example of a natural pesticide is pyrethrin, which is found naturally in the Chrysanthemum flower. The principal methods of organic farming include crop rotation, green manures and compost, biological pest control, and mechanical cultivation. These measures use the natural environment to enhance agricultural productivity: legumes are planted to fix nitrogen into the soil, natural insect predators are encouraged, crops are rotated to confuse pests and renew soil, and natural materials such as potassium bicarbonate and mulches are used to control disease and weeds. Genetically modified seeds and animals are excluded.
While organic is fundamentally different from conventional because of the use of carbon-based fertilizers compared with highly soluble synthetic fertilizers, and biological pest control instead of synthetic pesticides, organic farming and large-scale conventional farming are not entirely mutually exclusive. Many of the methods developed for organic agriculture have been borrowed by more conventional agriculture. For example, Integrated Pest Management (IPM) is a multifaceted strategy that uses various organic methods of pest control whenever possible, but in conventional farming could include synthetic pesticides only as a last resort. Examples of beneficial insects that are used in organic farming include ladybugs and lacewings, both of which feed on aphids. The use of IPM lowers the possibility of pests developing resistance to pesticides that are applied to crops.

Crop diversity

Organic farming encourages crop diversity by promoting polyculture (multiple crops in the same space). Planting a variety of vegetable crops supports a wider range of beneficial insects, soil microorganisms, and other factors that add up to overall farm health. Crop diversity helps the environment to thrive and protects species from going extinct (Crop diversity: A Distinctive Characteristic of an Organic Farming Method - Organic Farming; 15 April 2013). The science of agroecology has revealed the benefits of polyculture, which is often employed in organic farming. Agroecology is a scientific discipline that uses ecological theory to study, design, manage, and evaluate agricultural systems that are productive and resource-conserving, and that are also culturally sensitive, socially just, and economically viable. Incorporating crop diversity into organic farming practices can have several benefits. For instance, it can help to increase soil fertility by promoting the growth of beneficial soil microorganisms.
It can also help to reduce pest and disease pressure by creating a more diverse and resilient agroecosystem. Furthermore, crop diversity can help to improve the nutritional quality of food by providing a wider range of essential nutrients.

Soil management

Organic farming relies more heavily on the natural breakdown of organic matter than the average conventional farm, using techniques like green manure and composting to replace nutrients taken from the soil by previous crops. This biological process, driven by microorganisms such as mycorrhiza and earthworms, makes nutrients available to plants throughout the growing season. Farmers use a variety of methods to improve soil fertility, including crop rotation, cover cropping, reduced tillage, and application of compost. By reducing fuel-intensive tillage, less soil organic matter is lost to the atmosphere. This has an added benefit of carbon sequestration, which reduces greenhouse gas emissions and helps mitigate climate change. Reducing tillage may also improve soil structure and reduce the potential for soil erosion. Plants need a large number of nutrients in various quantities to flourish. Supplying enough nitrogen, and particularly synchronization so that plants get enough nitrogen at the time when they need it most, is a challenge for organic farmers. Crop rotation and green manure ("cover crops") help to provide nitrogen through legumes (more precisely, the family Fabaceae), which fix nitrogen from the atmosphere through symbiosis with rhizobial bacteria. Intercropping, which is sometimes used for insect and disease control, can also increase soil nutrients, but the competition between the legume and the crop can be problematic and wider spacing between crop rows is required. Crop residues can be ploughed back into the soil, and different plants leave different amounts of nitrogen, potentially aiding synchronization.
Organic farmers also use animal manure, certain processed fertilizers such as seed meal, and various mineral powders such as rock phosphate and greensand, a naturally occurring form of potash that provides potassium. In some cases pH may need to be amended. Natural pH amendments include lime and sulfur, but in the U.S. some compounds such as iron sulfate, aluminum sulfate, magnesium sulfate, and soluble boron products are allowed in organic farming. Mixed farms with both livestock and crops can operate as ley farms, whereby the land gathers fertility through growing nitrogen-fixing forage grasses such as white clover or alfalfa and grows cash crops or cereals when fertility is established. Farms without livestock ("stockless") may find it more difficult to maintain soil fertility, and may rely more on external inputs such as imported manure as well as grain legumes and green manures, although grain legumes may fix limited nitrogen because they are harvested. Horticultural farms that grow fruits and vegetables in protected conditions often rely even more on external inputs. Manure is very bulky and is often not cost-effective to transport more than a short distance from the source. Manure for organic farms may become scarce if a sizable number of farms become organically managed.

Weed management

Organic weed management promotes weed suppression, rather than weed elimination, by enhancing crop competition and phytotoxic effects on weeds. Organic farmers integrate cultural, biological, mechanical, physical and chemical tactics to manage weeds without synthetic herbicides. Organic standards require rotation of annual crops, meaning that a single crop cannot be grown in the same location without a different, intervening crop. Organic crop rotations frequently include weed-suppressive cover crops and crops with dissimilar life cycles to discourage weeds associated with a particular crop.
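The rotation requirement described above (a single annual crop cannot be grown in the same location without a different, intervening crop) can be expressed as a simple check. This is a minimal sketch; the function name and the example crop sequences are hypothetical and not part of any organic standard.

```python
# Minimal sketch of the rotation rule: a field's season-by-season crop
# sequence must never repeat the same crop in two consecutive seasons.

def satisfies_rotation(crop_sequence: list[str]) -> bool:
    """True if no crop appears in two consecutive seasons."""
    return all(a != b for a, b in zip(crop_sequence, crop_sequence[1:]))

# A rotation alternating a cash crop with a nitrogen-fixing cover crop:
print(satisfies_rotation(["maize", "clover", "wheat", "clover"]))  # True
# Growing the same crop two seasons running violates the rule:
print(satisfies_rotation(["maize", "maize", "clover"]))            # False
```

A real rotation plan would add further constraints, such as including weed-suppressive cover crops and crops with dissimilar life cycles, as the text notes.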
Research is ongoing to develop organic methods to promote the growth of natural microorganisms that suppress the growth or germination of common weeds. Other cultural practices used to enhance crop competitiveness and reduce weed pressure include selection of competitive crop varieties, high-density planting, tight row spacing, and late planting into warm soil to encourage rapid crop germination. Mechanical and physical weed control practices used on organic farms can be broadly grouped as:
Tillage – turning the soil between crops to incorporate crop residues and soil amendments, remove existing weed growth, and prepare a seedbed for planting; and turning the soil after seeding to kill weeds, including cultivation of row crops.
Mowing and cutting – removing top growth of weeds.
Flame weeding and thermal weeding – using heat to kill weeds.
Mulching – blocking weed emergence with organic materials, plastic films, or landscape fabric.
Some naturally sourced chemicals are allowed for herbicidal use. These include certain formulations of acetic acid (concentrated vinegar), corn gluten meal, and essential oils. A few selective bioherbicides based on fungal pathogens have also been developed. At this time, however, organic herbicides and bioherbicides play a minor role in the organic weed control toolbox. Weeds can also be controlled by grazing. For example, geese have been used successfully to weed a range of organic crops including cotton, strawberries, tobacco, and corn, reviving the practice of keeping cotton patch geese, common in the southern U.S. before the 1950s. Similarly, some rice farmers introduce ducks and fish to wet paddy fields to eat both weeds and insects.

Controlling other organisms

Organisms aside from weeds that cause problems on farms include arthropods (e.g., insects, mites), nematodes, fungi and bacteria.
Practices include, but are not limited to, the use of beneficial insects and naturally derived pesticides. Examples of predatory beneficial insects include minute pirate bugs, big-eyed bugs, and to a lesser extent ladybugs (which tend to fly away), all of which eat a wide range of pests. Lacewings are also effective, but tend to fly away. Praying mantises tend to move more slowly and eat less heavily. Parasitoid wasps tend to be effective against their selected prey, but like all small insects can be less effective outdoors because wind controls their movement. Predatory mites are effective for controlling other mites. Naturally derived insecticides allowed for use on organic farms include Bacillus thuringiensis (a bacterial toxin), pyrethrum (a chrysanthemum extract), spinosad (a bacterial metabolite), neem (a tree extract) and rotenone (a legume root extract). Fewer than 10% of organic farmers use these pesticides regularly; a 2003 survey found that only 5.3% of vegetable growers in California use rotenone, while 1.7% use pyrethrum. These pesticides are not always safer or more environmentally friendly than synthetic pesticides, and can cause harm. The main criterion for organic pesticides is that they are naturally derived, and some naturally derived substances have been controversial. Controversial natural pesticides include rotenone, copper, nicotine sulfate, and pyrethrums (Pottorff LP. Some Pesticides Permitted in Organic Gardening. Colorado State University Cooperative Extension). Rotenone and pyrethrum are particularly controversial because they work by attacking the nervous system, like most conventional insecticides. Rotenone is extremely toxic to fish and can induce symptoms resembling Parkinson's disease in mammals. Although pyrethrum (natural pyrethrins) is more effective against insects when used with piperonyl butoxide (which retards degradation of the pyrethrins), organic standards generally do not permit use of the latter substance (OGA. 2004. OGA standard. Organic Growers of Australia. Inc. 32 pp.).
Naturally derived fungicides allowed for use on organic farms include the bacteria Bacillus subtilis and Bacillus pumilus, and the fungus Trichoderma harzianum. These are mainly effective for diseases affecting roots. Compost tea contains a mix of beneficial microbes, which may attack or out-compete certain plant pathogens, but variability among formulations and preparation methods may contribute to inconsistent results or even dangerous growth of toxic microbes in compost teas. Some naturally derived pesticides are not allowed for use on organic farms. These include nicotine sulfate, arsenic, and strychnine. Synthetic pesticides allowed for use on organic farms include insecticidal soaps and horticultural oils for insect management, and Bordeaux mixture, copper hydroxide and sodium bicarbonate for managing fungi. Copper sulfate and Bordeaux mixture (copper sulfate plus lime), approved for organic use in various jurisdictions, can be more environmentally problematic than some synthetic fungicides disallowed in organic farming (Leake, A. R. 1999. House of Lords Select Committee on the European Communities. Session 1998–99, 16th Report. Organic Farming and the European Union. p. 81). Similar concerns apply to copper hydroxide. Repeated application of copper sulfate or copper hydroxide as a fungicide may eventually result in copper accumulation to toxic levels in soil, and admonitions to avoid excessive accumulations of copper in soil appear in various organic standards and elsewhere. Environmental concerns for several kinds of biota arise at average rates of use of such substances for some crops. In the European Union, where replacement of copper-based fungicides in organic agriculture is a policy priority, research is seeking alternatives for organic production.

Livestock

Raising livestock and poultry, for meat, dairy and eggs, is another traditional farming activity that complements growing.
Organic farms attempt to provide animals with natural living conditions and feed. Organic certification verifies that livestock are raised according to the USDA organic regulations throughout their lives. These regulations include the requirement that all animal feed must be certified organic. Organic livestock may be, and must be, treated with medicine when they are sick, but drugs cannot be used to promote growth, their feed must be organic, and they must be pastured. Horses and cattle were once a basic farm feature, providing labour (for hauling and plowing), fertility (through recycling of manure), and fuel (in the form of food for farmers and other animals). While small growing operations today often do not include livestock, domesticated animals are a desirable part of the organic farming equation, especially for true sustainability, the ability of a farm to function as a self-renewing unit.

Genetic modification
A key characteristic of organic farming is the exclusion of genetically engineered plants and animals. On 19 October 1998, participants at IFOAM's 12th Scientific Conference issued the Mar del Plata Declaration, in which more than 600 delegates from over 60 countries voted unanimously to exclude the use of genetically modified organisms in organic food production and agriculture. Although opposition to the use of any transgenic technologies in organic farming is strong, agricultural researchers Luis Herrera-Estrella and Ariel Alvarez-Morales continue to advocate integration of transgenic technologies into organic farming as the optimal means to sustainable agriculture, particularly in the developing world. Organic farmer Raoul Adamchak and geneticist Pamela Ronald write that many agricultural applications of biotechnology are consistent with organic principles and have significantly advanced sustainable agriculture. 
Although GMOs are excluded from organic farming, there is concern that pollen from genetically modified crops is increasingly penetrating organic and heirloom seed stocks, making it difficult, if not impossible, to keep these genomes out of the organic food supply. Differing regulations among countries limit the availability of GMOs to certain countries, as described in the article on regulation of the release of genetically modified organisms.

Tools
Organic farmers use a number of traditional farm tools, and may make use of agricultural machinery in similar ways to conventional farming. On small organic farms in the developing world, tools are normally constrained to hand tools and diesel-powered water pumps.

Standards
Standards regulate production methods and, in some cases, final output for organic agriculture. Standards may be voluntary or legislated. As early as the 1970s, private associations certified organic producers. In the 1980s, governments began to produce organic production guidelines. In the 1990s, a trend toward legislated standards began, most notably with the 1991 EU-Eco-regulation developed for the European Union, which set standards for 12 countries, and a 1993 UK program. The EU's program was followed by a Japanese program in 2001, and in 2002 the U.S. created the National Organic Program (NOP). As of 2007, over 60 countries regulate organic farming (IFOAM 2007:11). In 2005, IFOAM created the Principles of Organic Agriculture, an international guideline for certification criteria. Typically the agencies accredit certification groups rather than individual farms. Production materials used for the creation of USDA Organic certified foods require the approval of a NOP-accredited certifier. The EU's organic-production regulations on "organic" food labels define "organic" primarily in terms of whether "natural" or "artificial" substances were allowed as inputs in the food production process. 
Composting
Using manure as a fertilizer risks contaminating food with animal gut bacteria, including pathogenic strains of E. coli that have caused fatal poisoning from eating organic food. To combat this risk, USDA organic standards require that manure be sterilized through high-temperature thermophilic composting. If raw animal manure is used, 120 days must pass before the crop is harvested if the final product comes into direct contact with the soil. For products that do not directly contact soil, 90 days must pass prior to harvest. In the US, the Organic Food Production Act of 1990 (OFPA), as amended, specifies that a farm cannot be certified as organic if the compost being used contains any synthetic ingredients. The OFPA singles out commercially blended fertilizers [composts], disallowing the use of any fertilizer [compost] that contains prohibited materials.

Economics
The economics of organic farming, a subfield of agricultural economics, encompasses the entire process and effects of organic farming in terms of human society, including social costs, opportunity costs, unintended consequences, information asymmetries, and economies of scale. Labour input, carbon and methane emissions, energy use, eutrophication, acidification, soil quality, effect on biodiversity, and overall land use vary considerably between individual farms and between crops, making general comparisons between the economics of organic and conventional agriculture difficult (Clark, M., & Tilman, D. (2017). Comparative analysis of environmental impacts of agricultural production systems, agricultural input efficiency, and food choice. Environmental Research Letters, 12(6)). In the European Union, "organic farmers receive more subsidies under agri-environment and animal welfare subsidies than conventional growers". 
Geographic producer distribution
The markets for organic products are strongest in North America and Europe, which as of 2001 were estimated to account for $6 billion and $8 billion, respectively, of the $20 billion global market. As of 2007, Australasia had 39% of the total organic farmland, but 97% of this land is sprawling rangeland (2007:35). Europe farms 23% of global organic farmland, followed by Latin America and the Caribbean with 20%. Asia has 9.5%, North America 7.2%, and Africa 3%. Besides Australia, the countries with the most organic farmland are Argentina, China, and the United States. Much of Argentina's organic farmland is pasture, like that of Australia (2007:42). Spain, Germany, Brazil (the world's largest agricultural exporter), Uruguay, and England follow the United States in the amount of organic land (2007:26). In the European Union (EU25), 3.9% of the total utilized agricultural area was used for organic production in 2005. The countries with the highest proportion of organic land were Austria (11%) and Italy (8.4%), followed by the Czech Republic and Greece (both 7.2%). The lowest figures were shown for Malta (0.2%), Poland (0.6%) and Ireland (0.8%). In 2009, the proportion of organic land in the EU grew to 4.7%. The countries with the highest share of agricultural land were Liechtenstein (26.9%), Austria (18.5%) and Sweden (12.6%). 16% of all farmers in Austria produced organically in 2010, and by the same year the proportion of organic land there had increased to 20%. Land in Poland has been under organic management since at least 2005; by 2012 there were about 15,500 organic farmers, and retail sales of organic products were EUR 80 million in 2011. As of 2012, organic exports were part of the government's economic development strategy. 
After the collapse of the Soviet Union in 1991, agricultural inputs that had previously been purchased from Eastern bloc countries were no longer available in Cuba, and many Cuban farms converted to organic methods out of necessity. Consequently, organic agriculture is a mainstream practice in Cuba, while it remains an alternative practice in most other countries (Andrea Swenson for Modern Farmer, 17 November 2014, Photo Essay: Cuban Farmers Return to the Old Ways). Cuba's organic strategy includes development of genetically modified crops, specifically corn that is resistant to the palomilla moth.

Growth
In 2001, the global market value of certified organic products was estimated at US$20 billion. By 2002, this was US$23 billion, and by 2015 more than US$43 billion. By 2014, retail sales of organic products had reached US$80 billion worldwide. North America and Europe accounted for more than 90% of all organic product sales. In 2018, Australia accounted for 54% of the world's certified organic land. Organic agricultural land increased almost fourfold in the 15 years from 1999 to 2014. Between 2013 and 2014, organic agricultural land grew worldwide, increasing in every region except Latin America. During this period, Europe's organic farmland grew by 2.3%, Asia's by 4.7%, Africa's by 4.5%, and North America's by 1.1%. As of 2014, the country with the most organic land was Australia, followed by Argentina and the United States. Australia's organic land area has increased at a rate of 16.5% per annum for the past eighteen years. In 2013, the number of organic producers grew by almost 270,000, or more than 13%. By 2014, there were a reported 2.3 million organic producers in the world. Most of the total global increase took place in the Philippines, Peru, China, and Thailand. 
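The growth rates above compound: an annual increase of 16.5%, sustained over eighteen years as reported for Australia, multiplies the original area many times over. The sketch below is illustrative only; the function name is not from any cited source.

```python
def compound_growth(annual_rate: float, years: int) -> float:
    """Total growth factor implied by a constant annual growth rate."""
    return (1 + annual_rate) ** years

# Australia's reported 16.5% per-annum growth over 18 years:
factor = compound_growth(0.165, 18)
print(f"{factor:.1f}x")  # roughly a 15.6-fold increase
```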
Overall, the majority of all organic producers are in India (650,000 in 2013), Uganda (190,552 in 2014), Mexico (169,703 in 2013) and the Philippines (165,974 in 2014). In 2016, organic farming produced significant quantities of bananas, soybeans, and coffee.

Productivity
Studies comparing yields have had mixed results. These differences among findings can often be attributed to variations between study designs, including differences in the crops studied and the methodology by which results were gathered. A 2012 meta-analysis found that productivity is typically lower for organic farming than conventional farming, but that the size of the difference depends on context and in some cases may be very small. While organic yields can be lower than conventional yields, another meta-analysis, published in Sustainable Agriculture Research in 2015, concluded that certain organic on-farm practices could help narrow this gap. Timely weed management and the application of manure in conjunction with legume forages/cover crops were shown to have positive results in increasing organic corn and soybean productivity. Another meta-analysis, published in the journal Agricultural Systems in 2011, analyzed 362 datasets and found that organic yields were on average 80% of conventional yields. The authors found relative differences in this yield gap based on crop type, with crops like soybeans and rice scoring higher than the 80% average and crops like wheat and potato scoring lower. Across global regions, Asia and Central Europe were found to have relatively higher yields, and Northern Europe relatively lower than the average.

Long term studies
A study published in 2005 compared conventional cropping, organic animal-based cropping, and organic legume-based cropping on a test farm at the Rodale Institute over 22 years. The study found that "the crop yields for corn and soybeans were similar in the organic animal, organic legume, and conventional farming systems". 
It also found that "significantly less fossil energy was expended to produce corn in the Rodale Institute's organic animal and organic legume systems than in the conventional production system. There was little difference in energy input between the different treatments for producing soybeans. In the organic systems, synthetic fertilizers and pesticides were generally not used". As of 2013 the Rodale study was ongoing, and a thirty-year anniversary report was published by Rodale in 2012. A long-term field study comparing organic and conventional agriculture, carried out over 21 years in Switzerland, concluded that "Crop yields of the organic systems averaged over 21 experimental years at 80% of the conventional ones. The fertilizer input, however, was 34–51% lower, indicating an efficient production. The organic farming systems used 20–56% less energy to produce a crop unit and per land area this difference was 36–53%. In spite of the considerably lower pesticide input the quality of organic products was hardly discernible from conventional analytically and even came off better in food preference trials and picture creating methods."

Profitability
In the United States, organic farming has been shown to be 2.7 to 3.8 times more profitable for the farmer than conventional farming when prevailing price premiums are taken into account. Globally, organic farming is 22–35% more profitable for farmers than conventional methods, according to a 2015 meta-analysis of studies conducted across five continents. The profitability of organic agriculture can be attributed to a number of factors. First, organic farmers do not rely on synthetic fertilizer and pesticide inputs, which can be costly. In addition, organic foods currently enjoy a price premium over conventionally produced foods, meaning that organic farmers can often get more for their yield. The price premium for organic food is an important factor in the economic viability of organic farming. 
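The parity arithmetic behind such premium figures can be sketched as follows. This is an illustrative calculation with hypothetical yield and cost numbers, not data from the studies cited.

```python
def breakeven_premium(conv_yield: float, conv_price: float, conv_cost: float,
                      org_yield: float, org_cost: float) -> float:
    """Price premium (as a fraction of the conventional price) at which
    organic profit per unit area equals conventional profit."""
    conv_profit = conv_yield * conv_price - conv_cost
    # Solve org_yield * conv_price * (1 + p) - org_cost = conv_profit for p.
    required_price = (conv_profit + org_cost) / org_yield
    return required_price / conv_price - 1

# Hypothetical: organic yields 10% less but input costs are 12% lower.
p = breakeven_premium(conv_yield=100, conv_price=1.0, conv_cost=50,
                      org_yield=90, org_cost=44)
print(f"{p:.1%}")  # about a 4.4% premium is needed in this example
```

Lower organic input costs partly offset the yield gap, which is why the break-even premium can be far smaller than the premiums actually observed in the market.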
In 2013 there was a 100% price premium on organic vegetables and a 57% price premium on organic fruits. These percentages are based on wholesale fruit and vegetable prices, available through the United States Department of Agriculture's Economic Research Service. Price premiums exist not only for organic versus non-organic crops, but may also vary depending on the venue where the product is sold: farmers' markets, grocery stores, or wholesale to restaurants. For many producers, direct sales at farmers' markets are most profitable because the farmer receives the entire markup; however, this is also the most time- and labour-intensive approach. There have been signs of organic price premiums narrowing in recent years, which lowers the economic incentive for farmers to convert to or maintain organic production methods. Data from 22 years of experiments at the Rodale Institute found that, based on the current yields and production costs associated with organic farming in the United States, a price premium of only 10% is required to achieve parity with conventional farming. A separate study found that on a global scale, price premiums of only 5–7% were needed to break even with conventional methods. Without the price premium, profitability for farmers is mixed. For markets and supermarkets organic food is profitable as well, and is generally sold at significantly higher prices than non-organic food.

Energy efficiency
Compared to conventional agriculture, the energy efficiency of organic farming depends upon crop type and farm size. Two studies, both comparing organically versus conventionally farmed apples, reached contradictory results, one finding organic farming more energy efficient, the other finding conventional farming more efficient. It has generally been found that the labor input per unit of yield is higher for organic systems than for conventional production.

Sales and marketing
Most sales are concentrated in developed nations. 
In 2008, 69% of Americans claimed to occasionally buy organic products, down from 73% in 2005. One theory for this change was that consumers were substituting "local" produce for "organic" produce (The Hartman Group, Organic Marketplace Reports).

Distributors
The USDA requires that distributors, manufacturers, and processors of organic products be certified by an accredited state or private agency. In 2007, there were 3,225 certified organic handlers, up from 2,790 in 2004. Organic handlers are often small firms; 48% reported sales below $1 million annually, and 22% between $1 million and $5 million per year. Smaller handlers are more likely to sell to independent natural grocery stores and natural product chains, whereas large distributors more often market to natural product chains and conventional supermarkets, with a small group marketing to independent natural product stores. Some handlers work with conventional farmers to convert their land to organic with the knowledge that the farmer will have a secure sales outlet. This lowers the risk for the handler as well as the farmer. In 2004, 31% of handlers provided technical support on organic standards or production to their suppliers, and 34% encouraged their suppliers to transition to organic. Smaller farms often join cooperatives to market their goods more effectively. 93% of organic sales are through conventional and natural food supermarkets and chains, while the remaining 7% of U.S. organic food sales occur through farmers' markets, foodservices, and other marketing channels.

Direct-to-consumer sales
In the 2012 Census, direct-to-consumer sales equalled $1.3 billion, up from $812 million in 2002, an increase of 60 percent. The number of farms that utilize direct-to-consumer sales was 144,530 in 2012, in comparison to 116,733 in 2002. Direct-to-consumer sales include farmers' markets, community supported agriculture (CSA), on-farm stores, and roadside farm stands. 
Some organic farms also sell products directly to retailers, restaurants, and institutions. According to the 2008 Organic Production Survey, approximately 7% of organic farm sales were direct-to-consumer, 10% went direct to retailers, and approximately 83% went into wholesale markets. In comparison, only 0.4% of the value of conventional agricultural commodities was sold direct-to-consumer. While not all products sold at farmers' markets are certified organic, this direct-to-consumer avenue has become increasingly popular in local food distribution and has grown substantially since 1994. In 2014, there were 8,284 farmers' markets, in comparison to 3,706 in 2004 and 1,755 in 1994, most of which are found in populated areas such as the Northeast, Midwest, and West Coast.

Labour and employment
Organic production is more labour-intensive than conventional production. Increased labor cost is one factor that contributes to organic food being more expensive. Organic farming's increased labor requirements can also be seen positively, as providing more job opportunities. The 2011 UNEP Green Economy Report suggests that "[a]n increase in investment in green agriculture is projected to lead to growth in employment of about 60 per cent compared with current levels" and that "green agriculture investments could create 47 million additional jobs compared with BAU2 over the next 40 years". Much of the growth in women's labour participation in agriculture is outside the "male dominated field of conventional agriculture". Organic farming has a greater percentage of women working on the farms, with 21% compared to 14% for farming in general.

World's food security
In 2007, the United Nations Food and Agriculture Organization (FAO) said that organic agriculture often leads to higher prices and hence a better income for farmers, so it should be promoted. 
However, FAO stressed that organic farming could not feed the current human population, much less the larger future population. Both data and models showed that organic farming was far from sufficient, and that chemical fertilizers were therefore needed to avoid hunger. Others have argued that organic farming is particularly well suited to food-insecure areas, and therefore could be "an important part of increased food security" in places like sub-Saharan Africa. FAO stressed that fertilizers and other chemical inputs can increase production, particularly in Africa, where fertilizers are currently used 90% less than in Asia. For example, in Malawi the yield has been boosted using seeds and fertilizers. NEPAD, a development organization of African governments, has likewise announced that feeding Africans and preventing malnutrition requires fertilizers and enhanced seeds. According to a 2012 study from McGill University, organic best management practices show an average yield only 13% less than conventional. In the world's poorer nations, where most of the world's hungry live and where conventional agriculture's expensive inputs are not affordable for the majority of farmers, adopting organic management actually increases yields 93% on average, and could be an important part of increased food security.

Capacity building in developing countries
Organic agriculture can contribute to ecological sustainability, especially in poorer countries. The application of organic principles enables employment of local resources (e.g., local seed varieties, manure, etc.) and therefore cost-effectiveness. Local and international markets for organic products show tremendous growth prospects and offer creative producers and exporters excellent opportunities to improve their income and living conditions. Organic agriculture is knowledge-intensive. Globally, capacity-building efforts are underway, including localized training material, though to limited effect. 
As of 2007, the International Federation of Organic Agriculture Movements hosted more than 170 free manuals and 75 training opportunities online. In 2008, the United Nations Environment Programme (UNEP) and the United Nations Conference on Trade and Development (UNCTAD) stated that "organic agriculture can be more conducive to food security in Africa than most conventional production systems, and that it is more likely to be sustainable in the long-term", that "yields had more than doubled where organic, or near-organic practices had been used", and that soil fertility and drought resistance improved.

Millennium Development Goals
The value of organic agriculture (OA) in the achievement of the Millennium Development Goals (MDGs), particularly in poverty reduction efforts in the face of climate change, is shown by its contribution to both income and non-income aspects of the MDGs. These benefits are expected to continue in the post-MDG era. A series of case studies conducted in selected areas in Asian countries by the Asian Development Bank Institute (ADBI), published as a book compilation by ADB in Manila, documents these contributions to both income and non-income aspects of the MDGs. These include poverty alleviation by way of higher incomes, improved farmers' health owing to less chemical exposure, integration of sustainable principles into rural development policies, improvement of access to safe water and sanitation, and expansion of global partnerships for development as small farmers are integrated in value chains. A related ADBI study also sheds light on the costs of OA programs and sets them in the context of the costs of attaining the MDGs. The results show considerable variation across the case studies, suggesting that there is no clear structure to the costs of adopting OA. Costs depend on the efficiency of the OA adoption programs; the lowest-cost programs were more than ten times less expensive than the highest-cost ones. 
However, further analysis of the gains resulting from OA adoption reveals that the costs per person taken out of poverty were much lower than the World Bank's estimates, whether based on income growth in general or on the detailed costs of meeting some of the more quantifiable MDGs (e.g., education, health, and environment).

Externalities
Agriculture imposes negative externalities upon society through public land and other public resource use, biodiversity loss, erosion, pesticides, nutrient pollution, and assorted other problems. Positive externalities include self-reliance, entrepreneurship, respect for nature, and air quality. Organic methods differ from conventional methods in the impacts of their respective externalities, depending on implementation and crop type. Overall land use is generally higher for organic methods, but organic methods generally use less energy in production. The analysis and comparison of externalities is complicated by whether the comparison is done using a per-unit-area measurement or per unit of production, and whether analysis is done on isolated plots or on farm units as a whole. Measurements of biodiversity are highly variable between studies, farms, and organism groups. "Birds, predatory insects, soil organisms and plants responded positively to organic farming, while non-predatory insects and pests did not. A 2005 review found that the positive effects of organic farming on abundance were prominent at the plot and field scales, but not for farms in matched landscapes." Other studies that have attempted to examine and compare conventional and organic systems of farming have found that organic techniques reduce levels of biodiversity less than conventional systems do, and use less energy and produce less waste when calculated per unit area, although not when calculated per unit of output. "Farm comparisons show that actual (nitrate) leaching rates per hectare are up to 57% lower on organic than on conventional fields. 
However, the leaching rates per unit of output were similar or slightly higher." "On a per-hectare scale, the CO2 emissions are 40–60% lower in organic farming systems than in conventional ones, whereas on a per-unit-output scale, the CO2 emissions tend to be higher in organic farming systems." It has been proposed that organic agriculture can reduce the level of some negative externalities from (conventional) agriculture. Whether the benefits are private or public depends upon the division of property rights.

Issues
According to a meta-analysis published in 2017, compared to conventional agriculture, organic agriculture has a higher land requirement per yield unit, a higher eutrophication potential, a higher acidification potential and a lower energy requirement, but is associated with similarly high greenhouse gas emissions. A 2003 to 2005 investigation by Cranfield University for the Department for Environment, Food and Rural Affairs in the UK found that it is difficult to compare the global warming potential, acidification and eutrophication emissions, but that "Organic production often results in increased burdens, from factors such as nitrogen leaching and N2O emissions", even though primary energy use was less for most organic products. N2O is always the largest contributor to global warming potential, except in tomatoes. However, "organic tomatoes always incur more burdens (except pesticide use)". Some emissions were lower "per area", but organic farming always required 65 to 200% more field area than non-organic farming. The numbers were highest for bread wheat (200+% more) and potatoes (160% more) (Determining the environmental burdens and resource use in the production of agricultural and horticultural commodities, IS0205, Williams, A.G. et al., Cranfield University, U.K., August 2006; Svensk mat- och miljöinformation, pages 4-6, 29 and 84-85). 
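The per-hectare versus per-unit-output contrast that recurs in these comparisons is a simple division by yield: because organic yields are lower, an emission rate that looks favourable per hectare can reverse per tonne of food. A minimal sketch with purely hypothetical numbers:

```python
def per_output(emission_per_hectare: float, yield_per_hectare: float) -> float:
    """Convert an emission rate per hectare into a rate per unit of output."""
    return emission_per_hectare / yield_per_hectare

# Hypothetical: organic emits 40% less per hectare but yields half as much.
conventional = per_output(100.0, 10.0)  # 10.0 per tonne
organic = per_output(60.0, 5.0)         # 12.0 per tonne
# Lower per hectare, yet higher per tonne of food produced.
```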
As of 2020, it appears that organic agriculture can help mitigate climate change, but only if used in certain ways.

Environmental impact and emissions
Researchers at Oxford University analysed 71 peer-reviewed studies and observed that organic products are sometimes worse for the environment. Organic milk, cereals, and pork generated higher greenhouse gas emissions per product than conventional ones, but organic beef and olives had lower emissions in most studies. Usually organic products required less energy, but more land. Per unit of product, organic produce generates higher nitrogen leaching, nitrous oxide emissions, ammonia emissions, eutrophication, and acidification potential than conventionally grown produce. Other differences were not significant. The researchers concluded that public debate should consider the various ways of employing conventional or organic farming, and not merely debate conventional farming as opposed to organic farming. They also sought to find specific solutions to specific circumstances. A 2018 review article in the Annual Review of Resource Economics found that organic agriculture is more polluting per unit of output and that widespread upscaling of organic agriculture would cause additional loss of natural habitats. Proponents of organic farming have claimed that organic agriculture emphasizes closed nutrient cycles, biodiversity, and effective soil management, providing the capacity to mitigate and even reverse the effects of climate change, and that organic agriculture can decrease fossil fuel emissions. "The carbon sequestration efficiency of organic systems in temperate climates is almost double that of conventional treatment of soils, mainly owing to the use of grass clovers for feed and of cover crops in organic rotations." However, studies acknowledge that organic systems require more acreage to produce the same yield as conventional farms. 
By converting to organic farming in developed countries, where most arable land is already accounted for, increased deforestation would decrease overall carbon sequestration.

Nutrient leaching
According to a 2012 meta-analysis of 71 studies, nitrogen leaching, nitrous oxide emissions, ammonia emissions, eutrophication potential and acidification potential were higher for organic products. Specifically, emissions per area of land are lower, but emissions per amount of food produced are higher, owing to the lower crop yields of organic farms. Excess nutrients in lakes, rivers, and groundwater can cause algal blooms, eutrophication, and subsequent dead zones. In addition, nitrates are harmful to aquatic organisms in themselves.

Land use
A 2012 Oxford meta-analysis of 71 studies found that organic farming requires 84% more land for an equivalent amount of harvest, mainly due to lack of nutrients but sometimes due to weeds, diseases or pests, lower-yielding animals and land required for fertility-building crops. While organic farming does not necessarily save land for wildlife habitats and forestry in all cases, modern developments in organic methods are addressing these issues with some success. Professor Wolfgang Branscheid says that organic animal production is not good for the environment, because organic chicken requires twice as much land as "conventional" chicken and organic pork a quarter more. According to a calculation by the Hudson Institute, organic beef requires three times as much land. On the other hand, certain organic methods of animal husbandry have been shown to restore desertified, marginal, and otherwise unavailable land to agricultural productivity and wildlife, or, by getting both forage and cash crop production from the same fields simultaneously, to reduce net land use. SRI methods for rice production, without external inputs, have produced record yields on some farms, but not others. 
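Extra-land figures like the 84% above follow directly from relative yield: the land required to match a given output scales as the inverse of yield. A short sketch (the function is illustrative, not from the cited studies):

```python
def extra_land_needed(relative_yield: float) -> float:
    """Fractional extra land needed to match conventional output, given
    organic yield as a fraction of conventional yield."""
    return 1.0 / relative_yield - 1.0

print(f"{extra_land_needed(0.80):.0%}")  # an 80% yield implies 25% more land
# An 84% extra-land figure corresponds to a relative yield of about 54%.
```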
Pesticides
In organic farming, the use of synthetic pesticides and of certain natural compounds that are produced using chemical synthesis is prohibited. The organic label's restrictions are based not only on the nature of the compound, but also on the method of production. A non-exhaustive list of organic-approved pesticides with their median lethal doses:
Boric acid is used as an insecticide (LD50: 2660 mg/kg).
Copper(II) sulfate is used as a fungicide and is also used in conventional agriculture (LD50: 300 mg/kg). Conventional agriculture has the option to use the less toxic mancozeb (LD50: 4,500 to 11,200 mg/kg).
Lime sulfur (calcium polysulfide) and sulfur are considered allowed, synthetic materials (LD50: 820 mg/kg).
Neem oil is used as an insect repellent in India; since it contains azadirachtin, its use is restricted in the UK and Europe.
Pyrethrin comes from chemicals extracted from flowers of the genus Pyrethrum (LD50: 370 mg/kg). Its potent toxicity is used to control insects.

Food quality and safety
While there may be some differences in the amounts of nutrients and anti-nutrients when organically produced food and conventionally produced food are compared, the variable nature of food production and handling makes it difficult to generalize results, and there is insufficient evidence to make claims that organic food is safer or healthier than conventional food (Blair, Robert. (2012). Organic Production and Food Quality: A Down to Earth Analysis. Wiley-Blackwell, Oxford, UK).

Soil conservation
Supporters claim that organically managed soil has a higher quality and higher water retention. This may help increase yields for organic farms in drought years. Organic farming can build up soil organic matter better than conventional no-till farming, which suggests long-term yield benefits from organic farming. 
An 18-year study of organic methods on nutrient-depleted soil concluded that conventional methods were superior for soil fertility and yield in nutrient-depleted soils in cold-temperate climates, arguing that much of the benefit from organic farming derives from imported materials that could not be regarded as self-sustaining. In Dirt: The Erosion of Civilizations, geomorphologist David Montgomery outlines a coming crisis from soil erosion: agriculture relies on roughly one meter of topsoil, and that is being depleted ten times faster than it is being replaced. No-till farming, which some claim depends upon pesticides, is one way to minimize erosion. However, a 2007 study by the USDA's Agricultural Research Service found that manure applications in tilled organic farming are better at building up the soil than no-till. Hepperly, Paul, Jeff Moyer, and Dave Wilson. "Developments in Organic No-till Agriculture." Acres USA: The Voice of Eco-agriculture September 2008: 16-19. And Roberts, Paul. "The End of Food: Investigating a Global Crisis." Interview with Acres USA. Acres USA: The Voice of Eco-Agriculture October 2008: 56-63. Gunsmoke Farms, an organic farming project in South Dakota, suffered massive soil erosion as a result of tilling after it switched to organic farming. Biodiversity The conservation of natural resources and biodiversity is a core principle of organic production. Three broad management practices (prohibition/reduced use of chemical pesticides and inorganic fertilizers; sympathetic management of non-cropped habitats; and preservation of mixed farming) that are largely intrinsic (but not exclusive) to organic farming are particularly beneficial for farmland wildlife. Using practices that attract or introduce beneficial insects, provide habitat for birds and mammals, and provide conditions that increase soil biotic diversity serves to supply vital ecological services to organic production systems. 
Advantages to certified organic operations that implement these types of production practices include: 1) decreased dependence on outside fertility inputs; 2) reduced pest-management costs; 3) more reliable sources of clean water; and 4) better pollination. Nearly all non-crop, naturally occurring species observed in comparative farmland practice studies show a preference for organic farming in both abundance and diversity. An average of 30% more species inhabit organic farms; birds, butterflies, soil microbes, beetles, earthworms, spiders, vegetation, and mammals are particularly affected. The absence of herbicides and pesticides improves biodiversity fitness and population density. Many weed species attract beneficial insects that improve soil qualities and forage on weed pests. Soil-bound organisms often benefit from increased bacteria populations due to natural fertilizers such as manure, while experiencing reduced intake of herbicides and pesticides. Increased biodiversity, especially from beneficial soil microbes and mycorrhizae, has been proposed as an explanation for the high yields experienced by some organic plots, especially in light of the differences seen in a 21-year comparison of organic and control fields. Organic farming contributes to human capital by promoting biodiversity: the presence of various species on organic farms helps to reduce human inputs, such as fertilizers and pesticides, which enhances sustainability. The USDA's Agricultural Marketing Service (AMS) published a Federal Register notice on 15 January 2016, announcing the National Organic Program (NOP) final guidance on Natural Resources and Biodiversity Conservation for Certified Organic Operations. Given the broad scope of natural resources, which includes soil, water, wetland, woodland and wildlife, the guidance provides examples of practices that support the underlying conservation principles and demonstrate compliance with USDA organic regulations § 205.200. 
The final guidance provides organic certifiers and farms with examples of production practices that support conservation principles and comply with the USDA organic regulations, which require operations to maintain or improve natural resources. The final guidance also clarifies the role of certified operations (to submit an OSP to a certifier), certifiers (ensure that the OSP describes or lists practices that explain the operator's monitoring plan and practices to support natural resources and biodiversity conservation), and inspectors (onsite inspection) in the implementation and verification of these production practices. A wide range of organisms benefit from organic farming, but it is unclear whether organic methods confer greater benefits than conventional integrated agri-environmental programs. Organic farming is often presented as a more biodiversity-friendly practice, but the generality of the beneficial effects of organic farming is debated as the effects appear often species- and context-dependent, and current research has highlighted the need to quantify the relative effects of local- and landscape-scale management on farmland biodiversity. There are four key issues when comparing the impacts on biodiversity of organic and conventional farming: (1) It remains unclear whether a holistic whole-farm approach (i.e. organic) provides greater benefits to biodiversity than carefully targeted prescriptions applied to relatively small areas of cropped and/or non-cropped habitats within conventional agriculture (i.e. 
agri-environment schemes); (2) Many comparative studies encounter methodological problems, limiting their ability to draw quantitative conclusions; (3) Our knowledge of the impacts of organic farming in pastoral and upland agriculture is limited; (4) There remains a pressing need for longitudinal, system-level studies in order to address these issues and to fill the gaps in our knowledge of the impacts of organic farming, before a full appraisal of its potential role in biodiversity conservation in agroecosystems can be made. Labour standards Organic agriculture is often considered to be more socially just and economically sustainable for farmworkers than conventional agriculture. However, there is little social science research or consensus as to whether organic agriculture provides better working conditions than conventional agriculture. Because many consumers equate organic and sustainable agriculture with small-scale, family-owned organizations, it is widely assumed that buying organic supports better conditions for farmworkers than buying from conventional producers. Organic agriculture is generally more labour-intensive due to its dependence on manual practices for fertilization and pest removal. Although illnesses from inputs pose less of a risk, hired workers still fall victim to debilitating musculoskeletal disorders associated with agricultural work. The USDA certification requirements outline growing practices and ecological standards but do nothing to codify labour practices. Independent certification initiatives such as the Agricultural Justice Project, Domestic Fair Trade Working Group, and the Food Alliance have attempted to represent farmworker interests, but because these initiatives require the voluntary participation of organic farms, their standards cannot be widely enforced. Despite the benefit to farmworkers of implementing labour standards, there is little support among the organic community for these social requirements. 
Many actors in the organic industry believe that enforcing labour standards would be unnecessary, unacceptable, or unviable due to the constraints of the market. Regional support for organic farming The following is a selected list of support given in some regions. Europe The EU organic production regulation is the part of European Union law that sets rules for the production of organic agricultural and livestock products and how to label them. In the EU, organic farming and organic food are more commonly known as ecological or biological. The regulation is derived from the guidelines of the International Federation of Organic Agriculture Movements (IFOAM), an association of about 800 member organizations in 119 countries. As in the rest of the world, the organic market in Europe continues to grow, and more land is farmed organically each year. "More farmers cultivate organically, more land is certified organic, and more countries report organic farming activities", according to the 2016 edition of the study "The World of Organic Agriculture", based on data from the end of 2014 published by FiBL and IFOAM in 2016. Denmark Denmark has long supported the conversion of conventional farming to organic farming, which has been taught in academic classes in universities since 1986. The state began offering subsidies and has promoted a special national label for products that qualify as organic since 1989. Denmark was thus the first country in the world to subsidize organic farming, promoting the concept and organizing the distribution of organic products. Today the government accepts applications for financial support during conversion years, since under Danish regulations farms must not have used conventional farming methods, such as pesticides, for several years before products can be assessed for qualification as organic. 
This financial support has in recent years been cut because organic farming has become more profitable, with some goods surpassing the profitability of conventional farming in domestic markets. In general, the financial situation of organic farmers in Denmark boomed between 2010 and 2018; in 2018 serious nationwide long-lasting droughts stagnated the economic results of organic farmers, although the average farmer still achieved a net positive result that year. In 2021 Denmark's (and Europe's) largest slaughterhouse, Danish Crown, publicized its expectations of stagnating domestic sales of conventional pork, but increasing sales of organic pork and especially free-range organic pork. Besides the conversion support, there are still base subsidies for organic farming paid per area of qualified farmland. The first Danish private development organisation, SamsØkologisk, was established in 2013 by veteran organic farmers from the existing organisation Økologisk Samsø. The development organisation intends to buy and invest in farmland and then lease the land to young, aspiring farmers seeking to get into farming, especially organic farming. The organisation reported 300 economically active members as of 2021, but does not publish the amount of acquired land or the number of active tenants. However, the organic farming concept in Denmark often goes beyond the global definition of organic farming: the majority of Danish organic farming is instead "ecological farming". The development of this concept has run parallel with the general organic farming movement, and the term is most often used interchangeably with organic farming. There is thus a much stronger focus on the environmental and especially the ecological impact of ecological farming than of organic farming. E.g. 
besides the base subsidy for organic farming, farmers can qualify for an extra subsidy equal to 2/3 of the base for realizing a specific reduction in the usage of added nitrogen to the farmland (also by organic means). There are also parallels to the extended organic movements of regenerative agriculture, although far from all concepts in regenerative agriculture are included in the national strategy at this time; they exist as voluntary options for each farmer. For these reasons, international organic products do not fulfill the requirements of ecological farming and thus do not receive the domestic label for ecological products; rather, they receive the standard European Union organic label. Ukraine The Ministry of Agrarian Policy and Food of Ukraine is the central executive body that develops the regulatory framework for the organic sector in Ukraine, maintains the state registers of certification bodies, operators and organic seeds and planting material, and provides training and professional development for organic inspectors. Following work on organic legislation by the Ministry of Agrarian Policy and Food of Ukraine and an organic working group that includes the main players of Ukraine's organic sector, on 10 July 2018 the Verkhovna Rada of Ukraine (the Ukrainian Parliament) adopted the Law of Ukraine “On Basic Principles and Requirements for Organic Production, Circulation and Labelling of Organic Products” No. 2496, which was enacted on 2 August 2019. As of April 2024, organic production, circulation and labelling of organic products in Ukraine are regulated by this law as well as relevant by-laws. Another important governmental institution of the organic sector of Ukraine is the State Service of Ukraine on Food Safety and Consumer Protection. 
It is the central executive body authorised to conduct state supervision (control) in the field of organic production, circulation and labelling of organic products in accordance with the organic legislation of Ukraine. This includes state supervision (control) over compliance with the legislation in the field of organic production, circulation and labelling of organic products: inspection of certification bodies; random inspection of operators; and monitoring of organic products on the market to prevent the entry of non-organic products labelled as organic. The State Institution “Entrepreneurship and Export Promotion Office” (EEPO, Ukraine) contributes to the development of Ukrainian organic exporters' potential, promotion of the organic sector and formation of a positive image of Ukraine as a reliable supplier of organic products abroad. EEPO actively supports and organises various events for organic exporters, including national pavilions at key international trade fairs such as BIOFACH (Nuremberg, Germany), Anuga (Cologne, Germany), SIAL (Paris, France), and Middle East Organic & Natural Products Expo (Dubai, UAE). EEPO also created the Catalogue of Ukrainian Exporters of Organic Products in partnership with the Organic Standard certification body. Organic farming in Ukraine is also supported by international technical assistance projects and programmes whose implementation is funded and supported by Switzerland, Germany, and other countries. 
These projects/programmes are the Swiss-Ukrainian program “Higher Value Added Trade from the Organic and Dairy Sector in Ukraine” (QFTP), financed by Switzerland and implemented by the Research Institute of Organic Agriculture (FiBL, Switzerland) in partnership with SAFOSO AG (Switzerland); the Swiss-Ukrainian program “Organic Trade for Development in Eastern Europe” (OT4D), financed by Switzerland through the Swiss State Secretariat for Economic Affairs (SECO) and implemented by IFOAM – Organics International in partnership with HELVETAS Swiss Intercooperation and the Research Institute of Organic Agriculture (FiBL, Switzerland); and the project “German-Ukrainian Cooperation in Organic Agriculture” (COA). The project/programme representatives provide their expertise during the development of the organic legislative framework and the implementation of the legislation in the field of organic production, circulation and labelling of organic products, and support various activities related to organic farming and production. China The Chinese government, especially at the local level, has provided various support for the development of organic agriculture since the 1990s. Organic farming has been recognized by local governments for its potential in promoting sustainable rural development. It is common for local governments to facilitate land access for agribusinesses by negotiating land leasing with local farmers. The government also establishes demonstration organic gardens, provides training for organic food companies to pass certifications, and subsidizes organic certification fees, pest-repellent lamps, organic fertilizer and so on. The government has also played an active role in marketing organic products through organizing organic food expos and branding support. India In India, in 2016, the northern state of Sikkim achieved its goal of converting to 100% organic farming. "Sikkim makes an organic shift". Times of India. 7 May 2010. 
Retrieved 29 November 2012. "Sikkim races on organic route". Telegraph India. 12 December 2011. Retrieved 29 November 2012. Other states of India, including Kerala, Mizoram, Goa, Rajasthan, and Meghalaya, have also declared their intentions to shift to fully organic cultivation. The South Indian state of Andhra Pradesh is also promoting organic farming, especially Zero Budget Natural Farming (ZBNF), a form of regenerative agriculture. As of 2018, India has the largest number of organic farmers in the world, constituting more than 30% of organic farmers globally, with 835,000 certified organic producers. However, the total land under organic cultivation is around 2% of overall farmland. Dominican Republic The Dominican Republic has successfully converted a large amount of its banana crop to organic production and accounts for 55% of the world's certified organic bananas. South Korea The most noticeable change in Korea's agriculture occurred throughout the 1960s and 1970s, during the "Green Revolution" program, when South Korea experienced reforestation and an agricultural revolution. Due to a food shortage during Park Chung Hee's presidency, the government encouraged rice varieties suited for organic farming. Farmers minimized risk by crossbreeding the Japonica rice variety with Tongil. They also used less fertilizer and made other economic adjustments to alleviate potential risk factors. Organic farming and food policies have changed in modern society, particularly since the 1990s. The guidelines focus on basic dietary recommendations for consumption of nutrients and Korean-style diets; the main reason for this encouragement is that around 88% of countries across the world face forms of malnutrition. In 2009, the Special Act on Safety Management of Children's Dietary Life was passed, restricting foods low in energy and poor in nutrients. 
It also focused on other nutritional problems Korean students may have had. Thailand In Thailand, the (ISAC) was established in 1991 to promote organic farming, among other sustainable agricultural practices. The national target via the National Plan for Organic Farming is to attain, by 2021, of organically farmed land. Another target is for 40% of the produce from these farmlands to be consumed domestically. Much progress has been made: many organic farms have sprouted, growing produce ranging from mangosteen to stinky bean, and some of the farms have also established education centres to promote and share their organic farming techniques and knowledge. In Chiang Mai Province, there are 18 ISAC-linked organic markets. United States The United States Department of Agriculture Rural Development (USDARD) was created in 1994 as a subsection of the USDA that implements programs to stimulate growth in rural communities. One of the programs that the USDARD created provided grants, through the Organic Certification Cost Share Program (OCCSP), to farmers who practiced organic farming. During the 21st century, the United States has continued to expand its reach in the organic foods market, with twice as many organic farms in the U.S. in 2016 as in 2011. Employment on organic farms offers potentially large numbers of jobs, which may help manage the labour effects of the Fourth Industrial Revolution. Moreover, sustainable forestry, fishing, mining, and other conservation-oriented activities provide larger numbers of jobs than more fossil-fuel-dependent and mechanized work. Organic farming grew by in the U.S. from 2000 to 2011. In 2016, California had 2,713 organic farms, making it the largest producer of organic goods in the U.S. 4% of food sales in the U.S. are of organic goods. 
Sri Lanka As was the case in most countries, Sri Lanka transitioned away from organic farming with the arrival of the Green Revolution, after which it depended more on chemical fertilizers. This method became highly popular when the nation started offering subsidies on the import of artificial fertilizers to increase rice paddy production and to incentivize farmers to switch from growing traditional varieties to high-yielding varieties (HYVs). This was especially true for young farmers, who saw short-term economic profit as more important to their wellbeing than the long-term drawbacks to the environment. However, due to various health concerns with inorganic farming, including the possibility that a chronic kidney disease is associated with chemical fertilizers, many middle-aged and experienced farmers were skeptical of these new approaches; some even turned to organic farming or to insecticide-free fertilizers for their crops. In a study conducted by F. Horgan and E. Kudavidanage, the researchers compared crop yields of farmers in Sri Lanka who employed distinct farming techniques, including organic farmers who grew traditional varieties, and insecticide-free fertilizer users and pesticide users who grew modern varieties. No significant difference was found among the yields; in fact, organic farmers and insecticide-free fertilizer users reported fewer problems with insects such as planthoppers as a challenge to their production. Regardless, many farmers continued to use insecticides to avoid the predicted dangers of pests to their crops, and the cheap sale of agrochemicals provided an easy way to augment crop growth. Additionally, while organic farming has health benefits, it is strenuous work that requires more manpower. 
Although that presented a great opportunity for increased employment in Sri Lanka, the economic compensation was not enough to cover the living expenses of those employed. Thus, most farmers relied on modern methods to run their households, especially after the economic stressors brought on by COVID-19. While Sri Lanka was still facing the new challenges of the pandemic, in the 2019 presidential election campaign the president, Gotabaya Rajapaksa, proposed a 10-year national transition to organic farming, aiming to make Sri Lanka the first nation known for its organic produce. On April 27, 2021, the country issued an order prohibiting the import of any inorganic pesticides or fertilizers, creating chaos among farmers. While the change was made over concerns for the nation's ecosystems and the health of citizens, pesticide poisonings having exceeded other health-related causes of death, the precipitous decision was met with criticism from the agriculture industry. This included fears that the mandate would harm the yields of the country's major crops (despite claims to the contrary), that the country would not be able to produce enough organic fertilizer domestically, and that organic farming is more expensive and complex than conventional agriculture. To put this into perspective, 7.4% of Sri Lanka's GDP relies on agriculture, and 30% of citizens, about a third of the population, work in this sector, making its maintenance crucial for the nation's social and economic prosperity. Of special concern were rice and tea, a staple food and a major export respectively. Despite a record crop in the first half of 2021, the tea harvest began to decline in July of that year. Rice production fell by 20% over the first six months of the ban, and prices increased by around 50%. 
Contrary to its past success at self-sustainability, the country had to import US$450 million worth of rice to meet domestic demand. In late August, the government acknowledged the ban had created a critical dependency on supplies of imported organic fertilizers, but by then food prices had already increased twofold in some cases. In September 2021, the government declared an economic emergency, citing the ban's impact on food prices, as well as inflation from the devaluation of the Sri Lankan currency due to the crashing tea industry, and a lack of tourism induced by COVID-19 restrictions. In November 2021, the country partially lifted the ban on inorganic inputs for certain key crops such as rubber and tea, and began to offer compensation and subsidies to farmers and rice producers in an attempt to cover losses. The previous subsidies on synthetic fertilizer imports were not reintroduced.
Technology
Forms
null
72760
https://en.wikipedia.org/wiki/Square%20kilometre
Square kilometre
The square kilometre (square kilometer in American spelling; symbol: km2) is a multiple of the square metre, the SI unit of area or surface area. 1 km2 is equal to: 1,000,000 square metres (m2) 100 hectares (ha) It is also approximately equal to: 0.3861 square miles 247.1 acres Conversely: 1 m2 = 0.000001 (10−6) km2 1 hectare = 0.01 (10−2) km2 1 square mile = 1 acre = about The symbol "km2" means (km)2, square kilometre or kilometre squared, and not k(m2), kilo–square metre. For example, 3 km2 is equal to 3 × (1,000 m)2 = 3,000,000 m2, not 3,000 m2. Examples of areas of 1 square kilometre Topographical map grids Topographical map grids are worked out in metres, with the grid lines being 1,000 metres apart. 1:100,000 maps are divided into squares representing 1 km2, each square on the map being one square centimetre in area and representing 1 km2 on the surface of the Earth. For 1:50,000 maps, the grid lines are 2 cm apart. Each square on the map is 2 cm by 2 cm (4 cm2) and represents 1 km2 on the surface of the Earth. For 1:25,000 maps, the grid lines are 4 cm apart. Each square on the map is 4 cm by 4 cm (16 cm2) and represents 1 km2 on the surface of the Earth. In each case, the grid lines enclose one square kilometre. Medieval city centres The area enclosed by the walls of many European medieval cities was about one square kilometre. These walls are often either still standing or the route they followed is still clearly visible, such as in Brussels, where the wall has been replaced by a ring road, or in Frankfurt, where the wall has been replaced by gardens. The approximate area of the old walled cities can often be worked out by fitting the course of the wall to a rectangle or an oval (ellipse). Examples include: Delft, Netherlands (See map alongside) The walled city of Delft was approximately rectangular. The approximate length of the rectangle was about . The approximate width of the rectangle was about . 
A perfect rectangle with these measurements has an area of 1.30 × 0.75 = 0.975 km2. Lucca (Italy) The medieval city is roughly rectangular with rounded north-east and north-west corners. The maximum distance from east to west is . The maximum distance from north to south is . A perfect rectangle of these dimensions would be 1.36 × 0.80 = 1.088 km2. Bruges (Belgium) The medieval city of Bruges, a major centre in Flanders, was roughly oval or elliptical in shape, with the longer (major) axis running north and south. The maximum distance from north to south (major axis) is . The maximum distance from east to west (minor axis) is . A perfect ellipse of these dimensions would be 2.53 × 1.81 × (π/4) = 3.597 km2. Chester (United Kingdom) Chester is one of the smaller English cities that has a near-intact city wall. The distance from Northgate to Watergate is about 855 metres. The distance from Eastgate to Westgate is about 589 metres. A perfect rectangle of these dimensions would be (855/1000) × (589/1000) = 0.504 km2. Parks Parks come in all sizes; a few are almost exactly one square kilometre in area. Here are some examples: Riverside Country Park, UK. Brierley Forest Park, UK. Rio de Los Angeles State Park, California, US. Jones County Central Park, Iowa, US. Kiest Park, Dallas, Texas, US. Hole-in-the-Wall Park & Campground, Grand Manan Island, Bay of Fundy, New Brunswick, Canada. Downing Provincial Park, British Columbia, Canada. Citadel Park, Poznan, Poland. Sydney Olympic Park, Sydney, Australia, contains 6.63 square kilometres of wetlands and waterways. Golf courses Using the figures published by golf course architects Crafter and Mogford, a course should have a fairway width of 120 metres and 40 metres clear beyond the hole. Assuming an 18-hole course, an area of 80 hectares (0.8 square kilometre) needs to be allocated for the course itself. 
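The rectangle and ellipse approximations used for the walled cities above reduce to two one-line formulas; the figures below are the ones quoted in this section:

```python
import math

def rect_area_km2(length_km, width_km):
    """Approximate a walled city as a rectangle: A = length * width."""
    return length_km * width_km

def ellipse_area_km2(major_km, minor_km):
    """Approximate a walled city as an ellipse: A = (pi/4) * major * minor,
    where major and minor are the full (not semi-) axes."""
    return math.pi / 4 * major_km * minor_km

print(round(rect_area_km2(1.36, 0.80), 3))     # Lucca: 1.088 km2
print(round(ellipse_area_km2(2.53, 1.81), 3))  # Bruges: 3.597 km2
print(round(rect_area_km2(0.855, 0.589), 3))   # Chester: 0.504 km2
```

Note that the ellipse formula takes the full axes (the maximum north-south and east-west distances), which is why the factor is π/4 rather than the π used with semi-axes.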
Examples of golf courses that are about one square kilometre include: Manchester Golf Club, UK Northop Country Park, Wales, UK The Trophy Club, Lebanon, Indiana, US Qingdao International Country Golf Course, Qingdao, Shandong, China Arabian Ranches Golf Club, Dubai Sharm el Sheikh Golf Courses: Sharm el Sheikh, South Sinai, Egypt Belmont Golf Club, Lake Macquarie, NSW, Australia Other areas of one square kilometre or thereabouts The Old City of Jerusalem is almost 1 square kilometre in area. Milton Science Park, Oxfordshire, UK. Mielec Industrial Park, Mielec, Poland The Guildford Campus of Guildford Grammar School, South Guildford, Western Australia Sardar Vallabhbhai National Institute of Technology (SVNIT), Surat, India Île aux Cerfs Island, near the east coast of Mauritius. Peng Chau Island, Hong Kong
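The topographical map-grid arithmetic described earlier in this article (grid lines 1,000 metres apart, so each grid square covers 1 km2) reduces to a single scale conversion:

```python
def grid_spacing_cm(scale_denominator, ground_metres=1000):
    """Distance on the map, in cm, that represents ground_metres on the ground.
    E.g. at 1:50,000, 1,000 m on the ground is 100,000 cm / 50,000 = 2 cm."""
    return ground_metres * 100 / scale_denominator

# 1 km between grid lines at the map scales quoted above:
print(grid_spacing_cm(100_000))  # 1.0 cm -> each grid square is 1 cm2
print(grid_spacing_cm(50_000))   # 2.0 cm -> each grid square is 4 cm2
print(grid_spacing_cm(25_000))   # 4.0 cm -> each grid square is 16 cm2
```

In every case the grid square on the paper, whatever its printed size, encloses one square kilometre of ground.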
Physical sciences
Area
Basics and measurement
72764
https://en.wikipedia.org/wiki/Claw
Claw
A claw is a curved, pointed appendage found at the end of a toe or finger in most amniotes (mammals, reptiles, birds). Some invertebrates such as beetles and spiders have somewhat similar fine, hooked structures at the end of the leg or tarsus for gripping a surface as they walk. The pincers of crabs, lobsters and scorpions, more formally known as their chelae, are sometimes called claws. A true claw is made of a hard protein called keratin. Claws are used to catch and hold prey in carnivorous mammals such as cats and dogs, but may also be used for such purposes as digging, climbing trees, self-defense and grooming, in those and other species. Similar appendages that are flat and do not come to a sharp point are called nails instead. Claw-like projections that do not form at the end of digits but spring from other parts of the foot are properly named spurs. Tetrapods In tetrapods, claws are made of keratin and consist of two layers. The unguis is the harder external layer, which consists of keratin fibers arranged perpendicular to the direction of growth and in layers at an oblique angle. The subunguis is the softer, flaky underside layer whose grain is parallel to the direction of growth. The claw grows outward from the nail matrix at the base of the unguis, and the subunguis grows thicker while travelling across the nail bed. The unguis grows outward faster than the subunguis to produce a curve, and the thinner sides of the claw wear away faster than the thicker middle, producing a more or less sharp point. Tetrapods use their claws in many ways, commonly to grasp or kill prey, to dig, and to climb and hang. Mammals All carnivorans have claws, which vary considerably in length and shape. Claws grow out of the third phalanges of the paws and are made of keratin. 
Many predatory mammals have protractile claws that can partially hide inside the animal's paw, especially the cat family, Felidae, almost all of whose members have fully protractible claws. Outside of the cat family, retractable claws are found only in certain species of the Viverridae (and the extinct Nimravidae). A claw that is retractable is protected from wear and tear. Most cats and dogs also have a dewclaw on the inside of the front paws. It is much less functional than the other claws but does help the cats to grasp prey. Because the dew claw does not touch the ground, it receives less wear and tends to be sharper and longer. A nail is homologous to a claw but is flatter and has a curved edge instead of a point. A nail that is big enough to bear weight is called a "hoof". (Nevertheless, one side of the cloven-hoof of artiodactyl ungulates may also be called a claw). Every so often, the growth of claws stops and restarts, as does hair. In a hair, this results in the hair falling out and being replaced by a new one. In claws, this results in an abscission layer, and the old segment breaks off. This process takes several months for human thumbnails. Cats are often seen working old unguis layers off on wood or on boards made for the purpose. Ungulates' hooves wear or self-trim by ground contact. Domesticated equids (horses, donkeys and mules) usually need regular trimming by a farrier, as a consequence of reduced activity on hard ground. Primates Primate nails consist of the unguis alone, as the subunguis has disappeared. With the evolution of grasping hands and feet, claws are no longer necessary for locomotion, and instead most digits exhibit nails. However, claw-like nails are found in small-bodied callitrichids on all digits except the hallux or big toe. A laterally flattened grooming claw, used for grooming, can be found on the second toe in living strepsirrhines, and the second and third in tarsiers. 
Aye-ayes have functional claws on all digits except the hallux, including a grooming claw on the second toe. Less commonly known, a grooming claw is also found on the second pedal digit of night monkeys (Aotus), titis (Callicebus), and possibly other New World monkeys. Reptiles Most reptiles have well-developed claws. Most lizards have toes ending in stout claws. In snakes, feet and claws are absent, but in many boids such as Boa constrictor, remnants of highly reduced hind-limbs emerge with a single claw as "spurs" on each side of the anal opening. Lizard claws are used as aids in climbing, and in holding down prey in carnivorous species. Birds A talon is the claw of a bird of prey, its primary hunting tool. The talons are very important; without them, most birds of prey would not be able to catch their food. Some birds also use claws for defensive purposes. Cassowaries use claws on their inner toe (digit I) for defence and have been known to disembowel people. All birds, however, have claws, which are used as general holdfasts and protection for the tip of the digits. The hoatzin and turaco are unique among extant birds in having functional claws on the thumb and index finger (digits I and II) on the forelimbs as chicks, allowing them to climb trees until the adult plumage with flight feathers develops. However, several birds have a claw- or nail-like structure hidden under the feathers at the end of the hand digits, notably ostriches, emus, ducks, geese and kiwis. Amphibians The only amphibians to bear claws are the African clawed frogs. Claws evolved separately in the amphibian and amniote (reptiliomorph) line. However, the hairy frog has claw analogues on its feet; the frog intentionally dislocates the tips of its fingers to unsheathe the sharp points of its last phalanges. Arthropods The scientifically correct term for the "claw" of an arthropod, such as a lobster or crab, is a chela (plural chelae). Legs bearing a chela are called chelipeds. 
Chelae are also called pincers.
Biology and health sciences
External anatomy and regions of the body
Biology
72821
https://en.wikipedia.org/wiki/Turkey%20%28bird%29
Turkey (bird)
The turkey is a large bird in the genus Meleagris, native to North America. There are two extant turkey species: the wild turkey (Meleagris gallopavo) of eastern and central North America and the ocellated turkey (Meleagris ocellata) of the Yucatán Peninsula in Mexico. Males of both turkey species have a distinctive fleshy wattle, called a snood, that hangs from the top of the beak. They are among the largest birds in their ranges. As with many large ground-feeding birds (order Galliformes), the male is bigger and much more colorful than the female. The earliest turkeys evolved in North America over 20 million years ago. They share a recent common ancestor with grouse, pheasants, and other fowl. The wild turkey species is the ancestor of the domestic turkey, which was domesticated approximately 2,000 years ago by indigenous peoples. It was this domesticated turkey that later reached Eurasia, during the Columbian exchange. Taxonomy The genus Meleagris was introduced in 1758 by the Swedish naturalist Carl Linnaeus in the tenth edition of his Systema Naturae. The genus name is from the Ancient Greek μελεαγρις, meleagris meaning "guineafowl". The type species is the wild turkey (Meleagris gallopavo). Turkeys are classed in the family Phasianidae (pheasants, partridges, francolins, junglefowl, grouse, and relatives thereof) in the taxonomic order Galliformes. They are close relatives of the grouse and are classified alongside them in the tribe Tetraonini. Extant species The genus contains two species. Fossil species Meleagris californica Californian turkey – Southern California Meleagris crassipes Southwestern turkey - New Mexico Names The linguist Mario Pei proposes two possible explanations for the name turkey. One theory suggests that when Europeans first encountered turkeys in the Americas, they incorrectly identified the birds as a type of guineafowl, which were already being imported into Europe by English merchants to the Levant via Constantinople. 
The birds were therefore nicknamed turkey coqs. The name of the North American bird may have then become turkey fowl or Indian turkeys, which was eventually shortened to turkeys. A second theory arises from turkeys coming to England not directly from the Americas, but via merchant ships from the Middle East, where they were domesticated successfully. Again the importers lent the name to the bird; hence turkey-cocks and turkey-hens, and soon thereafter, turkeys. In 1550, the English navigator William Strickland, who had introduced the turkey into England, was granted a coat of arms including a "turkey-cock in his pride proper". William Shakespeare used the term in Twelfth Night, believed to be written in 1601 or 1602. The lack of context around his usage suggests that the term was already widespread. Other European names for turkeys incorporate an assumed Indian origin, such as ('from India') in French, (, 'bird of India') in Russian, in Polish and Ukrainian, and ('Indian') in Turkish. These are thought to arise from the supposed belief of Christopher Columbus that he had reached India rather than the Americas on his voyage. In Portuguese a turkey is a ; the name is thought to derive from the country in South America 'Peru'. Several other birds that are sometimes called turkeys are not particularly closely related: the brushturkeys are megapodes, and the bird sometimes known as the Australian turkey is the Australian bustard (Ardeotis australis). The anhinga (Anhinga anhinga) is sometimes called the water turkey, from the shape of its tail when the feathers are fully spread for drying. An infant turkey is called a chick or poult. History Turkeys were likely first domesticated in Pre-Columbian Mexico, where they held a cultural and symbolic importance. The Classical Nahuatl word for the turkey, ( in Spanish), is still used in modern Mexico, in addition to the general term . 
Mayan aristocrats and priests appear to have had a special connection to ocellated turkeys, with ideograms of those birds appearing in Mayan manuscripts. Spanish chroniclers, including Bernal Díaz del Castillo and Father Bernardino de Sahagún, describe the multitude of food (both raw fruits and vegetables as well as prepared dishes) that were offered in the vast markets () of Tenochtitlán, noting there were tamales made of turkeys, iguanas, chocolate, vegetables, fruits and more. Turkeys were first exported to Europe via Spain around 1519, where they gained immediate popularity among the aristocratic classes. Turkeys arrived in England in 1541. From there, English settlers brought turkeys to North America during the 17th century. Destruction and re-introduction in the United States In what is now the United States, there were an estimated 10 million turkeys in the 17th century. By the 1930s, only 30,000 remained. In the 1960s and 1970s, biologists started trapping wild turkeys from the few places they remained (including the Ozarks and New York), and re-introducing them into other states, including Minnesota and Vermont. Starting in 2014, researchers sent a survey to wildlife biologists in the National Wild Turkey Federation Technical Committee across the U.S. states to gather data regarding the population of turkeys. As of 2019, the wild turkey population declined by around 3% since 2014. Also as of 2019, the number of wild turkey hunters decreased by 18% since 2014 from the reports of the participating U.S. states. The 2019 data for population was missing information from 12 states and the 2019 hunter data was missing information from 8 states. Human conflicts with wild turkeys Turkeys have been known to be aggressive toward humans and pets in residential areas. Wild turkeys have a social structure and pecking order and habituated turkeys may respond to humans and animals as they do to other turkeys. 
Habituated turkeys may attempt to dominate or attack people that the birds view as subordinates. In 2017, the town of Brookline, Massachusetts, recommended a controversial approach when confronted with wild turkeys. Besides taking a step forward to intimidate the birds, officials also suggested "making noise (clanging pots or other objects together); popping open an umbrella; shouting and waving your arms; squirting them with a hose; allowing your leashed dog to bark at them; and forcefully fending them off with a broom". This advice was quickly rescinded and replaced with a caution that "being aggressive toward wild turkeys is not recommended by State wildlife officials." Fossil record A number of turkeys have been described from fossils. The Meleagridinae are known from the Early Miocene ( mya) onwards, with the extinct genera Rhegminornis (Early Miocene of Bell, U.S.) and Proagriocharis (Kimball Late Miocene/Early Pliocene of Lime Creek, U.S.). The former is probably a basal turkey, the other a more contemporary bird not very similar to known turkeys; both were much smaller birds. A turkey fossil not assignable to genus but similar to Meleagris is known from the Late Miocene of Westmoreland County, Virginia. In the modern genus Meleagris, a considerable number of species have been described, as turkey fossils are robust and fairly often found, and turkeys show great variation among individuals. Many of these supposed fossilized species are now considered junior synonyms. One, the well-documented California turkey Meleagris californica, became extinct recently enough to have been hunted by early human settlers. It has been suggested that its demise was due to the combined pressures of human hunting and climate change at the end of the last glacial period. The Oligocene fossil Meleagris antiquus was first described by Othniel Charles Marsh in 1871. 
It has since been reassigned to the genus Paracrax, first interpreted as a cracid, then soon after as a bathornithid Cariamiformes. Fossil species Meleagris sp. (Early Pliocene of Bone Valley, U.S.) Meleagris sp. (Late Pliocene of Macasphalt Shell Pit, U.S.) Meleagris californica (Late Pleistocene of southwestern U.S.) – formerly Parapavo/Pavo Meleagris crassipes (Late Pleistocene of southwestern North America) Turkeys have been considered by many authorities to be their own family—the Meleagrididae—but a recent genomic analysis of a retrotransposon marker groups turkeys in the family Phasianidae. In 2010, a team of scientists published a draft sequence of the domestic turkey (Meleagris gallopavo) genome. In 2023 a new improved haplotype-resolved domestic turkey genome was published, which confirmed the large inversion on the Z chromosome not found in other Galliformes, and found new structural variations between the parent haplotypes that provide potential new target genes for breeding. Anatomy In anatomical terms, a snood is an erectile, fleshy protuberance on the forehead of turkeys. Most of the time when the turkey is in a relaxed state, the snood is pale and 2–3 cm long. However, when the male begins strutting (the courtship display), the snood engorges with blood, becomes redder and elongates several centimeters, hanging well below the beak (see image). Snoods are just one of the caruncles (small, fleshy excrescences) that can be found on turkeys. While fighting, commercial turkeys often peck and pull at the snood, causing damage and bleeding. This often leads to further injurious pecking by other turkeys and sometimes results in cannibalism. To prevent this, some farmers cut off the snood when the chick is young, a process known as "de-snooding". The snood can be between in length depending on the turkey's sex, health, and mood. Function The snood functions in both intersexual and intrasexual selection. 
Captive female wild turkeys prefer to mate with long-snooded males, and during dyadic interactions, male turkeys defer to males with relatively longer snoods. These results were demonstrated using both live males and controlled artificial models of males. Data on the parasite burdens of free-living wild turkeys revealed a negative correlation between snood length and infection with intestinal coccidia, deleterious protozoan parasites. This indicates that in the wild, the long-snooded males preferred by females and avoided by males seemed to be resistant to coccidial infection. Scientists also conducted a study on 500 male turkeys, gathering data on their snood lengths and blood samples for immune system functionality. They discovered a similar negative correlation. Males whose snoods were not removed had more red blood cells, which help to fight off unwanted invaders, possibly explaining this trend. Behavior Feeding Wild turkeys feed on a variety of foods, depending on the season. In the warmer months of spring and summer, their diet consists mainly of grains such as wheat and corn, and of smaller animals such as grasshoppers, spiders, worms, and lizards. In the colder months of fall and winter, wild turkeys consume smaller fruits and nuts such as grapes, blueberries, acorns, and walnuts. To find this food, they forage continuously, feeding mostly during the sunrise and sunset hours. Domesticated turkeys consume a commercially produced feed formulated to increase the size of the turkeys. To supplement their nutrition, farmers will also feed them grains wild turkeys eat, such as corn. Grooming Turkeys participate in a number of grooming behaviors, including dusting, sunning, and feather preening. In dusting, turkeys get low on their stomach or side and flap their wings, coating themselves with dirt. This action serves to remove debris build-up on the feathers and also to clog tiny pores that parasites such as lice can inhabit. 
Sunning involves turkeys bathing in sunlight, exposing both their top and bottom halves. This serves to liquefy the oil that turkeys naturally produce, spreading it over their feathers, while also drying the feathers after precipitation. In feather preening, turkeys are able to remove dirt and bacteria, while also ensuring that non-durable feathers are removed. Flight Though domestic turkeys are considered flightless, wild turkeys can and do fly for short distances. Turkeys are best adapted for walking and foraging; they do not fly as a normal means of travel. When faced with a perceived danger, wild turkeys can fly up to a quarter mile. Turkeys may also make short flights to assist roosting in a tree. Use by humans The species Meleagris gallopavo is eaten by humans. They were first domesticated by the indigenous people of Mexico from at least 800 BC onwards. By 200 BC, the indigenous people of what is today the American Southwest had domesticated turkeys; though the theory that they were introduced from Mexico was once influential, modern studies suggest that the turkeys of the Southwest were domesticated independently from those in Mexico. Turkeys were used both as a food source and for their feathers and bones, which were used in both practical and cultural contexts. Compared to wild turkeys, domestic turkeys are selectively bred to grow larger in size for their meat. Turkey forms a central part of modern Thanksgiving celebrations in the United States of America, and is often eaten at similar holiday occasions, such as Christmas. The Norfolk turkeys In her memoirs, Lady Dorothy Nevill (1826–1913) recalls that her great-grandfather Horatio Walpole, 1st Earl of Orford (1723–1809), imported a quantity of American turkeys which were kept in the woods around Wolterton Hall and in all probability were the embryo flock for the popular Norfolk turkey breeds of today.
Biology and health sciences
Galliformes
Animals
72827
https://en.wikipedia.org/wiki/Cauchy%27s%20integral%20formula
Cauchy's integral formula
In mathematics, Cauchy's integral formula, named after Augustin-Louis Cauchy, is a central statement in complex analysis. It expresses the fact that a holomorphic function defined on a disk is completely determined by its values on the boundary of the disk, and it provides integral formulas for all derivatives of a holomorphic function. Cauchy's formula shows that, in complex analysis, "differentiation is equivalent to integration": complex differentiation, like integration, behaves well under uniform limits – a result that does not hold in real analysis. Theorem Let be an open subset of the complex plane , and suppose the closed disk defined as is completely contained in . Let be a holomorphic function, and let be the circle, oriented counterclockwise, forming the boundary of . Then for every in the interior of , The proof of this statement uses the Cauchy integral theorem and like that theorem, it only requires to be complex differentiable. Since can be expanded as a power series in the variable it follows that holomorphic functions are analytic, i.e. they can be expanded as convergent power series. In particular is actually infinitely differentiable, with This formula is sometimes referred to as Cauchy's differentiation formula. The theorem stated above can be generalized. The circle can be replaced by any closed rectifiable curve in which has winding number one about . Moreover, as for the Cauchy integral theorem, it is sufficient to require that be holomorphic in the open region enclosed by the path and continuous on its closure. Note that not every continuous function on the boundary can be used to produce a function inside the boundary that fits the given boundary function. For instance, if we put the function , defined for , into the Cauchy integral formula, we get zero for all points inside the circle. 
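The statement and the differentiation formula referred to above (whose displayed equations are missing here) take the following standard form, with γ the positively oriented boundary circle of the disk and a a point in its interior:

```latex
% Cauchy's integral formula, for f holomorphic on an open set containing the closed disk:
f(a) = \frac{1}{2\pi i} \oint_{\gamma} \frac{f(z)}{z - a}\, dz
% Cauchy's differentiation formula, obtained by differentiating under the integral sign:
f^{(n)}(a) = \frac{n!}{2\pi i} \oint_{\gamma} \frac{f(z)}{(z - a)^{n+1}}\, dz
```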
In fact, giving just the real part on the boundary of a holomorphic function is enough to determine the function up to an imaginary constant — there is only one imaginary part on the boundary that corresponds to the given real part, up to addition of a constant. We can use a combination of a Möbius transformation and the Stieltjes inversion formula to construct the holomorphic function from the real part on the boundary. For example, the function has real part . On the unit circle this can be written . Using the Möbius transformation and the Stieltjes formula we construct the function inside the circle. The term makes no contribution, and we find the function . This has the correct real part on the boundary, and also gives us the corresponding imaginary part, but off by a constant, namely . Proof sketch By using the Cauchy integral theorem, one can show that the integral over (or the closed rectifiable curve) is equal to the same integral taken over an arbitrarily small circle around . Since is continuous, we can choose a circle small enough on which is arbitrarily close to . On the other hand, the integral over any circle centered at . This can be calculated directly via a parametrization (integration by substitution) where and is the radius of the circle. Letting gives the desired estimate Example Let and let be the contour described by (the circle of radius 2). To find the integral of around the contour , we need to know the singularities of . Observe that we can rewrite as follows: where and . Thus, has poles at and . The moduli of these points are less than 2 and thus lie inside the contour. This integral can be split into two smaller integrals by Cauchy–Goursat theorem; that is, we can express the integral around the contour as the sum of the integral around and where the contour is a small circle around each pole. Call these contours around and around . 
Now, each of these smaller integrals can be evaluated by the Cauchy integral formula, but they first must be rewritten to apply the theorem. For the integral around , define as . This is analytic (since the contour does not contain the other singularity). We can simplify to be: and now Since the Cauchy integral formula says that: we can evaluate the integral as follows: Doing likewise for the other contour: we evaluate The integral around the original contour then is the sum of these two integrals: An elementary trick using partial fraction decomposition: Consequences The integral formula has broad applications. First, it implies that a function which is holomorphic in an open set is in fact infinitely differentiable there. Furthermore, it is an analytic function, meaning that it can be represented as a power series. The proof of this uses the dominated convergence theorem and the geometric series applied to The formula is also used to prove the residue theorem, which is a result for meromorphic functions, and a related result, the argument principle. It is known from Morera's theorem that the uniform limit of holomorphic functions is holomorphic. This can also be deduced from Cauchy's integral formula: indeed the formula also holds in the limit and the integrand, and hence the integral, can be expanded as a power series. In addition the Cauchy formulas for the higher order derivatives show that all these derivatives also converge uniformly. The analog of the Cauchy integral formula in real analysis is the Poisson integral formula for harmonic functions; many of the results for holomorphic functions carry over to this setting. No such results, however, are valid for more general classes of differentiable or real analytic functions. For instance, the existence of the first derivative of a real function need not imply the existence of higher order derivatives, nor in particular the analyticity of the function. 
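The parametrization z = a + re^{iθ} used in the proof sketch also gives a quick numerical check of the formula. The sketch below is illustrative (the test function e^z and evaluation point 0 are chosen here, not taken from the article): it approximates the Cauchy integral by a Riemann sum over the circle and compares it with the value of the function at the center.

```python
import cmath

def cauchy_integral(f, a, radius=1.0, n=10000):
    """Approximate (1/2*pi*i) * contour integral of f(z)/(z - a) over the
    circle |z - a| = radius, via the parametrization z = a + r*e^{i*theta}."""
    total = 0j
    for k in range(n):
        theta = 2 * cmath.pi * k / n
        z = a + radius * cmath.exp(1j * theta)
        # dz = i * r * e^{i*theta} d(theta), with d(theta) = 2*pi/n
        dz = 1j * radius * cmath.exp(1j * theta) * (2 * cmath.pi / n)
        total += f(z) / (z - a) * dz
    return total / (2j * cmath.pi)

# Cauchy's integral formula predicts the result equals f(a).
approx = cauchy_integral(cmath.exp, 0.0)
print(abs(approx - 1.0))  # tiny: the sum recovers f(0) = e^0 = 1
```

Because the integrand is smooth and periodic in θ, the equally spaced sum converges extremely fast, so even modest n gives near machine-precision agreement.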
Likewise, the uniform limit of a sequence of (real) differentiable functions may fail to be differentiable, or may be differentiable but with a derivative which is not the limit of the derivatives of the members of the sequence. Another consequence is that if is holomorphic in and then the coefficients satisfy Cauchy's estimate From Cauchy's estimate, one can easily deduce that every bounded entire function must be constant (which is Liouville's theorem). The formula can also be used to derive Gauss's Mean-Value Theorem, which states In other words, the average value of over the circle centered at with radius is . This can be calculated directly via a parametrization of the circle. Generalizations Smooth functions A version of Cauchy's integral formula is the Cauchy–Pompeiu formula, and holds for smooth functions as well, as it is based on Stokes' theorem. Let be a disc in and suppose that is a complex-valued function on the closure of . Then One may use this representation formula to solve the inhomogeneous Cauchy–Riemann equations in . Indeed, if is a function in , then a particular solution of the equation is a holomorphic function outside the support of . Moreover, if in an open set , for some (where ), then is also in and satisfies the equation The first conclusion is, succinctly, that the convolution of a compactly supported measure with the Cauchy kernel is a holomorphic function off the support of . Here denotes the principal value. The second conclusion asserts that the Cauchy kernel is a fundamental solution of the Cauchy–Riemann equations. Note that for smooth complex-valued functions of compact support on the generalized Cauchy integral formula simplifies to and is a restatement of the fact that, considered as a distribution, is a fundamental solution of the Cauchy–Riemann operator . 
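The two results cited just above (whose displayed formulas are missing here) are usually stated as follows, for f holomorphic with |f(z)| ≤ M on the circle of radius r about a and with power-series coefficients c_n about a:

```latex
% Cauchy's estimate on the power-series coefficients:
|c_n| \le \frac{M}{r^n}, \qquad \text{equivalently} \qquad |f^{(n)}(a)| \le \frac{n!\, M}{r^n}
% Gauss's mean-value theorem: f(a) is the average of f over the circle:
f(a) = \frac{1}{2\pi} \int_0^{2\pi} f\!\left(a + r e^{i\theta}\right) d\theta
```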
The generalized Cauchy integral formula can be deduced for any bounded open region with boundary from this result and the formula for the distributional derivative of the characteristic function of : where the distribution on the right hand side denotes contour integration along . Now we can deduce the generalized Cauchy integral formula: Several variables In several complex variables, the Cauchy integral formula can be generalized to polydiscs. Let be the polydisc given as the Cartesian product of open discs : Suppose that is a holomorphic function in continuous on the closure of . Then where . In real algebras The Cauchy integral formula is generalizable to real vector spaces of two or more dimensions. The insight into this property comes from geometric algebra, where objects beyond scalars and vectors (such as planar bivectors and volumetric trivectors) are considered, and a proper generalization of Stokes' theorem. Geometric calculus defines a derivative operator under its geometric product — that is, for a -vector field , the derivative generally contains terms of grade and . For example, a vector field () generally has in its derivative a scalar part, the divergence (), and a bivector part, the curl (). This particular derivative operator has a Green's function: where is the surface area of a unit -ball in the space (that is, , the circumference of a circle with radius 1, and , the surface area of a sphere with radius 1). By definition of a Green's function, It is this useful property that can be used, in conjunction with the generalized Stokes theorem: where, for an -dimensional vector space, is an -vector and is an -vector. The function can, in principle, be composed of any combination of multivectors. 
The proof of Cauchy's integral theorem for higher dimensional spaces relies on the using the generalized Stokes theorem on the quantity and use of the product rule: When , is called a monogenic function, the generalization of holomorphic functions to higher-dimensional spaces — indeed, it can be shown that the Cauchy–Riemann condition is just the two-dimensional expression of the monogenic condition. When that condition is met, the second term in the right-hand integral vanishes, leaving only where is that algebra's unit -vector, the pseudoscalar. The result is Thus, as in the two-dimensional (complex analysis) case, the value of an analytic (monogenic) function at a point can be found by an integral over the surface surrounding the point, and this is valid not only for scalar functions but vector and general multivector functions as well.
Mathematics
Calculus and analysis
null
72839
https://en.wikipedia.org/wiki/Foucault%20pendulum
Foucault pendulum
The Foucault pendulum or Foucault's pendulum is a simple device named after French physicist Léon Foucault, conceived as an experiment to demonstrate the Earth's rotation. If a long and heavy pendulum suspended from the high roof above a circular area is monitored over an extended period of time, its plane of oscillation appears to change spontaneously as the Earth makes its 24-hourly rotation. The pendulum was introduced in 1851 and was the first experiment to give simple, direct evidence of the Earth's rotation. Foucault followed up in 1852 with a gyroscope experiment to further demonstrate the Earth's rotation. Foucault pendulums today are popular displays in science museums and universities. History Foucault was inspired by observing a thin flexible rod on the axis of a lathe, which vibrated in the same plane despite the rotation of the supporting frame of the lathe. The first public exhibition of a Foucault pendulum took place in February 1851 in the Meridian of the Paris Observatory. A few weeks later, Foucault made his most famous pendulum when he suspended a brass-coated lead bob with a wire from the dome of the Panthéon, Paris. Because the latitude of its location was , the plane of the pendulum's swing made a full circle in approximately , rotating clockwise approximately 11.3° per hour. The proper period of the pendulum was approximately , so with each oscillation, the pendulum rotates by about . Foucault reported observing 2.3 mm of deflection on the edge of a pendulum every oscillation, which is achieved if the pendulum swing angle is 2.1°. Foucault explained his results in an 1851 paper entitled Physical demonstration of the Earth's rotational movement by means of the pendulum, published in the Comptes rendus de l'Académie des Sciences. He wrote that, at the North Pole: ...an oscillatory movement of the pendulum mass follows an arc of a circle whose plane is well known, and to which the inertia of matter ensures an unchanging position in space. 
If these oscillations continue for a certain time, the movement of the earth, which continues to rotate from west to east, will become sensitive in contrast to the immobility of the oscillation plane whose trace on the ground will seem animated by a movement consistent with the apparent movement of the celestial sphere; and if the oscillations could be perpetuated for twenty-four hours, the trace of their plane would then execute an entire revolution around the vertical projection of the point of suspension. The original bob used in 1851 at the Panthéon was moved in 1855 to the Conservatoire des Arts et Métiers in Paris. A second temporary installation was made for the 50th anniversary in 1902. During museum reconstruction in the 1990s, the original pendulum was temporarily displayed at the Panthéon (1995), but was later returned to the Musée des Arts et Métiers before it reopened in 2000. On April 6, 2010, the cable suspending the bob in the Musée des Arts et Métiers snapped, causing irreparable damage to the pendulum bob and to the marble flooring of the museum. The original, now damaged pendulum bob is displayed in a separate case adjacent to the current pendulum display. An exact copy of the original pendulum has been operating under the dome of the Panthéon, Paris since 1995. Mechanism At either the Geographic North Pole or Geographic South Pole, the plane of oscillation of a pendulum remains fixed relative to the distant masses of the universe while Earth rotates underneath it, taking one sidereal day to complete a rotation. So, relative to Earth, the plane of oscillation of a pendulum at the North Pole (viewed from above) undergoes a full clockwise rotation during one day; a pendulum at the South Pole rotates counterclockwise. When a Foucault pendulum is suspended at the equator, the plane of oscillation remains fixed relative to Earth. 
At other latitudes, the plane of oscillation precesses relative to Earth, but more slowly than at the pole; the angular speed, (measured in clockwise degrees per sidereal day), is proportional to the sine of the latitude, : where latitudes north and south of the equator are defined as positive and negative, respectively. A "pendulum day" is the time needed for the plane of a freely suspended Foucault pendulum to complete an apparent rotation about the local vertical. This is one sidereal day divided by the sine of the latitude. For example, a Foucault pendulum at 30° south latitude, viewed from above by an earthbound observer, rotates counterclockwise 360° in two days. Using enough wire length, the described circle can be wide enough that the tangential displacement along the measuring circle of between two oscillations can be visible by eye, rendering the Foucault pendulum a spectacular experiment: for example, the original Foucault pendulum in Panthéon moves circularly, with a 6-metre pendulum amplitude, by about 5 mm each period. A Foucault pendulum requires care to set up because imprecise construction can cause additional veering which masks the terrestrial effect. Heike Kamerlingh Onnes (Nobel laureate 1913) performed precise experiments and developed a fuller theory of the Foucault pendulum for his doctoral thesis (1879). He observed the pendulum to go over from linear to elliptic oscillation in an hour. By a perturbation analysis, he showed that geometrical imperfection of the system or elasticity of the support wire may cause a beat between two horizontal modes of oscillation. The initial launch of the pendulum is also critical; the traditional way to do this is to use a flame to burn through a thread which temporarily holds the bob in its starting position, thus avoiding unwanted sideways motion (see a detail of the launch at the 50th anniversary in 1902). 
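The sine-of-latitude rule is easy to compute with. A minimal sketch (the helper names and the example latitudes are illustrative, not from the article) converts a latitude into the precession rate and the length of a pendulum day:

```python
import math

SIDEREAL_DAY_H = 23.934  # one sidereal day, in hours

def precession_deg_per_hour(latitude_deg):
    """Clockwise precession rate of the swing plane, in degrees per hour.
    Negative latitudes (southern hemisphere) give negative values,
    i.e. counterclockwise rotation as viewed from above."""
    return 360.0 * math.sin(math.radians(latitude_deg)) / SIDEREAL_DAY_H

def pendulum_day_h(latitude_deg):
    """Hours for one full apparent rotation of the swing plane:
    a sidereal day divided by the sine of the latitude."""
    return SIDEREAL_DAY_H / abs(math.sin(math.radians(latitude_deg)))

# At the Panthéon's latitude (~48.85° N) this reproduces Foucault's
# roughly 11.3° of clockwise rotation per hour.
print(round(precession_deg_per_hour(48.85), 1))
# At 30° S, sin(latitude) = 0.5, so a pendulum day is two sidereal days.
print(round(pendulum_day_h(-30.0) / SIDEREAL_DAY_H, 2))
```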
Notably, veering of a pendulum was observed already in 1661 by Vincenzo Viviani, a disciple of Galileo, but there is no evidence that he connected the effect with the Earth's rotation; rather, he regarded it as a nuisance in his study that should be overcome by suspending the bob on two ropes instead of one. Air resistance damps the oscillation, so some Foucault pendulums in museums incorporate an electromagnetic or other drive to keep the bob swinging; others are restarted regularly, sometimes with a launching ceremony as an added attraction. Besides air resistance (a heavy, symmetrical and aerodynamic bob is used to reduce friction forces, mainly air resistance), the other main engineering problem in creating a 1-meter Foucault pendulum nowadays is said to be ensuring there is no preferred direction of swing. Related physical systems Many physical systems precess in a similar manner to a Foucault pendulum. As early as 1836, the Scottish mathematician Edward Sang contrived and explained the precession of a spinning top. In 1851, Charles Wheatstone described an apparatus that consists of a vibrating spring that is mounted on top of a disk so that it makes a fixed angle with the disk. The spring is struck so that it oscillates in a plane. When the disk is turned, the plane of oscillation changes just like the one of a Foucault pendulum at latitude . Similarly, consider a nonspinning, perfectly balanced bicycle wheel mounted on a disk so that its axis of rotation makes an angle with the disk. When the disk undergoes a full clockwise revolution, the bicycle wheel will not return to its original position, but will have undergone a net rotation of . Foucault-like precession is observed in a virtual system wherein a massless particle is constrained to remain on a rotating plane that is inclined with respect to the axis of rotation. 
The spin of a relativistic particle moving in a circular orbit precesses similarly to the swing plane of a Foucault pendulum. The relativistic velocity space in Minkowski spacetime can be treated as a sphere S3 in 4-dimensional Euclidean space with imaginary radius and imaginary timelike coordinate. Parallel transport of polarization vectors along such a sphere gives rise to Thomas precession, which is analogous to the rotation of the swing plane of a Foucault pendulum due to parallel transport along a sphere S2 in 3-dimensional Euclidean space. In physics, the evolution of such systems is determined by geometric phases. Mathematically they are understood through parallel transport. Absolute reference frame for pendulum The motion of a pendulum, such as the Foucault pendulum, is typically analyzed relative to an inertial frame of reference, approximated by the "fixed stars." These stars, owing to their immense distance from Earth, exhibit negligible motion relative to one another over short timescales, making them a practical benchmark for physical calculations. While fixed stars are sufficient for physical analyses, the concept of an absolute reference frame introduces philosophical and theoretical considerations. Newtonian absolute space Isaac Newton proposed the existence of "absolute space," a universal, immovable reference frame independent of any material objects. In his Principia Mathematica, Newton described absolute space as the backdrop against which true motion occurs. This concept was criticized by later thinkers, such as Ernst Mach, who argued that motion should only be defined relative to other masses in the universe. Cosmic microwave background (CMB) The CMB, the remnant radiation from the Big Bang, provides a universal reference for cosmological observations. By measuring motion relative to the CMB, scientists can determine the velocity of celestial bodies, including Earth, relative to the universe's early state. 
This has led some to consider the CMB a modern analogue of an absolute reference frame. Mach's principle and distant masses Ernst Mach proposed that inertia arises from the interaction of an object with the distant masses in the universe. According to this view, the pendulum's frame of reference might be defined by the distribution of all matter in the cosmos, rather than an abstract absolute space. The "distant masses of the universe" play a crucial role in defining the inertial frame, suggesting that the pendulum's apparent motion might be influenced by the collective gravitational effect of these masses. This perspective aligns with Mach's principle, emphasizing the interconnectedness of local and cosmic phenomena. However, the connection between Mach's principle and Einstein's general relativity remains unresolved. Einstein initially hoped to incorporate Mach's ideas but later acknowledged difficulties in doing so. General relativity and spacetime General relativity suggests that spacetime itself can serve as a reference frame. The pendulum's motion might be understood as relative to the curvature of spacetime, which is influenced by nearby and distant masses. This view aligns with the concept of geodesics in curved spacetime. The Lense-Thirring effect, a prediction of general relativity, implies that massive rotating objects like Earth can slightly "drag" spacetime, which could affect the pendulum's oscillation. This effect, though theoretically significant, is currently too small to measure with a Foucault pendulum. Equation formulation for the Foucault pendulum To model the Foucault pendulum, we consider a pendulum of length L and mass m, oscillating with small amplitudes. In a reference frame rotating with Earth at angular velocity Ω, the Coriolis force must be included. The equations of motion in the horizontal plane (x, y) are: ẍ = 2Ω sin(φ) ẏ − ω₀² x and ÿ = −2Ω sin(φ) ẋ − ω₀² y, where ω₀ = √(g/L) is the natural angular frequency of the pendulum, φ is the latitude, and g is the acceleration due to gravity. 
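As a sketch (not from the source), the pair ẍ = 2Ω sin(φ) ẏ − ω₀²x, ÿ = −2Ω sin(φ) ẋ − ω₀²y can be written in complex form z = x + iy and integrated numerically, then checked against the closed-form solution. The parameter values below are exaggerated and purely illustrative, not Earth values:

```python
import cmath
import math

def foucault_rk4(omega0, omega_z, z0, v0, t_end, dt=1e-3):
    """RK4 integration of z'' = -2j*omega_z*z' - omega0**2 * z, with z = x + iy.

    omega_z = Omega*sin(latitude) is the vertical component of Earth's
    angular velocity; the -2j*omega_z*z' term is the Coriolis force.
    """
    def acc(z, v):
        return -2j * omega_z * v - omega0**2 * z

    z, v = z0, v0
    for _ in range(round(t_end / dt)):
        k1z, k1v = v, acc(z, v)
        k2z, k2v = v + 0.5*dt*k1v, acc(z + 0.5*dt*k1z, v + 0.5*dt*k1v)
        k3z, k3v = v + 0.5*dt*k2v, acc(z + 0.5*dt*k2z, v + 0.5*dt*k2v)
        k4z, k4v = v + dt*k3v, acc(z + dt*k3z, v + dt*k3v)
        z += dt/6 * (k1z + 2*k2z + 2*k3z + k4z)
        v += dt/6 * (k1v + 2*k2v + 2*k3v + k4v)
    return z

def foucault_exact(omega0, omega_z, x0, t):
    """Closed-form solution for z(0) = x0 (real), z'(0) = 0:
    z(t) = exp(-i*omega_z*t) * (a*exp(i*w*t) + b*exp(-i*w*t)),
    w = sqrt(omega0**2 + omega_z**2); the swing plane turns at rate -omega_z.
    """
    w = math.sqrt(omega0**2 + omega_z**2)
    a = x0 * (1 + omega_z / w) / 2
    b = x0 * (1 - omega_z / w) / 2
    return cmath.exp(-1j*omega_z*t) * (a*cmath.exp(1j*w*t) + b*cmath.exp(-1j*w*t))

omega0, omega_z = 2 * math.pi, 0.05   # exaggerated, illustrative values
z_num = foucault_rk4(omega0, omega_z, 1.0 + 0j, 0j, t_end=10.0)
z_ref = foucault_exact(omega0, omega_z, 1.0, 10.0)
print(abs(z_num - z_ref) < 1e-6)  # numerical and exact solutions agree
```

The closed form makes the precession explicit: the factor exp(−iΩ_z t) rotates the whole oscillation pattern clockwise (in the northern hemisphere) at the rate Ω sin φ.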
These coupled differential equations describe the pendulum's motion, incorporating the Coriolis effect due to Earth's rotation. Precession rate calculation The precession rate of the pendulum's oscillation plane depends on latitude. The angular precession rate is given by: ω_p = Ω sin φ, where Ω is Earth's angular rotation rate (approximately 7.29 × 10⁻⁵ radians per second). Examples of precession periods The time for a full rotation of the pendulum's plane is: T = 2π / (Ω sin φ), that is, one sidereal day (about 23.93 hours) divided by sin φ. Calculations for specific locations: Paris, France (latitude 48.85° N): T ≈ 23.93 h / sin 48.85° ≈ 31.8 hours. New York City, USA (latitude 40.7° N): T ≈ 23.93 h / sin 40.7° ≈ 36.7 hours. These calculations show that the pendulum's precession period varies with latitude, completing a full rotation more quickly at higher latitudes. Installations There are numerous Foucault pendulums at universities, science museums, and the like throughout the world. The United Nations General Assembly Building at the United Nations headquarters in New York City has one. The Oregon Convention Center pendulum is claimed to be the largest, its length approximately ; however, there are larger ones listed in the article, such as the one in Gamow Tower at the University of Colorado of . There used to be much longer pendulums, such as the pendulum in Saint Isaac's Cathedral, Saint Petersburg, Russia. The experiment has also been carried out at the South Pole, where it was assumed that the rotation of the Earth would have maximum effect. A pendulum was installed in a six-story staircase of a new station under construction at the Amundsen-Scott South Pole Station. It had a length of and the bob weighed . The location was ideal: no moving air could disturb the pendulum. The researchers confirmed about 24 hours as the rotation period of the plane of oscillation.
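The latitude dependence of the precession period amounts to dividing one sidereal day by the sine of the latitude. A minimal sketch (the Paris and New York latitudes, ~48.85° N and ~40.71° N, are assumed values for illustration):

```python
import math

SIDEREAL_DAY_H = 23.934  # length of one sidereal day in hours

def precession_period_hours(latitude_deg):
    """Hours for the swing plane to complete a full apparent rotation
    (the "pendulum day"): one sidereal day divided by sin(latitude)."""
    return SIDEREAL_DAY_H / math.sin(math.radians(latitude_deg))

print(round(precession_period_hours(48.85), 1))  # Paris: ~31.8 h
print(round(precession_period_hours(40.71), 1))  # New York City: ~36.7 h
print(round(precession_period_hours(90.0), 1))   # pole: one sidereal day
```

At 30° south, sin φ = −0.5, so the magnitude of the period is two sidereal days, matching the example given earlier.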
Physical sciences
Earth science basics: General
Earth science
72877
https://en.wikipedia.org/wiki/Odometer
Odometer
An odometer or odograph is an instrument used for measuring the distance traveled by a vehicle, such as a bicycle or car. The device may be electronic, mechanical, or a combination of the two (electromechanical). The noun derives from the ancient Greek hodómetron, from hodós ("path" or "gateway") and métron ("measure"). Early forms of the odometer existed in the ancient Greco-Roman world as well as in ancient China. In countries using Imperial units or US customary units it is sometimes called a mileometer or milometer, the former name especially being prevalent in the United Kingdom and among members of the Commonwealth. History Classical Era Possibly the first evidence for the use of an odometer can be found in the works of the ancient Roman Pliny (NH 6. 61-62) and the ancient Greek Strabo (11.8.9). Both authors list the distances of routes traveled by Alexander the Great (r. 336–323 BC) as measured by his bematists Diognetus and Baeton. However, the high accuracy of the bematists' measurements rather indicates the use of a mechanical device. For example, the section between the cities Hecatompylos and Alexandria Areion, which later became a part of the Silk Road, was given by Alexander's bematists as 575 Roman miles (529 English miles) long, that is, with a deviation of 0.2% from the actual distance (531 English miles). Of the nine surviving bematists' measurements in Pliny's Naturalis Historia, eight show a deviation of less than 5% from the actual distance, three of them being within 1%. Since these minor discrepancies can be adequately explained by slight changes in the tracks of roads during the last 2300 years, the overall accuracy of the measurements implies that the bematists must already have used a sophisticated device for measuring distances, although there is no direct mention of such a device. 
An odometer for measuring distance was first described by Vitruvius between 27 and 23 BC, although the actual inventor may have been Archimedes of Syracuse (c. 287 BC – ) during the First Punic War. Hero of Alexandria (10 AD – 70 AD) describes a similar device in chapter 34 of his Dioptra. The machine was also used in the time of the Roman Emperor Commodus, although after this point there seems to be a gap between its use in Roman times and that of the 15th century in Western Europe. Some researchers have speculated that the device might have included technology similar to that of the Greek Antikythera mechanism. The odometer of Vitruvius was based on chariot wheels of 4 Roman feet (1.18 m) diameter turning 400 times in one Roman mile (about 1,480 m). For each revolution, a pin on the axle engaged a 400-tooth cogwheel, thus turning it one complete revolution per mile. This engaged another gear with holes along the circumference, where pebbles (calculi) were located, which were to drop one by one into a box. The distance traveled would thus be given simply by counting the number of pebbles. Whether this instrument was ever built at the time is disputed. Leonardo da Vinci later tried to build it himself according to the description, but failed. However, in 1981 engineer Andre Sleeswyk built his own replica, replacing the square-toothed gear designs of Leonardo with the triangular, pointed teeth found in the Antikythera mechanism. With this modification, the Vitruvius odometer functioned perfectly. Imperial China Han dynasty and Three Kingdoms period The odometer was also independently invented in ancient China, possibly by the prolific inventor and early scientist Zhang Heng (78 AD – 139 AD) of the Han dynasty. By the 3rd century (during the Three Kingdoms Period), the Chinese had termed the device the 'jì lĭ gŭ chē' (記里鼓車), or 'li-recording drum carriage' (Note: the modern measurement of li = ). 
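The arithmetic behind Vitruvius's design can be sketched directly: each wheel revolution advances the 400-tooth cogwheel by one tooth, so one pebble drops per 400 revolutions, i.e. roughly once per Roman mile. A minimal model (function names are my own, not from the source):

```python
import math

WHEEL_DIAMETER_M = 1.18                          # 4 Roman feet
CIRCUMFERENCE_M = math.pi * WHEEL_DIAMETER_M     # ~3.71 m per revolution
TEETH = 400                                      # cogwheel teeth

def pebbles_dropped(wheel_revolutions):
    """Each revolution advances the cogwheel one tooth; a pebble drops
    each time the cogwheel completes a full turn (400 teeth)."""
    return wheel_revolutions // TEETH

def distance_m(pebbles):
    """Distance represented by the pebbles collected in the box."""
    return pebbles * TEETH * CIRCUMFERENCE_M

print(pebbles_dropped(4000))   # 10 pebbles after 4000 wheel revolutions
print(round(distance_m(1)))    # ~1483 m, close to a Roman mile (~1480 m)
```

The small mismatch (1483 m vs. 1480 m) reflects rounding in the ancient unit definitions, not a flaw in the gearing principle.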
Chinese texts of the 3rd century tell of the mechanical carriage's functions: as one li is traversed, a mechanically driven wooden figure strikes a drum, and when ten li are traversed, another wooden figure strikes a gong or a bell with its mechanically operated arm. Despite its association with Zhang Heng or even the later Ma Jun (c. 200–265), there is evidence to suggest that the invention of the odometer was a gradual process in Han dynasty China that centered around the huang men court people (i.e. eunuchs, palace officials, attendants and familiars, actors, acrobats, etc.) who would follow the musical procession of the royal 'drum-chariot'. The historian Joseph Needham asserts that it is no surprise this social group would have been responsible for such a device, since there is already other evidence of their craftsmanship with mechanical toys to delight the emperor and the court. There is speculation that some time in the 1st century BC (during the Western Han dynasty), the beating of drums and gongs was mechanically driven, working automatically off the rotation of the road-wheels. This might have actually been the design of one Luoxia Hong, yet by 125 AD the mechanical odometer carriage in China was already known (depicted in a mural of the Xiaotangshan Tomb). The odometer was used also in subsequent periods of Chinese history. The oldest part of the historical text of the Jin Shu (635 AD), the book known as the Cui Bao, recorded the use of the odometer, providing a description (attributing it to the Western Han era, from 202 BC – 9 AD). The passage in the Jin Shu expanded upon this, explaining that it took a similar form to the mechanical device of the south-pointing chariot invented by Ma Jun (200–265, see also differential gear). 
As recorded in the Song Shi of the Song dynasty (960–1279 AD), the odometer and south-pointing chariot were combined into one wheeled device by engineers of the 9th, 11th, and 12th centuries. The Sunzi Suanjing (Master Sun's Mathematical Manual), dated from between the 3rd and 5th centuries, presented a mathematical problem for students involving the odometer. It involved a given distance between two cities, the small distance needed for one rotation of the carriage's wheel, and posed the question of how many rotations the wheels would make in all if the carriage were to travel between points A and B. Song dynasty The historical text of the Song Shi (1345 AD), recording the people and events of the Chinese Song dynasty (960–1279), also mentioned the odometer used in that period. However, unlike written sources of earlier periods, it provided a much more thoroughly detailed description of the device that harkens back to its ancient form (Wade-Giles spelling): The odometer. [The mile-measuring carriage] is painted red, with pictures of flowers and birds on the four sides, and constructed in two storeys, handsomely adorned with carvings. At the completion of every li, the wooden figure of a man in the lower storey strikes a drum; at the completion of every ten li, the wooden figure in the upper storey strikes a bell. The carriage-pole ends in a phoenix-head, and the carriage is drawn by four horses. The escort was formerly of 18 men, but in the 4th year of the Yung-Hsi reign-period (987 AD) the emperor Thai Tsung increased it to 30. 
In the 5th year of the Thien-Sheng reign-period (1027 AD) the Chief Chamberlain Lu Tao-lung presented specifications for the construction of odometers as follows: What follows is a long dissertation made by the Chief Chamberlain Lu Daolong on the ranging measurements and sizes of wheels and gears, along with a concluding description at the end of how the device ultimately functions: The vehicle should have a single pole and two wheels. On the body are two storeys, each containing a carved wooden figure holding a drumstick. The road-wheels are each 6 ft in diameter, and 18 ft in circumference, one evolution covering 3 paces. According to ancient standards the pace was equal to 6 ft and 300 paces to a li; but now the li is reckoned as 360 paces of 5 ft each. [Note: the measurement of the Chinese-mile unit, the li, was changed over time, as the li in Song times differed from the length of a li in Han times.] The vehicle wheel (li lun) is attached to the left road-wheel; it has a diameter of 1.38 ft with a circumference of 4.14 ft, and has 18 cogs (chhih) 2.3 inches apart. There is also a lower horizontal wheel (hsia phing lun), of diameter 4.14 ft and circumference 12.42 ft, with 54 cogs, the same distance apart as those on the vertical wheel (2.3 inches). (This engages with the former.) Upon a vertical shaft turning with this wheel, there is fixed a bronze "turning-like-the-wind wheel" (hsuan feng lun) which has (only) 3 cogs, the distance between these being 1.2 inches. (This turns the following one.) In the middle is a horizontal wheel, 4 ft in diameter, and 12 ft circumference, with 100 cogs, the distance between these cogs being the same as on the "turning-like-the-wind wheel" (1.2 inches). Next, there is fixed (on the same shaft) a small horizontal wheel (hsiao phing lun) 3.3 inches in diameter and 1 ft in circumference, having 10 cogs 1.5 inches apart. 
(Engaging with this) there is an upper horizontal wheel (shang phing lun) having a diameter of 3.3 ft and a circumference of 10 ft, with 100 cogs, the same distance apart as those of the small horizontal wheel (1.5 inches). When the middle horizontal wheel has made 1 revolution, the carriage will have gone 1 li and the wooden figure in the lower storey will strike the drum. When the upper horizontal wheel has made 1 revolution, the carriage will have gone 10 li and the figure in the upper storey will strike the bell. The number of wheels used, great and small, is 8 in all, with a total of 285 teeth. Thus the motion is transmitted as if by the links of a chain, the "dog-teeth" mutually engaging with each other, so that by due revolution everything comes back to its original starting point (ti hsiang kou so, chhuan ya hsiang chih, chou erh fu shih). Subsequent developments Odometers were first developed in the 1600s for wagons and other horse-drawn vehicles in order to measure distances traveled. Abramo Colorni (d. 1599) illustrated a carriage with an odometer in his Euthimetria, a treatise on engineering. Levinus Hulsius published the odometer in 1604 in his work Gründtliche Beschreibung deß Diensthafften und Nutzbahrn Instruments Viatorii oder Wegzählers, So zu Fuß, zu Pferdt unnd zu Fußen gebraucht werden kann, damit mit geringer mühe zu wissen, wie weit man gegangen, geritten, oder gefahren sey: als auch zu erfahren, ohne messen oder zehlen, wie weit von einem Orth zum andern. Daneben wird auch der grosse verborgene Wegweiser angezeiget und vermeldet. In 1645, the French mathematician Blaise Pascal invented the pascaline. Though not an odometer, the pascaline utilized gears to compute measurements. Each gear contained 10 teeth; when moved one complete revolution, a gear advanced the next gear one position, the same principle employed in modern mechanical odometers. 
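The gear ratios quoted in the Song Shi passage can be checked arithmetically: with a road wheel of 18 ft circumference and a li of 360 five-foot paces (1,800 ft), the stated tooth counts make the middle horizontal wheel turn exactly once per li and the upper horizontal wheel once per ten li. A sketch using exact rational arithmetic:

```python
from fractions import Fraction

LI_FT = 360 * 5                    # 1,800 ft per li (Song reckoning)
ROAD_WHEEL_CIRC_FT = 18            # one revolution covers 3 paces

road_revs_per_li = Fraction(LI_FT, ROAD_WHEEL_CIRC_FT)   # 100 revolutions

# The 18-cog vehicle wheel on the road wheel drives the 54-cog lower
# horizontal wheel; the 3-cog "turning-like-the-wind" wheel on the same
# shaft drives the 100-cog middle horizontal wheel.
middle_revs_per_li = road_revs_per_li * Fraction(18, 54) * Fraction(3, 100)

# The 10-cog small horizontal wheel on the middle shaft drives the
# 100-cog upper horizontal wheel: a further tenfold reduction.
upper_revs_per_li = middle_revs_per_li * Fraction(10, 100)

print(middle_revs_per_li)              # 1    -> drum struck once per li
print(upper_revs_per_li)               # 1/10 -> bell struck once per 10 li
print(18 + 54 + 3 + 100 + 10 + 100)    # 285 teeth, matching the text
```

The reduction works out exactly: 100 road revolutions per li times (18/54)(3/100) is precisely one drum strike per li, confirming the internal consistency of Lu Daolong's specification.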
Odometers were developed for ships as well: in 1698, the Englishman Thomas Savery invented an odometer for ships. Benjamin Franklin, U.S. statesman and the first Postmaster General, built a prototype odometer in 1775 that he attached to his carriage to help measure the mileage of postal routes. In 1847, William Clayton and Orson Pratt, pioneers of the Church of Jesus Christ of Latter-day Saints, first implemented the Roadometer they had invented earlier (a version of the modern odometer), which they attached to a wagon used by American settlers heading west. It recorded the distance traveled each day by the wagon trains. The Roadometer used two gears and was an early example of an odometer with pascaline-style gears in actual use. In 1895, Curtis Hussey Veeder invented the Cyclometer, a mechanical device that counted the number of rotations of a bicycle wheel. A flexible cable transmitted the number of rotations of the wheel to an analog odometer visible to the rider, which converted the wheel rotations into the number of miles traveled according to a predetermined formula. In 1903, Arthur P. and Charles H. Warner, two brothers from Beloit, Wisconsin, introduced their patented Auto-Meter. The Auto-Meter used a magnet attached to a rotating shaft to induce a magnetic pull upon a thin metal disk. Measuring this pull provided accurate measurements of both distance and speed information to automobile drivers in a single instrument. The Warners sold their company in 1912 to the Stewart & Clark Company of Chicago. The new firm was renamed the Stewart-Warner Corporation. By 1925, Stewart-Warner odometers and trip meters were standard equipment on the vast majority of automobiles and motorcycles manufactured in the United States. By the early 2000s, mechanical odometers were being phased out on cars from major manufacturers. 
The Pontiac Grand Prix was the last GM car sold in the US to offer a mechanical odometer in 2003; the Canadian-built Ford Crown Victoria and Mercury Grand Marquis were the last Fords sold with one in 2005. Trip meters Most modern cars include a trip meter (trip odometer). Unlike the odometer, a trip meter may be reset at any point in a journey, making it possible to record the distance traveled in any particular journey or part of a journey. It was traditionally a purely mechanical device but, in most modern vehicles, it is now electronic. Many modern vehicles have multiple trip meters. Most mechanical trip meters show a maximum value of 999.9. The trip meter may be used to record the distance traveled on each tank of fuel, making it easy to accurately track the energy efficiency of the vehicle; another common use is resetting it to zero at each instruction in a sequence of driving directions, to be sure when one has arrived at the next turn. Clocking/busting miles and legality A form of fraud is to tamper with the reading on an odometer and present an incorrect number of miles/kilometres traveled to a prospective buyer; this is often referred to as "clocking" in the UK and "busting miles" in the US. This is done to make a car appear to have been driven less than it really has been, and thus to increase its apparent market value. Most new cars sold today use digital odometers that store the mileage in the vehicle's engine control unit, making it difficult (but not impossible) to manipulate the mileage electronically. With mechanical odometers, the speedometer can be removed from the car dashboard and the digits wound back, or the drive cable can be disconnected and connected to another odometer/speedometer pair while on the road. 
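The 999.9 rollover of a mechanical trip meter reflects that it is simply a modular counter in tenths of a unit. A toy model (illustrative only; the class and its names are my own):

```python
class TripMeter:
    """Four-digit mechanical trip meter: counts tenths of a unit and
    wraps around past 999.9, like the physical decade-gear counter."""
    WRAP = 10000  # 1000.0 units expressed in tenths

    def __init__(self):
        self.tenths = 0

    def advance(self, distance):
        """Add distance (in the meter's unit); the display wraps at 1000.0."""
        self.tenths = (self.tenths + round(distance * 10)) % self.WRAP

    def reset(self):
        """The driver-accessible reset knob returns the counter to zero."""
        self.tenths = 0

    @property
    def reading(self):
        return self.tenths / 10

m = TripMeter()
m.advance(999.9)
m.advance(0.2)
print(m.reading)  # 0.1 -- the counter has wrapped past its 999.9 maximum
```

Resolution is fixed at one tenth, which is why trip meters are handy for per-tank fuel tracking but not for fine-grained distance measurement.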
Older vehicles can be driven in reverse to subtract mileage, a concept which provides the premise for a classic scene in the comedy film Ferris Bueller's Day Off; modern odometers, however, add mileage driven in reverse to the total as if driven forward, thereby accurately reflecting the true total wear and tear on the vehicle. The resale value of a vehicle is often strongly influenced by the total distance shown on the odometer, yet odometers are inherently insecure because they are under the control of their owners. Many jurisdictions have chosen to enact laws which penalize people who are found to commit odometer fraud. In the US (and many other countries), vehicle mechanics are also required to keep records of the odometer any time a vehicle is serviced or inspected. Companies such as Carfax then use these data to help potential car buyers detect whether odometer rollback has occurred. Prevalence Research by Irish vehicle check specialist Cartell found that 20% of vehicles imported to Ireland from Great Britain and Northern Ireland had had their mileometers altered to show a lower mileage. Accuracy Most odometers work by counting wheel rotations and assume that the distance traveled is the number of wheel rotations times the tire circumference, which is a standard tire diameter times pi (≈ 3.14159). If nonstandard, severely worn, or underinflated tires are used, this will cause some error in the odometer. The formula is: distance = number of wheel rotations × π × tire diameter. It is common for odometers to be off by several percent. Odometer errors are typically proportional to speedometer errors.
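The proportional effect of tire size follows directly from the formula: a worn or underinflated tire has a smaller circumference, so more rotations are counted per true distance and the odometer over-reads. A sketch (the 3% wear figure is an illustrative assumption):

```python
import math

def odometer_reading(true_distance_m, actual_diameter_m, assumed_diameter_m):
    """Distance the odometer reports: the rotation count implied by the
    actual tire, multiplied by the circumference the odometer assumes."""
    rotations = true_distance_m / (math.pi * actual_diameter_m)
    return rotations * math.pi * assumed_diameter_m

# A tire worn to 97% of its nominal diameter over-reads by about 3.1%.
reported = odometer_reading(100_000, actual_diameter_m=0.97,
                            assumed_diameter_m=1.0)
error_pct = (reported - 100_000) / 100_000 * 100
print(round(error_pct, 2))  # ~3.09
```

Note that π cancels out of the ratio: the percentage error depends only on the ratio of assumed to actual diameter, which is why the error scales the same way for the speedometer.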
Technology
Measuring instruments
null
4350782
https://en.wikipedia.org/wiki/Bacteroides
Bacteroides
Bacteroides is a genus of Gram-negative, obligate anaerobic bacteria. Bacteroides species are non-endospore-forming bacilli, and may be either motile or nonmotile, depending on the species. The DNA base composition is 40–48% GC. Unusually for bacterial organisms, Bacteroides membranes contain sphingolipids. They also contain meso-diaminopimelic acid in their peptidoglycan layer. Bacteroides species are normally mutualistic, making up the most substantial portion of the mammalian gastrointestinal microbiota, where they play a fundamental role in processing complex molecules into simpler ones in the host intestine. As many as 10¹⁰–10¹¹ cells per gram of human feces have been reported. They can use simple sugars when available; however, the main sources of energy for Bacteroides species in the gut are complex host-derived and plant glycans. Studies indicate that long-term diet is strongly associated with gut microbiome composition: those who eat a higher proportion of protein and animal fats have predominantly Bacteroides bacteria, while in those who consume more carbohydrates or fiber, Prevotella species dominate. One of the most clinically important species is Bacteroides fragilis. Bacteroides melaninogenicus has recently been reclassified and split into Prevotella melaninogenica and Prevotella intermedia. Pathogenesis Bacteroides species also benefit their host by excluding potential pathogens from colonizing the gut. Some species (B. fragilis, for example) are opportunistic human pathogens, causing infections of the peritoneal cavity, infections following gastrointestinal surgery, and appendicitis via abscess formation, inhibiting phagocytosis, and inactivating beta-lactam antibiotics. Although Bacteroides species are anaerobic, they are transiently aerotolerant and thus can survive in the abdominal cavity. In general, Bacteroides are resistant to a wide variety of antibiotics, including β-lactams and aminoglycosides, and recently many species have acquired resistance to erythromycin and tetracycline. 
This high level of antibiotic resistance has prompted concerns that Bacteroides species may become a reservoir for resistance in other, more highly pathogenic bacterial strains. Bacteroides has often been considered susceptible to clindamycin, but recent evidence demonstrates an increasing trend in clindamycin resistance rates (up to 33%). In cases where Bacteroides moves outside the gut, due to gastrointestinal tract rupture or intestinal surgery, it can infect several parts of the human body. Bacteroides can enter the central nervous system by penetrating the blood-brain barrier through the olfactory and trigeminal cranial nerves, and can cause meningitis and brain abscesses. Bacteroides has also been isolated from abscesses in the neck and lungs. Some Bacteroides species are associated with Crohn's disease, appendicitis and inflammatory bowel disease. Bacteroides species play multiple roles within the human gut microbiome. Microbiological applications Bacteroides has been suggested as an alternative fecal indicator organism because these bacteria make up a significant portion of the fecal bacterial population and have a high degree of host specificity that reflects differences in the digestive systems of host animals. Over the past decade, real-time polymerase chain reaction (PCR) methods have been used to detect the presence of various microbial pathogens through the amplification of specific DNA sequences without culturing bacteria. One study measured the amount of Bacteroides by using qPCR to quantify the host-specific 16S rRNA genetic marker. This technique allows quantification of genetic markers that are specific to the host of Bacteroides and allows detection of recent contamination. A recent report found that temperature plays a major role in how long the bacteria persist in the environment: their life span increases at colder temperatures (0–4 °C). 
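qPCR quantification of a marker like the host-specific 16S rRNA gene typically works from a standard curve: the cycle threshold Ct is linear in log10(copies), and unknown samples are read off the fitted line. A sketch, not from the source; the slope and Ct values below are synthetic (a slope of −3.32 corresponds to ~100% amplification efficiency):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = m*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return m, my - m * mx

# Standard curve: known marker copy numbers vs. measured Ct
# (synthetic data for illustration: slope -3.32, intercept 37).
log_copies = [2, 3, 4, 5, 6]
ct_values = [30.36, 27.04, 23.72, 20.40, 17.08]

m, b = fit_line(log_copies, ct_values)

def copies_from_ct(ct):
    """Invert the standard curve Ct = m*log10(copies) + b."""
    return 10 ** ((ct - b) / m)

print(round(copies_from_ct(23.72)))  # ~10,000 marker copies in the sample
```

In practice the curve is built from serial dilutions of a known standard, and quantification is only trusted within the range the standards span.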
"A new study has found that there is a three-way relationship between a type of gut bacteria, cortisol, and brain metabolites. This relationship, the researchers hypothesize, may potentially lead to further insight into autism, but more in-depth studies are needed." Another study showed a 5.6-times higher risk of osteoporosis fractures in the low-Bacteroides group of Japanese postmenopausal women. Human Members of the Bacillota and Bacteroidota phyla make up a majority of the bacterial species in the human intestinal microbiota (the "gut microbiome"). The healthy human gut microbiome consists of 157 abundant species, of which 31 (19.7%) are members of the Bacteroidota, while 63 (40%) and 32 (20%) belong to the Bacillota and Actinomycetota, respectively. Bacteroides species' main source of energy is fermentation of a wide range of sugar derivatives from plant material. These compounds are common in the human colon and are potentially toxic. Bacteroides such as Bacteroides thetaiotaomicron convert these sugars to fermentation products which are beneficial to humans. Bacteroides also have the ability to remove side chains from bile acids, thus returning bile acids to the hepatic circulation. There are data suggesting that members of Bacteroides affect the lean or obese phenotype in humans. In one such study, one human twin was obese while the other was lean; when their fecal microbiota were transplanted into germ-free mice, the phenotype in the mouse model corresponded to that of the human donor. Bacteroides are symbiont colonizers of their host intestinal niche and serve several physiological functions, some of which can be beneficial while others are detrimental. Bacteroides participate in the regulation of the intestinal micro-environment and carbohydrate metabolism, with the capacity to adapt to the host environment by hydrolyzing bile salts. Some Bacteroides produce acetate and propionate during sugar fermentation. 
Acetate can prevent the transport of toxins from the gut to the blood, while propionate can prevent the formation of tumors in the human colon. Bacteroides such as Bacteroides uniformis may play a role in alleviating obesity. Low abundance of B. uniformis in the intestines of formula-fed infants was associated with a high risk of obesity. Administering B. uniformis orally may alleviate metabolic and immune dysfunction that contributes to obesity in mice. Similarly, Bacteroides acidifaciens may assist in activating fat oxidation in adipose tissue and thus could protect against obesity.
Biology and health sciences
Gram-negative bacteria
Plants
7549995
https://en.wikipedia.org/wiki/Elk
Elk
The elk (plural: elk or elks; Cervus canadensis), or wapiti, is the second largest species within the deer family, Cervidae, and one of the largest terrestrial mammals in its native range of North America and Central and East Asia. The word "elk" originally referred to the European variety of the moose, Alces alces, but was transferred to Cervus canadensis by North American colonists. The name "wapiti" is derived from a Shawnee and Cree word meaning "white rump", after the distinctive light fur around the tail region which the animals may fluff up or raise to signal their agitation or distress to one another, when fleeing perceived threats, or among males courting females and sparring for dominance. A similar trait is seen in other artiodactyl species, like the bighorn sheep, pronghorn and the white-tailed deer, to varying degrees. Elk dwell in open forest and forest-edge habitats, grazing on grasses and sedges and browsing higher-growing plants, leaves, twigs and bark. Male elk have large, blood- and nerve-filled antlers, which they routinely shed each year as the weather warms up. Males also engage in ritualized mating behaviors during the mating season, including posturing to attract females, antler-wrestling (sparring), and bugling, a loud series of throaty whistles, bellows, screams, and other vocalizations that establish dominance over other males and aim to attract females. Elk were long believed to belong to a subspecies of the European red deer (Cervus elaphus), but evidence from many mitochondrial DNA genetic studies, beginning in 1998, shows that the two are distinct species. The elk's wider rump patch and paler-hued antlers are key morphological differences that distinguish C. canadensis from C. elaphus. 
Although it is currently native only to North America and Central, East and North Asia, the elk had a much wider distribution in the past; prehistoric populations were present across Eurasia and into Western Europe during the Late Pleistocene, surviving into the early Holocene in southern Sweden and the Alps. The now-extinct North American Merriam's elk subspecies (Cervus canadensis merriami) once ranged south into Mexico. The wapiti has also successfully adapted to countries outside its natural range where it has been introduced, including Argentina and New Zealand; the animal's adaptability in these areas may, in fact, be so successful as to threaten the sensitive endemic ecosystems and species it encounters. As a member of the order Artiodactyla (and a distant relative of the Bovidae), elk are susceptible to several infectious diseases which can be transmitted to and/or from domesticated livestock. Efforts to eliminate infectious diseases from elk populations, primarily by vaccination, have had mixed success. Some cultures revere the elk as having spiritual significance. Antlers and velvet are used in traditional medicines in parts of Asia; the production of ground antler and velvet supplements is also a thriving naturopathic industry in several countries, including the United States, China and Canada. The elk is hunted as a game species, and its meat is leaner, and higher in protein, than beef or chicken. Naming and etymology By the 17th century, Alces alces (the moose, called "elk" in Europe) had long been extirpated from the British Isles, and the meaning of the word "elk" to English-speakers became rather vague, acquiring a meaning similar to "large deer". The name wapiti is from a Shawnee and Cree word meaning "white rump". There is a subspecies of wapiti in Mongolia called the Altai wapiti (Cervus canadensis sibiricus), also known as the Altai maral. 
According to the Oxford English Dictionary, the etymology of the word "elk" is "of obscure history". In Classical Antiquity, the European Alces alces was known by names probably borrowed from a Germanic language or another language of northern Europe. By the 8th century, during the Early Middle Ages, the moose was known by a name derived from the Proto-Germanic *elho-, *elhon-. Later, the species became known in Middle English as elk, elcke, or elke, also appearing in the Latinized forms alke and alce. The Oxford English Dictionary notes that elk "is not the normal phonetic representative" of the Old English elch. The American Cervus canadensis was recognized as a relative of the red deer (Cervus elaphus) of Europe, and so Cervus canadensis was also referred to as "red deer". Richard Hakluyt refers to North America as a "lande ... full of many beastes, as redd dere" in his 1584 Discourse Concerning Western Planting. Similarly, John Smith's 1616 A Description of New England referred to red deer. Sir William Talbot's 1672 English translation of John Lederer's Latin Discoveries likewise called the species "red deer", but noted in parentheses that they were "for their unusual largeness improperly termed Elks by ignorant people". Both Thomas Jefferson's 1785
Biology and health sciences
Artiodactyla
https://en.wikipedia.org/wiki/Amphiphile
Amphiphile
In chemistry, an amphiphile, or amphipath, is a chemical compound possessing both hydrophilic (water-loving, polar) and lipophilic (fat-loving, nonpolar) properties. Such a compound is called amphiphilic or amphipathic. Amphiphilic compounds include surfactants and detergents. The phospholipid amphiphiles are the major structural component of cell membranes. Amphiphiles are the basis for a number of areas of research in chemistry and biochemistry, notably that of lipid polymorphism. Organic compounds containing hydrophilic groups at both ends of the molecule are called bolaamphiphilic. The micelles they form in the aggregate are prolate. Structure The lipophilic group is typically a large hydrocarbon moiety, such as a long chain of the form CH3(CH2)n, with n > 4. The hydrophilic group falls into one of the following categories:
Charged groups
 Anionic. Examples, with the lipophilic part of the molecule represented by R, are: carboxylates (RCO2−), sulfates (RSO4−), sulfonates (RSO3−), and phosphates (the charged functional group in phospholipids).
 Cationic. Example: ammoniums (RNH3+).
Polar, uncharged groups. Examples are alcohols with large R groups, such as diacyl glycerol (DAG), and oligoethylene glycols with long alkyl chains.
Often, amphiphilic species have several lipophilic parts, several hydrophilic parts, or several of both. Proteins and some block copolymers are such examples. Amphiphilic compounds have lipophilic (typically hydrocarbon) structures and hydrophilic polar functional groups (either ionic or uncharged). As a result of having both lipophilic and hydrophilic portions, some amphiphilic compounds may dissolve in water and to some extent in non-polar organic solvents. When placed in an immiscible biphasic system consisting of aqueous and organic solvents, the amphiphilic compound will partition between the two phases. The extent of the hydrophobic and hydrophilic portions determines the extent of partitioning.
Biological role Phospholipids, a class of amphiphilic molecules, are the main components of biological membranes. The amphiphilic nature of these molecules defines the way in which they form membranes. They arrange themselves into lipid bilayers, by forming a sheet composed of two layers of lipids. Each layer forms by positioning the lipophilic chains of its molecules to the same side of the layer. The two layers then stack such that their lipophilic chains touch on the inside and their polar groups face the surrounding aqueous media on the outside. Thus the inside of the bilayer sheet is a non-polar region sandwiched between the two polar sheets. Although phospholipids are the principal constituents of biological membranes, there are other constituents, such as cholesterol and glycolipids, which are also included in these structures and give them different physical and biological properties. Many other amphiphilic compounds, such as pepducins, strongly interact with biological membranes by inserting their hydrophobic part into the lipid membrane while exposing the hydrophilic part to the aqueous medium, altering the membranes' physical behavior and sometimes disrupting them. Aβ proteins form antiparallel β sheets which are strongly amphiphilic, and which aggregate to form toxic oxidative Aβ fibrils. Aβ fibrils themselves are composed of amphiphilic 13-mer modular β sandwiches separated by reverse turns. Hydropathic waves optimize the description of the small (40, 42 aa) plaque-forming (aggregative) Aβ fragments. Antimicrobial peptides (AMPs) are another class of amphiphilic molecules; a big-data analysis showed that amphipathicity best distinguished between AMPs with and without activity against gram-negative bacteria. The higher the amphipathicity, the better the chance that an AMP possesses dual antibacterial and antifungal activities. Examples There are several examples of molecules that present amphiphilic properties: Hydrocarbon-based surfactants are an example group of amphiphilic compounds.
Their polar region can be either ionic or non-ionic. Some typical members of this group are: sodium dodecyl sulfate (anionic), benzalkonium chloride (cationic), cocamidopropyl betaine (zwitterionic), and 1-octanol (long-chain alcohol, non-ionic). Many biological compounds are amphiphilic: phospholipids, cholesterol, glycolipids, fatty acids, bile acids, saponins, local anaesthetics, etc. Soap is a common household amphiphilic surfactant compound. Soap mixed with water (polar, hydrophilic) is useful for cleaning oils and fats (non-polar, lipophilic) from kitchenware, dishes, skin, clothing, etc.
Physical sciences
Supramolecular chemistry
Chemistry
https://en.wikipedia.org/wiki/Leptosporangiate%20fern
Leptosporangiate fern
The Polypodiidae, commonly called leptosporangiate ferns, formerly Leptosporangiatae, are one of four subclasses of ferns, and the largest group of living ferns, including some 11,000 species worldwide. The group has also been treated as the class Pteridopsida or Polypodiopsida, although other classifications assign them a different rank. Older names for the group include Filicidae and Filicales, although at least the "water ferns" (now the Salviniales) were then treated separately. The leptosporangiate ferns are one of the four major groups of ferns, the other three being the eusporangiate ferns: the marattioid ferns (Marattiidae, Marattiaceae), the horsetails (Equisetiidae, Equisetaceae), and the whisk ferns and moonworts. There are approximately 8,465 species of living leptosporangiate ferns, compared with about 2,070 for all other ferns, for a total of 10,535 fern species. Almost a third of leptosporangiate fern species are epiphytes. These ferns are called leptosporangiate because their sporangia arise from a single epidermal cell and not from a group of cells, as in eusporangiate ferns (a polyphyletic lineage). The mature sporangia have a wall that is just a single cell thick, and are typically covered with a scale called the indusium, which can cover the whole sorus, form a ring or cup around the sorus, or be strongly reduced or completely absent. Many leptosporangiate ferns have an annulus around the sporangium, which ejects the spores. Taxonomy The leptosporangiate ferns were first recognized as a group, the "Leptosporangiateen", by Karl Ritter von Goebel in 1881, who placed the eusporangiate ferns with seed plants and vascular plants into a coeval "Eusporangiateen". As this classification artificially split the ferns, Christian Luerssen subdivided only the homosporous ferns into Eusporangiatae and Leptosporangiatae in 1884–89.
The latter group was treated at a variety of ranks in subsequent systems of classification. The subclass "Polypodiidae" was first published and used for the homosporous leptosporangiate ferns by Cronquist, Takhtajan and Zimmermann in 1966, typified on Polypodium L. Other contemporary classifications used the name "Filicidae" for this subclass. Smith et al. (2006) carried out the first higher-level classification of ferns based on molecular phylogenetics. They included the heterosporous water ferns (Salviniales) (placed in a separate subclass by Cronquist et al. due to their highly modified morphology) within the leptosporangiate ferns, which they elevated to the rank of class as the Polypodiopsida (a name published by Cronquist et al. to include all ferns). The common ancestor of Salviniales, Cyatheales and Polypodiales went through a whole-genome duplication. Later classifications renamed the group Polypodiidae, initially as a subclass of Equisetopsida sensu lato. This subclass comprises the leptosporangiate ferns, as opposed to the remaining three subclasses, which are informally referred to as eusporangiate ferns. The following diagram shows a likely phylogenetic relationship between subclass Polypodiidae and the other Equisetopsida subclasses in that system. In 2014, Christenhusz and Chase grouped all the fern subclasses together as Polypodiophyta, and in 2016 the Pteridophyte Phylogeny Group (PPG) adopted the class Polypodiopsida sensu lato for the four fern subclasses. The following cladogram shows the phylogenetic relationship between the subclasses according to the PPG, with the first three small subclasses informally grouped as eusporangiate ferns, in contrast to the Polypodiidae or leptosporangiate ferns. Polypodiidae is shown as a sister group of Marattiidae. Subdivision In both the Christenhusz and Chase and the PPG classifications, the extant Polypodiidae are divided into seven orders, 44 families, 300 genera, and an estimated 10,323 species.
Phylogenetic relationships The following phylogram shows a likely relationship between the other vascular plant classes and the leptosporangiate ferns. The relationships between Equisetopsida, Psilotopsida, and Marattiopsida were formerly unclear, but recent studies have shown that Equisetopsida is most likely sister to Psilotopsida. Discussion of molecular classification There has been some challenge to recent molecular studies, claiming that these provide a skewed view of the phylogenetic order because they do not take into account fossil representatives. However, the molecular studies have clarified relations among families that had already been thought to be polyphyletic before the advent of molecular information but that were left in their polyphyletic ranks because there was not enough information to do otherwise. The classification of ferns using these molecular studies, which have generally supported one another, reflects the best information available at present, because traditional morphological characters are not always informative in elucidating evolutionary relationships among ferns. Extinct families The leptosporangiate ferns have a substantial fossil record. For example, fossils assigned to the Dicksoniaceae, a member of the Cyatheales, are known from the Lower Jurassic. A number of other extinct families have been described. They are not included in the classification systems used for extant ferns, and so most cannot be assigned to orders used in these systems. Taylor et al. (2009) use the order "Filicales", which corresponds to four Polypodiidae orders in more modern systems: Hymenophyllales, Gleicheniales, Schizaeales and Cyatheales. The unplaced families include:
Anachoropteridaceae
Botryopteridaceae
Kaplanopteridaceae
Psalixochlaenaceae
Sermayaceae
Skaaripteridaceae
Tedeleaceae
Tempskyaceae
Biology and health sciences
Ferns
Plants
https://en.wikipedia.org/wiki/Rudists
Rudists
Rudists are a group of extinct box-, tube- or ring-shaped marine heterodont bivalves belonging to the order Hippuritida that arose during the Late Jurassic and became so diverse during the Cretaceous that they were major reef-building organisms in the Tethys Ocean, until their complete extinction at the close of the Cretaceous. Shell description The Late Jurassic forms were elongated, with both valves being similarly shaped, often pipe- or stake-shaped, while the reef-building forms of the Cretaceous had one valve that became a flat lid, with the other valve becoming an inverted spike-like cone. The size of these conical forms ranged widely, from just a few centimeters to well over a meter in length. Their "classic" morphology consisted of a lower, roughly conical valve that was attached to the seafloor or to neighboring rudists, and a smaller upper valve that served as a kind of lid for the organism. The small upper valve could take a variety of forms, including a simple flat lid, a low cone, a spiral, and even a star shape. Fossil range and extinction The oldest rudists are found in Late Jurassic rocks in France. The rudists became extinct at the end of the Cretaceous, during the Maastrichtian stage, apparently as a result of the Cretaceous–Paleogene extinction event. It had been thought that the group began a decline about 2.5 million years earlier which culminated in complete extinction half a million years before the end of the Cretaceous. Taxonomy The rudists are, according to different systematic schemes, placed in the orders Hippuritida (Hippuritoida) or Rudistes (sometimes Rudista).
Order: †Hippuritida
 Suborder: †Hippuritidina
  Superfamily: †Caprinoidea
   Family: †Antillocaprinidae
   Family: †Caprinidae
   Family: †Caprinuloideidae
   Family: †Ichthyosarcolitidae
  Superfamily: †Radiolitoidea
   Family: †Caprotinidae
   Family: †Diceratidae
   Family: †Hippuritidae
   Family: †Plagioptychidae
   Family: †Polyconitidae
   Family: †Radiolitidae
 Suborder: †Requieniidina
  Superfamily: †Requienioidea
   Family: †Requieniidae
   Family: †Epidiceratidae
Bieler, Carter & Coan in 2010 also named the non-Hippuritid families Megalodontoidea and Chamoidea, of Megalodontida and Venerida respectively, as "Rudists", but this classification was not monophyletic. Ecology The classification of rudists as true reef-builders is controversial because they would catch and trap much sediment between their lower conical valves; thus, rudists were not completely composed of biogenic carbonates as a coral would be. However, rudists were one of the most important constituents of reefs during the Cretaceous Period. During the Cretaceous, rudist reefs were so successful that they may have driven scleractinian corals out of many tropical environments, including shelves that are today the Caribbean and the Mediterranean. It is likely that their success as reef builders was at least partially due to the extreme environment of the Cretaceous. During this period tropical waters were between 6°C and 14°C warmer than today and also more highly saline, and while this may have been a suitable environment for the rudists, it was not nearly so hospitable to corals and other contemporary reef builders. These rudist reefs were sometimes hundreds of meters tall and often ran for hundreds of kilometers on continental shelves; in fact, at one point they fringed the North American coast from the Gulf of Mexico to the present-day Maritime Provinces. Because of their high porosity, rudist reefs are highly favored oil traps.
Biology and health sciences
Bivalvia
Animals
https://en.wikipedia.org/wiki/Satellite%20navigation%20device
Satellite navigation device
A satellite navigation device or satnav device, also known as a satellite navigation receiver or satnav receiver or simply a GPS device, is a user equipment that uses satellites of the Global Positioning System (GPS) or similar global navigation satellite systems (GNSS). A satnav device can determine the user's geographic coordinates and may display the geographical position on a map and offer routing directions (as in turn-by-turn navigation). Four GNSS systems are currently operational: the original United States' GPS, the European Union's Galileo, Russia's GLONASS, and China's BeiDou Navigation Satellite System. The Indian Regional Navigation Satellite System (IRNSS) will follow, and Japan's Quasi-Zenith Satellite System (QZSS), scheduled for 2023, will augment the accuracy of a number of GNSS systems. A satellite navigation device can retrieve location and time information from one or more GNSS systems in all weather conditions, anywhere on or near the Earth's surface. Satnav reception requires an unobstructed line of sight to four or more GNSS satellites, and is subject to poor satellite signal conditions. In exceptionally poor signal conditions, for example in urban areas, satellite signals may exhibit multipath propagation, where signals bounce off structures, or may be weakened by meteorological conditions. Obstructed lines of sight may arise from a tree canopy or inside a structure, such as a building, garage or tunnel. Today, most standalone satnav receivers are used in automobiles. The satnav capability of smartphones may use assisted GNSS (A-GNSS) technology, which can use base stations or cell towers to provide a faster time to first fix (TTFF), especially when satellite signals are poor or unavailable. However, the mobile-network part of A-GNSS technology is not available when the smartphone is outside the range of the mobile network, while the satnav aspect continues to be available.
History As with many other technological breakthroughs of the latter 20th century, the modern GNSS system can reasonably be argued to be a direct outcome of the Cold War. The multibillion-dollar expense of the US and Russian programs was initially justified by military interest. In contrast, the European Galileo was conceived as purely civilian. In 1960, the US Navy put into service its Transit satellite-based navigation system to aid in naval navigation. In the mid-1960s, the US Navy conducted an experiment in which six satellites in polar orbits were used to track submarines carrying ballistic missiles, observing changes in the satellite signals to establish position. Between 1960 and 1982, as the benefits were shown, the US military consistently improved and refined its satellite navigation technology and satellite system. In 1973, the US military began to plan for a comprehensive worldwide navigational system which eventually became known as the GPS (Global Positioning System). In 1983, in the wake of the tragedy of the downing of Korean Air Lines Flight 007, an aircraft which was shot down while in Soviet airspace due to a navigational error, President Ronald Reagan made the navigation capabilities of the existing military GPS system available for dual civilian use. However, civilian users initially received only a deliberately degraded positioning signal, under a policy known as "Selective Availability". This new availability of the US military GPS system for civilian use required a certain technical collaboration with the private sector for some time, before it could become a commercial reality. The Macrometer Interferometric Surveyor was the first commercial GNSS-based system for performing geodetic measurements. In 1989, Magellan Navigation Inc. unveiled its Magellan NAV 1000, the world's first commercial handheld GPS receiver. These units initially sold for approximately US$2,900 each. In 1990, Mazda's Eunos Cosmo was the first production car in the world with a built-in satnav system.
In 1991, Mitsubishi introduced satnav car navigation on the Mitsubishi Debonair (MMCS: Mitsubishi Multi Communication System). In 1997, a navigation system using Differential GPS was developed as a factory-installed option on the Toyota Prius. In 2000, the Clinton administration removed the military-use signal restrictions, thus providing full commercial access to the US satnav satellite system. As GNSS navigation systems became more widespread and popular, their pricing began to fall and their availability steadily increased. Several additional manufacturers of these systems, such as Garmin (1991), Benefon (1999), Mio (2002) and TomTom (2002), entered the market. The Mitac Mio 168 was the first Pocket PC to contain a built-in GPS receiver. Benefon's 1999 entry into the market also presented users with the world's first phone-based GPS navigation system. Later, as smartphone technology developed, a GPS chip eventually became standard equipment for most smartphones. Satellite navigation systems and devices continue to proliferate, with newly developed software and hardware applications; GPS has been incorporated, for example, into cameras. While the American GPS was the first satellite navigation system to be deployed on a fully global scale, and to be made available for commercial use, it is not the only system of its type. Due to military and other concerns, similar global or regional systems have been, or will soon be, deployed by Russia, the European Union, China, India, and Japan. Technical design GNSS devices vary in sensitivity, speed, vulnerability to multipath propagation, and other performance parameters. High-sensitivity receivers use large banks of correlators and digital signal processing to search for signals very quickly. This results in very fast times to first fix when the signals are at their normal levels, for example, outdoors.
When signals are weak, for example, indoors, the extra processing power can be used to integrate weak signals to the point where they can be used to provide a position or timing solution. GNSS signals are already very weak when they arrive at the Earth's surface. The GPS satellites only transmit 27 W (14.3 dBW) from a distance of 20,200 km in orbit above the Earth. By the time the signals arrive at the user's receiver, they are typically as weak as −160 dBW, equivalent to 100 attowatts (10^−16 W). This is well below the thermal noise level in its bandwidth. Outdoors, GPS signals are typically around the −155 dBW level (−125 dBm). Sensitivity Conventional GPS receivers integrate the received GPS signals for the same amount of time as the duration of a complete C/A code cycle, which is 1 ms. This results in the ability to acquire and track signals down to around the −160 dBW level. High-sensitivity GPS receivers are able to integrate the incoming signals for up to 1,000 times longer than this and therefore acquire signals up to 1,000 times weaker, resulting in an integration gain of 30 dB. A good high-sensitivity GPS receiver can acquire signals down to −185 dBW, and tracking can be continued down to levels approaching −190 dBW. High-sensitivity GPS can provide positioning in many but not all indoor locations. Signals are either heavily attenuated by the building materials or reflected as in multipath. Given that high-sensitivity GPS receivers may be up to 30 dB more sensitive, this is sufficient to track through 3 layers of dry bricks, or up to 20 cm (8 inches) of steel-reinforced concrete, for example. Examples of high-sensitivity receiver chips include SiRFstarIII and MediaTek's MTK II.
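The decibel figures above follow directly from the dBW definition (power in decibels relative to 1 W); a few lines of Python can sanity-check the arithmetic. This is purely illustrative and not part of any receiver implementation:

```python
import math

def dbw_to_watts(dbw):
    """Convert a power level in dBW (decibels relative to 1 W) to watts."""
    return 10 ** (dbw / 10)

# A -160 dBW signal is 1e-16 W, i.e. 100 attowatts.
print(dbw_to_watts(-160))

# Integrating 1,000 times longer gives a processing gain of
# 10 * log10(1000) = 30 dB ...
gain_db = 10 * math.log10(1000)
print(gain_db)

# ... so a receiver that acquires at -160 dBW can, in principle,
# reach signals near -190 dBW with that integration gain.
print(-160 - gain_db)
```

The same conversion explains the −155 dBW / −125 dBm equivalence in the text: dBm is referenced to 1 mW, so the two scales differ by a constant 30 dB.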
In aviation, GPS receivers can be "armed" to the approach mode for the destination airport, so that when the aircraft is within range, the receiver sensitivity will automatically change from en route (±5 nm) and RAIM (±2 nm) to terminal (±1 nm), and change again to ±0.3 nm before reaching the final approach waypoint. Sequential receiver A sequential GPS receiver tracks the necessary satellites by typically using one or two hardware channels. The set will track one satellite at a time, time-tag the measurements and combine them when all four satellite pseudoranges have been measured. These receivers are among the least expensive available, but they cannot operate under high dynamics and have the slowest time-to-first-fix (TTFF) performance. Types Consumer GNSS navigation devices include:
Dedicated GNSS navigation devices
Modules that need to be connected to a computer to be used
Loggers that record trip information for download. Such GPS tracking is useful for trailblazing, mapping by hikers and cyclists, and the production of geocoded photographs.
Converged devices, including satnav phones and geotagging cameras, in which GNSS is a feature rather than the main purpose of the device.
The majority of GNSS devices are now converged devices, and may use assisted GPS, standalone (not network-dependent) operation, or both. The vulnerability of consumer GNSS to radio frequency interference from planned wireless data services is controversial. Dedicated GNSS navigation devices Dedicated devices have various degrees of mobility. Hand-held, outdoor, or sport receivers have replaceable batteries that can run them for several hours, making them suitable for hiking, bicycle touring and other activities far from an electric power source. Their design is ergonomic, their screens are small, and some do not show color, in part to save power. Some use transflective liquid-crystal displays, allowing use in bright sunlight. Cases are rugged and some are water-resistant.
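The sequential receiver described above waits until it has four pseudoranges — measured ranges, each offset by the same unknown receiver clock bias — and then combines them into a single fix. A minimal, illustrative sketch of that combination step using Gauss-Newton least squares follows; the satellite coordinates are synthetic round numbers at roughly GPS orbital radius, not real ephemeris data:

```python
import numpy as np

def solve_fix(sats, pseudoranges, iters=20):
    """Solve for receiver position (x, y, z) and clock bias, all in metres,
    from satellite positions and pseudorange measurements."""
    est = np.zeros(4)  # start from the Earth's centre with zero clock bias
    for _ in range(iters):
        diffs = sats - est[:3]                 # vectors from estimate to satellites
        ranges = np.linalg.norm(diffs, axis=1)
        residuals = pseudoranges - (ranges + est[3])
        # Jacobian rows: negated unit line-of-sight vectors, plus 1 for clock bias
        J = np.hstack([-diffs / ranges[:, None], np.ones((len(sats), 1))])
        est += np.linalg.lstsq(J, residuals, rcond=None)[0]
    return est

# Synthetic scenario: four satellites near GPS orbital radius,
# a receiver on the Earth's surface, and a 300 m clock-bias error.
sats = np.array([[15600e3,  7540e3, 20140e3],
                 [18760e3,  2750e3, 18610e3],
                 [17610e3, 14630e3, 13480e3],
                 [19170e3,   610e3, 18390e3]])
truth = np.array([6371e3, 0.0, 0.0])
bias = 300.0
pr = np.linalg.norm(sats - truth, axis=1) + bias

fix = solve_fix(sats, pr)
print(fix)
```

Four measurements are the minimum because there are four unknowns; real receivers use more satellites when available, which turns the same least-squares step into an overdetermined solve.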
Other receivers, often called mobile, are intended primarily for use in a car, but have a small rechargeable internal battery that can power them away from the car. Special-purpose devices for use in a car may be permanently installed and depend entirely on the automotive electrical system. Many of them have touch-sensitive screens as an input method. Maps may be stored on a memory card. Some offer additional functionality such as a rudimentary music player, image viewer, and video player. The pre-installed embedded software of early receivers did not display maps; 21st-century ones commonly show interactive street maps (of certain regions) that may also show points of interest, route information and step-by-step routing directions, often in spoken form with a feature called "text to speech". Manufacturers include:
Navman products
TomTom products
Garmin products
Mio products
Navigon products
Magellan Navigation consumer products
Satmap Systems Ltd
TeleType products
Integration into smartphones Almost all smartphones now incorporate GNSS receivers. This has been driven both by consumer demand and by service suppliers. There are now many phone apps that depend on location services, such as navigational aids, and multiple commercial opportunities, such as localised advertising. In its early development, access to user location services was driven by European and American emergency services' need to locate callers. All smartphone operating systems offer free mapping and navigational services that require a data connection; some allow the pre-purchase and downloading of maps, but the demand for this is diminishing as data-connection-reliant maps can generally be cached anyway. There are many navigation applications, and new versions are constantly being introduced. Major apps include Google Maps Navigation, Apple Maps and Waze, which require data connections, and iGo for Android, Maverick, and HERE for Windows Phone, which use cached maps and can operate without a data connection.
Consequently, almost any smartphone now qualifies as a personal navigation assistant. The use of mobile phones as navigational devices has outstripped the use of standalone GNSS devices. In 2009, independent analyst firm Berg Insight found that GNSS-enabled GSM/WCDMA handsets in the USA alone numbered 150 million units, against the sale of only 40 million standalone GNSS receivers. Assisted GPS (A-GPS) uses a combination of satellite data and cell tower data to shorten the time to first fix, reduce the need to download a satellite almanac periodically, and help resolve a location when satellite signals are disturbed by the proximity of large buildings. When out of range of a cell tower, the location performance of a phone using A-GPS may be reduced. Phones with an A-GPS-based hybrid positioning system can maintain a location fix when GPS signals are inadequate by using cell tower triangulation and WiFi hotspot locations. Most smartphones download a satellite almanac when online to accelerate a GPS fix when out of cell tower range. Some older, Java-enabled phones lacking integrated GPS may still use external GPS receivers via serial or Bluetooth connections, but the need for this is now rare. By tethering to a laptop, some phones can provide localisation services to the laptop as well. Palm, pocket and laptop PC Software companies have made GPS navigation software programs available for in-vehicle use on laptop computers. Benefits of GPS on a laptop include a larger map overview, the ability to use the keyboard to control GPS functions, and advanced trip-planning features not available on other platforms, such as midway stops and the ability to find alternative scenic routes or a highway-only option. Palms and Pocket PCs can also be equipped with GPS navigation. A pocket PC differs from a dedicated navigation device in that it has its own operating system and can also run other applications.
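External receivers like the serial or Bluetooth units mentioned above conventionally stream their fixes as plain-text NMEA 0183 sentences. A minimal, illustrative parser for the common $GPGGA position sentence is sketched below; a real implementation would also verify the trailing *checksum and handle empty fields:

```python
def parse_gpgga(sentence):
    """Parse an NMEA $GPGGA sentence into (latitude, longitude, altitude_m).

    Illustrative only: checksum verification and empty-field
    handling are omitted for brevity.
    """
    fields = sentence.split(',')
    if not fields[0].endswith('GGA'):
        raise ValueError('not a GGA sentence')

    def to_degrees(value, hemisphere):
        # NMEA encodes latitude as ddmm.mmmm and longitude as dddmm.mmmm
        head, minutes = divmod(float(value), 100)
        degrees = head + minutes / 60
        return -degrees if hemisphere in ('S', 'W') else degrees

    lat = to_degrees(fields[2], fields[3])
    lon = to_degrees(fields[4], fields[5])
    alt = float(fields[9])  # antenna altitude above mean sea level, metres
    return lat, lon, alt

# A commonly cited example GGA sentence (48°07.038' N, 11°31.000' E):
print(parse_gpgga('$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47'))
```

The same degrees-and-minutes decoding applies to the other NMEA position sentences (such as $GPRMC), which is why GPS modules from different vendors are largely interchangeable at the software level.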
GPS modules Other GPS devices need to be connected to a computer in order to work. This computer can be a home computer, laptop, PDA, digital camera, or smartphone. Depending on the type of computer and available connectors, connections can be made through a serial or USB cable, as well as Bluetooth, CompactFlash, SD, PCMCIA and the newer ExpressCard. Some PCMCIA/ExpressCard GPS units also include a wireless modem. Devices usually do not come with pre-installed GPS navigation software; thus, once purchased, the user must install or write their own software. As the user can choose which software to use, it can be better matched to their personal taste. It is very common for a PC-based GPS receiver to come bundled with a navigation software suite. Also, software modules are significantly cheaper than complete stand-alone systems (around €50 to €100). The software may include maps only for a particular region, or the entire world, if software such as Google Maps is used. Some hobbyists have also built their own satnav devices and open-sourced the plans. Examples include the Elektor GPS units. These are based around a SiRFstarIII chip and are comparable to their commercial counterparts. Other chips and software implementations are also available. Applications Vehicle navigation An automotive navigation system takes its location from a GNSS system and, depending on the installed software, may offer the following services:
Mapping, including street maps, in text or in a graphical format
Turn-by-turn navigation directions via text or speech
Directions fed directly to a self-driving car
Traffic congestion maps, historical or real-time, and suggested alternative directions
Information on nearby amenities such as restaurants, fueling stations, and tourist attractions
Alternative routes.
Aviation Aviators use satnav to navigate and to improve safety and the efficiency of flight.
This may allow pilots to be independent of ground-based navigational aids, enable more efficient routes and provide navigation into airports that lack ground-based navigation and surveillance equipment. Some GPS units use satellite augmentation to give aviators a clearer picture and enable safe landings in poor-visibility conditions. Two new GPS signals have been introduced: one to assist in safety-critical conditions in the sky, and another to make GPS a more robust navigation service. Many aviation services now require the use of GPS. Commercial aviation applications include GNSS devices that calculate location and feed that information to large multi-input navigational computers for autopilot, course information and correction displays to the pilots, and course tracking and recording devices. Military Military applications include devices similar to consumer sport products for foot soldiers (commanders and regular soldiers), small vehicles and ships, and devices similar to commercial aviation applications for aircraft and missiles. Examples are the United States military's Commander's Digital Assistant and the Soldier Digital Assistant. Prior to May 2000, only the military had access to the full accuracy of GPS. Consumer devices were restricted by selective availability (SA), which was scheduled to be phased out but was removed abruptly by President Clinton. Differential GPS is a method of cancelling out the error of SA and improving GPS accuracy, and has been routinely available in commercial applications such as golf carts. GPS is limited to about 15-meter accuracy even without SA; DGPS can be accurate to within a few centimeters. Issues Hazards of relying on satnav GPS maps and directions are occasionally imprecise.
Some people have gotten lost by asking for the shortest route, such as a couple in the United States who were looking for the shortest route from southern Oregon to Jackpot, Nevada. In August 2009, a young mother and her six-year-old son became stranded in Death Valley after following satnav directions that led her up an unpaved dead-end road. When they were found five days later, her son had died from the effects of heat and dehydration. In May 2012, Japanese tourists in Australia were stranded while traveling to North Stradbroke Island when their satnav instructed them to drive into Moreton Bay. In 2008, a satnav routed a softball team's bus into a 9 ft tunnel, which sliced off the top of the bus and hospitalized the whole team. Brad Preston, of Oregon, claims that people are routed into his driveway five to eight times a week because their satnav shows a street through his property. John and Starry Rhodes, a couple from Reno, Nevada, were driving home from Oregon when they noticed heavy snow in the area, but decided to keep going because they were already 30 miles down the road. The satnav led them to an unplowed road in the Oregon forest, where they were stuck for three days. Mary Davis was driving in an unfamiliar place when her satnav told her to make a right turn onto a train track as a train was approaching. Fortunately, a local police officer noticed the situation and urged her to get out of the car as fast as she could. She escaped, leaving the car for the train to hit and total; the officer commented that there was a very good chance they could have had a fatality on their hands. Other hazards involve an alley being listed as a street, a lane being identified as a road, or rail tracks as a road. Obsolete maps sometimes cause the unit to lead a user on an indirect, time-wasting route, because roads change over time.
Smartphone satnav information is usually updated automatically and free of additional charge. Manufacturers of separate satnav devices also offer map update services for their products, usually for a fee. Privacy concerns User privacy may be compromised if satnav-equipped handheld devices such as mobile phones upload user geo-location data through associated software installed on the device. User geo-location is currently the basis for navigational apps such as Google Maps and for location-based advertising, which can promote nearby shops but may also allow an advertising agency to track user movements and habits for future use. Regulatory bodies differ between countries regarding the treatment of geo-location data as privileged or not; privileged data cannot be stored, or otherwise used, without the user's consent. Vehicle tracking systems allow employers to track their employees' locations, raising questions about violations of employee privacy. There are cases where employers continued to collect geo-location data while an employee was off duty, in private time. Rental car services may use the same technique to geo-fence their customers to the areas they have paid for, charging additional fees for violations. In 2010, the New York Civil Liberties Union filed a case against the Labor Department for firing Michael Cunningham after tracking his daily activity and locations using a satnav device attached to his car. Private investigators use planted GPS devices to provide information to their clients on a target's movements.
https://en.wikipedia.org/wiki/Rec.%20709
Rec. 709
ITU-R Recommendation 709, usually abbreviated Rec. 709, BT.709, or ITU-R 709, is a standard developed by the Radiocommunication Sector of the International Telecommunication Union (ITU-R) for image encoding and signal characteristics of high-definition television (HDTV). The standard specifies a scheme for digital encoding of colors as triplets of small integers, a widescreen format with 1080 active lines per picture and 1920 square pixels per line (a 16:9 aspect ratio), as well as several details of signal capture, transmission, and display. While directed at HDTV, some of its specifications (such as the color encoding) have also been adopted for other uses. Technical details The standard is freely available at the ITU website, and that document should be used as the authoritative reference. The essentials are summarized below. Image format and definition Recommendation ITU-R BT.709-6 defines a common image format (CIF) whose picture characteristics are independent of the frame rate. The image is 1920x1080 pixels, for a total pixel count of 2,073,600 and a 16:9 aspect ratio. Frame rates BT.709-6 specifies the following possible frame rates and pixel scanning orders. The options for the latter are progressively scanned frame (P), progressive segmented frame (PsF), and interlaced (I):
- 24/P, 24/PsF, 23.976/P, 23.976/PsF: These combinations match the frame rate used for theatrical motion pictures. The fractional rates are included for compatibility with the "pull-down" rates used with NTSC.
- 50/P, 25/P, 25/PsF, 50/I (25 fps): These combinations are provided for compatibility with earlier "50 Hz" TV standards, such as PAL or SECAM. There are no fractional rates, as PAL and SECAM did not have the pull-down issue of NTSC.
- 60/P, 59.94/P, 30/P, 30/PsF, 29.97/P, 29.97/PsF, 60/I (30 fps), 59.94/I (29.97 fps): These combinations offer compatibility with earlier "60 Hz" TV standards, such as NTSC. Here again, the fractional rates are for compatibility with legacy NTSC pull-down rates.
Cameras and monitors may use any of these modes. Video captured in progressive mode can be recorded, broadcast, or streamed in progressive or progressive segmented frame modes. Video captured using an interlaced mode must be distributed as interlace unless a de-interlace process is applied in post-production. In cases where a progressively captured image is distributed in segmented frame mode, the segment/field frequency must be twice the frame rate; thus 30/PsF has the same field rate as 60/I. The RGB color space Colors in the BT.709 standard are basically described according to the RGB color model, namely as mixtures of three primaries, "red" (R), "green" (G) and "blue" (B). For BT.709, their coordinates in the CIE 1931 chromaticity diagram are:

    Primary   x       y
    Red       0.640   0.330
    Green     0.300   0.600
    Blue      0.150   0.060
    White     0.3127  0.3290

In the BT.709 standard, a color value is conceptually represented by three numbers R, G, B between 0 and 1, where 0 means the absence of the corresponding primary color and 1 means the maximum intensity that the device can represent. If these numbers are interpreted as Cartesian coordinates in a three-dimensional space, the representable colors correspond to points in an axis-aligned cube of side 1, with corner (0, 0, 0) representing the color black and (1, 1, 1) representing the maximum-brightness white. More generally, points along the cube's diagonal represent shades of grey. The white point coordinates above define this white color as being CIE illuminant D65 for 2° standard observing conditions. Non-linear encoding The coordinates R, G, B are supposed to be proportional to the physical intensity of each primary, namely emitted or received light power per unit of area. For efficiency reasons, the standard specifies a non-linear transformation of each component signal, resulting in the non-linear components R', G', B'. This opto-electronic transfer function (OETF) is defined as

    V = 4.5 L                     for 0 ≤ L < 0.018
    V = 1.099 L^0.45 − 0.099      for 0.018 ≤ L ≤ 1

where L is the linear coordinate (R, G, or B) and V is the corresponding non-linear value (R', G', or B'), both in the range [0, 1].
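The piecewise OETF above, and its mathematical inverse, can be written as a short Python sketch (an illustrative implementation using the rounded constants quoted in the text, not code from the standard):

```python
def bt709_oetf(l: float) -> float:
    """BT.709 opto-electronic transfer function:
    linear light L (0..1) to non-linear signal value V (0..1)."""
    if l < 0.018:
        return 4.5 * l                     # linear segment near black
    return 1.099 * l ** 0.45 - 0.099       # shifted power law elsewhere

def bt709_oetf_inverse(v: float) -> float:
    """Mathematical inverse of the OETF (signal back to linear light).
    The break point 0.018 maps to 4.5 * 0.018 = 0.081 on the signal axis."""
    if v < 0.081:
        return v / 4.5
    return ((v + 0.099) / 1.099) ** (1 / 0.45)
```

A quick sanity check: `bt709_oetf(0.0)` gives 0.0, `bt709_oetf(1.0)` gives 1.0 (since 1.099 − 0.099 = 1), and `bt709_oetf_inverse(bt709_oetf(0.5))` round-trips back to 0.5.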
Non-linear decoding In order to display the colors on a device, such as an HDTV monitor, the encoded values should be converted back to physical intensities of the primaries. Mathematically, the inverse of the non-linear encoding above would be

    L = V / 4.5                           for 0 ≤ V < 0.081
    L = ((V + 0.099) / 1.099)^(1/0.45)    for 0.081 ≤ V ≤ 1

However, the BT.709 standard does not specify this conversion (sometimes referred to as the "display gamma"). In practice, it depends on various factors such as the capabilities of the monitor, the viewing conditions, and desired visual effects (such as contrast or saturation stretching). The standard response for HDTV monitors is covered in standards ITU-R BT.1886 and EBU Tech 3320. The Y'C'BC'R color space The BT.709 standard also defines an alternative representation of colors by three coordinates Y', C'B, C'R, which are linear combinations of the (non-linear) RGB coordinates R', G', B'. Namely,

    Y'  = 0.2126 R' + 0.7152 G' + 0.0722 B'
    C'B = (B' − Y') / 1.8556
    C'R = (R' − Y') / 1.5748

The value Y' is called "luminance" in the standard, and is roughly an approximation of the CIE Y coordinate (which is presumed to measure the perceptual brightness of the color) modified by the non-linear function above. However, since Y' is computed from the non-linear RGB components, this equivalence is correct only for shades of gray. The other two coordinates indicate the "blueness" and "redness" of the color's hue. According to these formulas, as R', G', and B' vary between 0 and 1, the luminance Y' will vary between 0 and 1, while C'B and C'R will vary between −0.5 and +0.5. Quantization For digital storage, transmission, and processing, the BT.709 standard specifies that the non-linear color coordinates R', G', B', Y', C'B, and C'R shall be converted into integers DR', DG', DB', DY', DC'B, and DC'R with a fixed number of bits n, either 8 or 10. This quantization shall be performed by simple scaling and rounding, so as to yield integers that span a proper subset of the n-bit integers. Specifically,

    DY' = round((219 Y' + 16) × 2^(n−8))

and similarly for DR', DG', DB'; whereas

    DC'B = round((224 C'B + 128) × 2^(n−8))

and similarly for DC'R. The function round should round the argument to the nearest integer, with ties rounded up (that is, round(12.5) = 13 and round(−12.5) = −12). These quantization formulas are the same as those defined in ITU-R BT.601.
As implied by these formulas, the signals R', G', B', and Y' are mapped from the range [0, 1] to 8-bit integers in [16..235], while C'B and C'R are mapped from the range [−0.5, +0.5] to integers in [16..240], with 0 mapped to 128. For n = 10 bits, the quantized values range in [64..940] and [64..960], respectively. It follows that in 8-bit R'G'B' the color black is represented as (16, 16, 16) while white is (235, 235, 235). In 8-bit Y'C'BC'R, black is (16, 128, 128) and white is (235, 128, 128). Quantized color coordinates outside the nominal ranges above are allowed, but typically they would be clamped for broadcast or for display (except for superwhite and xvYCC). However, the 8-bit values 0 and 255 and the 10-bit values 0..3 and 1020..1023 are reserved for timing marks (SAV and EAV) and must not appear in color data. History The creation of a worldwide HDTV standard was approved in 1989 by the Comité consultatif international pour la radio (CCIR) as "Recommendation XA/11 MOD F". The first official version of the standard was approved in 1990 by the CCIR, under the name "Recommendation 709". The CCIR became the ITU-R in 1992 and released a new version of the standard (BT.709-1) in November 1993. These early versions still left many unanswered questions, and the lack of consensus toward a worldwide HDTV standard was evident; so much so that some early HDTV systems such as 1035i30 and 1152i25 were still a part of the standard as late as 2002, in BT.709-5. The most recent version is BT.709-6, released in 2015. The standard strictly determined the picture size but offered several options for the pixel scanning order and frame rate. This flexibility allowed BT.709 to become the worldwide standard for HDTV, letting manufacturers create a single television set or display for all markets worldwide.
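Putting the matrix and quantization formulas together, a short Python sketch (illustrative, using the constants quoted above) reproduces the black and white code values:

```python
def rgb_to_ycbcr(rp, gp, bp):
    """Non-linear R'G'B' (each 0..1) to analog Y'C'BC'R per BT.709."""
    y = 0.2126 * rp + 0.7152 * gp + 0.0722 * bp
    cb = (bp - y) / 1.8556
    cr = (rp - y) / 1.5748
    return y, cb, cr

def quantize(y, cb, cr, n=8):
    """Scale analog Y'C'BC'R to n-bit integer code values (n = 8 or 10).
    Note: Python's round() uses banker's rounding rather than BT.709's
    ties-rounded-up rule; the two agree for the values used here."""
    scale = 2 ** (n - 8)
    dy = round((219 * y + 16) * scale)
    dcb = round((224 * cb + 128) * scale)
    dcr = round((224 * cr + 128) * scale)
    return dy, dcb, dcr

print(quantize(*rgb_to_ycbcr(0, 0, 0)))   # black -> (16, 128, 128)
print(quantize(*rgb_to_ycbcr(1, 1, 1)))   # white -> (235, 128, 128)
```

With n=10, the same white maps to (940, 512, 512), matching the 10-bit ranges given above.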
Justification for the non-linear encoding The BT.709 standard calls the non-linear encoding of R, G, B the opto-electronic transfer function because it was meant to resemble the conversion of light intensity into analog electrical signals implemented by older non-digital cameras. It had long been known that a non-linear encoding of colors is more efficient than a linear one, because human vision is more sensitive to brightness changes at low light levels. That conversion was commonly described as a power law with exponent near 0.5 (hence the common names "gamma correction" or "camera gamma" for the encoding function). Indeed, the BT.709 encoding function is close to a power law with exponent near 1/2.35. The BT.709 encoding function is not a simple power law, because a pure power law has infinite slope at the origin, which emphasizes camera noise and is problematic for analog-to-digital converters. Thus the standard opted for a piecewise function that combines a simple linear function for low light levels with a shifted power law, V = α L^0.45 − (α − 1), for larger values. Having chosen 0.45 as the exponent and 4.5 as the slope of the linear part, the conditions for the function to be continuous (without sudden jumps) and smooth (without sudden changes of slope) at the break point L = β are

    α β^0.45 − (α − 1) = 4.5 β        (continuity)
    0.45 α β^(0.45 − 1) = 4.5         (equal slopes)

The solution of these equations is α ≈ 1.0993 and β ≈ 0.01805; the values α − 1 and β were rounded to 0.099 and 0.018, respectively. Standards conversion Conversion between different standards of video frame rates and color encoding has always been a challenge for content producers distributing through regions with different standards and requirements. While BT.709 has eased the compatibility issue for the consumer and the television set manufacturer, broadcast facilities still use a particular frame rate based on region, such as 29.97 in North America or 25 in Europe, meaning that broadcast content still requires at least frame rate conversion. Color gamuts The BT.709 red and blue primaries are the same as the EBU Tech 3213 (PAL) primaries.
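Those two conditions can be checked numerically. Substituting the slope condition (which gives α = 10 β^0.55) into the continuity condition reduces the system to 10 β^0.55 = 5.5 β + 1, a single equation in β that a simple bisection solves; this is an illustrative verification, not part of the standard:

```python
def solve_break_point(lo=0.01, hi=0.03, iters=100):
    """Bisection for the break point beta satisfying 10*b**0.55 == 5.5*b + 1.
    g is negative at lo and positive at hi, and monotone on the bracket."""
    g = lambda b: 10 * b ** 0.55 - (5.5 * b + 1)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

beta = solve_break_point()
alpha = 10 * beta ** 0.55            # from the equal-slopes condition
print(round(beta, 3), round(alpha - 1, 3))   # -> 0.018 0.099
```

The computed values round to exactly the 0.018 and 0.099 constants that appear in the standard's encoding function.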
The yG coordinate, too, is the same, while xG is halfway between EBU Tech 3213's xG and SMPTE C's xG. The resulting BT.709 color space is almost identical to that of BT.601-6 as used by PAL and NTSC. It covers 33.24% of the CIE 1976 u'v' space and 35.9% of the CIE 1931 xy diagram. Converting standard definition The vast legacy library of standard-definition programs and content presents further challenges. NTSC, PAL, and SECAM are all interlaced formats in a 4:3 aspect ratio, and at a relatively low resolution. Scaling them up to HD resolution with a 16:9 aspect ratio presents a number of challenges. First is the potential for distracting motion artifacts due to interlaced video content. The solution is either to up-convert only to an interlaced BT.709 format at the same field rate and scale the fields independently, or to use motion processing to remove the inter-field motion and deinterlace, creating progressive frames. In the latter case, motion processing can introduce artifacts and can be slow to process. Second is the issue of accommodating the SD 4:3 aspect ratio in the HD 16:9 frame. Cropping the top and/or bottom of the standard-definition frame may or may not work, depending on whether the composition allows it and whether there are graphics or titles that would be cut off. Alternately, pillar-boxing can show the entire 4:3 image by leaving black borders on the left and right; sometimes this black is filled with a stretched and blurred version of the image. In addition, the SMPTE C RGB primaries used in North American standard definition are different from those of BT.709 (SMPTE C is commonly referred to as NTSC; however, it is a different set of primaries and a different white point from the 1953 NTSC). The red and blue primaries for PAL and SECAM are the same as BT.709, with a change in the green primary. Converting the image precisely requires a LUT (lookup table) or a color-managed workflow to convert the colors to the new colorspace.
However, in practice this conversion is often ignored (mpv is an exception): even when a player is color-managed (most are not, including VLC), it may handle only BT.709 or BT.2020 primaries. Luma coefficients When encoding Y'CBCR video, BT.709 creates gamma-encoded luma (Y') using the matrix coefficients 0.2126, 0.7152, and 0.0722 (together they add to 1). BT.709-1 used the slightly different 0.2125, 0.7154, and 0.0721 (changed to the standard values in BT.709-2). Although worldwide agreement on a single R'G'B' system was achieved with Rec. 709, the adoption of different luma coefficients (as those are derived from the primaries and white point) for Y'CBCR requires the use of different luma-chroma decoding for standard definition and high definition. Conversion software and hardware These problems can be handled with video processing software, which can be slow, or with hardware solutions, which allow for realtime conversion, often with quality improvements. Film retransfer A more ideal solution is to go back to original film elements for projects that originated on film. Due to the legacy issues of international distribution, many television programs that were shot on film used a traditional negative cutting process and then had a single film master that could be telecined for different formats. These projects can re-telecine their cut negative masters to a BT.709 master at a reasonable cost, and gain the benefit of the full resolution of film. On the other hand, projects that originated on film but completed their master using video online methods would need to re-telecine the individual film takes and then re-assemble them; a significantly greater amount of labor and machine time is required in this case, compared with a telecine of a conformed negative. Here, enjoying the benefits of the film original entails much higher costs to conform the film originals to a new HD master. Comparison to sRGB sRGB was created after the early development of Rec. 709.
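The practical consequence of the differing luma coefficients can be illustrated with a short Python sketch (the example color is arbitrary and hypothetical): encoding a color with the BT.709 matrix and then decoding it with the BT.601 (SD) matrix visibly shifts the result.

```python
def encode(rp, gp, bp, kr, kb):
    """Non-linear R'G'B' to analog Y'CbCr for given luma coefficients Kr, Kb."""
    y = kr * rp + (1 - kr - kb) * gp + kb * bp
    return y, (bp - y) / (2 * (1 - kb)), (rp - y) / (2 * (1 - kr))

def decode(y, cb, cr, kr, kb):
    """Inverse of encode() for given luma coefficients."""
    rp = y + 2 * (1 - kr) * cr
    bp = y + 2 * (1 - kb) * cb
    gp = (y - kr * rp - kb * bp) / (1 - kr - kb)
    return rp, gp, bp

BT709 = (0.2126, 0.0722)   # (Kr, Kb) for high definition
BT601 = (0.299, 0.114)     # (Kr, Kb) for standard definition

color = (0.8, 0.3, 0.1)                  # an arbitrary example color
ycbcr = encode(*color, *BT709)
same = decode(*ycbcr, *BT709)            # correct matrix: exact round trip
wrong = decode(*ycbcr, *BT601)           # SD matrix: all three components shift
```

Decoding with the matching coefficients recovers the original color exactly; decoding with the BT.601 coefficients shifts the red component here by several percent, which is why players and converters must select the luma-chroma matrix according to the content's definition.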
The creators of sRGB chose to use the same primaries and white point as Rec. 709, but changed the tone response curve (sometimes referred to as gamma) to better suit the intended use in offices and brighter conditions than television viewing in a dark living room. Rec. 709 and sRGB share the same primary chromaticities and white point chromaticity; however, sRGB is explicitly output (display) referred, with an equivalent gamma of 2.2 (the actual function is also piecewise, to avoid issues near black). Display P3 uses the sRGB EOTF with its linear segment; a change of that segment from the BT.709 one is needed, either by using the parametric curve encoding of ICC v4 or by using a slope limit.
https://en.wikipedia.org/wiki/Academic%20discipline
Academic discipline
An academic discipline or academic field is a subdivision of knowledge that is taught and researched at the college or university level. Disciplines are defined (in part) and recognized by the academic journals in which research is published, and the learned societies and academic departments or faculties within colleges and universities to which their practitioners belong. Academic disciplines are conventionally divided into the humanities (including philosophy, language, art, and cultural studies), the scientific disciplines (such as physics, chemistry, and biology), and the formal sciences, like mathematics and computer science. The social sciences are sometimes considered a fourth category. An academic discipline is also known as a field of study, field of inquiry, research field, or branch of knowledge; the different terms are used in different countries and fields. Individuals associated with academic disciplines are commonly referred to as experts or specialists. Others, who may have studied liberal arts or systems theory rather than concentrating in a specific academic discipline, are classified as generalists. While each academic discipline is a more or less focused practice, scholarly approaches such as multidisciplinarity/interdisciplinarity, transdisciplinarity, and cross-disciplinarity integrate aspects from multiple disciplines, thereby addressing problems that may arise from narrow concentration within specialized fields of study. For example, professionals may encounter trouble communicating across academic disciplines because of differences in jargon, specialized concepts, or methodology. Some researchers believe that academic disciplines may, in the future, be replaced by what is known as Mode 2 or "post-academic science", which involves the acquisition of cross-disciplinary knowledge through the collaboration of specialists from various academic disciplines.
History of the concept The University of Paris in 1231 consisted of four faculties: Theology, Medicine, Canon Law and Arts. Educational institutions originally used the term "discipline" to catalog and archive the new and expanding body of information produced by the scholarly community. Disciplinary designations originated in German universities during the beginning of the nineteenth century. Most academic disciplines have their roots in the mid-to-late-nineteenth century secularization of universities, when the traditional curricula were supplemented with non-classical languages and literatures, social sciences such as political science, economics, sociology and public administration, and natural science and technology disciplines such as physics, chemistry, biology, and engineering. In the early twentieth century, new academic disciplines such as education and psychology were added. In the 1970s and 1980s, there was an explosion of new academic disciplines focusing on specific themes, such as media studies, women's studies, and Africana studies. Many academic disciplines designed as preparation for careers and professions, such as nursing, hospitality management, and corrections, also emerged in the universities. Finally, interdisciplinary scientific fields of study such as biochemistry and geophysics gained prominence as their contribution to knowledge became widely recognized. Some new disciplines, such as public administration, can be found in more than one disciplinary setting; some public administration programs are associated with business schools (thus emphasizing management), while others are linked to political science (emphasizing policy analysis). As the twentieth century approached, these designations were gradually adopted by other countries and became the accepted conventional subjects. However, these designations differed between various countries. 
In the twentieth century, the natural science disciplines included physics, chemistry, biology, geology, and astronomy, while the social science disciplines included economics, politics, sociology, and psychology. Prior to the twentieth century, categories were broad and general, which was expected given the limited interest in science at the time. Most practitioners of science were amateurs and were referred to as "natural historians" and "natural philosophers"—labels that date back to Aristotle—rather than "scientists". Natural history referred to what we now call the life sciences, and natural philosophy to what are now the physical sciences. Prior to the twentieth century, few opportunities existed for science as an occupation outside the educational system. Higher education provided the institutional structure for scientific investigation, as well as economic support for research and teaching. Soon, the volume of scientific information rapidly increased, and researchers realized the importance of concentrating on smaller, narrower fields of scientific activity. Because of this narrowing, scientific specializations emerged. As these specializations developed, modern scientific disciplines in universities also grew in sophistication. Eventually, academia's identified disciplines became the foundations for scholars of specific specialized interests and expertise. Functions and criticism An influential critique of the concept of academic disciplines came from Michel Foucault in his 1975 book, Discipline and Punish.
Foucault asserts that academic disciplines originate from the same social movements and mechanisms of control that established the modern prison and penal system in eighteenth-century France, and that this fact reveals essential aspects they continue to have in common: "The disciplines characterize, classify, specialize; they distribute along a scale, around a norm, hierarchize individuals in relation to one another and, if necessary, disqualify and invalidate." (Foucault, 1975/1979, p. 223) Communities of academic disciplines Communities of academic disciplines can be found outside academia within corporations, government agencies, and independent organizations, where they take the form of associations of professionals with common interests and specific knowledge. Such communities include corporate think tanks, NASA, and IUPAC. Communities such as these exist to benefit the organizations affiliated with them by providing specialized new ideas, research, and findings. Nations at various developmental stages will find the need for different academic disciplines during different times of growth. A newly developing nation will likely prioritize government, political matters and engineering over those of the humanities, arts and social sciences. On the other hand, a well-developed nation may be capable of investing more in the arts and social sciences. Communities of academic disciplines would contribute at varying levels of importance during different stages of development. Interactions These categories explain how the different academic disciplines interact with one another. Multidisciplinary Multidisciplinary (or pluridisciplinary) knowledge is associated with more than one existing academic discipline or profession. A multidisciplinary community or project is made up of people from different academic disciplines and professions. 
One key question is how well the challenge can be decomposed into subparts, and then addressed via the distributed knowledge in the community. The lack of shared vocabulary between people, and the resulting communication overhead, can sometimes be an issue in these communities and projects. If challenges of a particular type need to be repeatedly addressed so that each one can be properly decomposed, a multidisciplinary community can be exceptionally efficient and effective. There are many examples of a particular idea appearing in different academic disciplines, all of which came about around the same time. One example of this scenario is the shift from the approach of focusing on sensory awareness of the whole: "an attention to the 'total field'", a "sense of the whole pattern, of form and function as a unity", an "integral idea of structure and configuration". This has happened in art (in the form of cubism), physics, poetry, communication, and educational theory. According to Marshall McLuhan, this paradigm shift was due to the passage from the era of mechanization, which brought sequentiality, to the era of the instant speed of electricity, which brought simultaneity. Multidisciplinary approaches also encourage people to help shape the innovation of the future. The political dimensions of forming new multidisciplinary partnerships to solve the so-called societal Grand Challenges were presented in the Innovation Union and in the European Framework Programme, the Horizon 2020 operational overlay. Innovation across academic disciplines is considered the pivotal foresight of the creation of new products, systems, and processes for the benefit of all societies' growth and wellbeing. Regional examples such as Biopeople, and industry-academia initiatives in translational medicine such as SHARE.ku.dk in Denmark, provide evidence of the successful endeavour of multidisciplinary innovation and facilitation of the paradigm shift.
Transdisciplinary In practice, transdisciplinarity can be thought of as the union of all interdisciplinary efforts. While interdisciplinary teams may create new knowledge that lies between several existing disciplines, a transdisciplinary team is more holistic and seeks to relate all disciplines into a coherent whole. Cross-disciplinary Cross-disciplinary knowledge is that which explains aspects of one discipline in terms of another. Common examples of cross-disciplinary approaches are studies of the physics of music or the politics of literature. Bibliometric studies of disciplines Bibliometrics can be used to map several issues in relation to disciplines, for example, the flow of ideas within and among disciplines (Lindholm-Romantschuk, 1998) or the existence of specific national traditions within disciplines. The scholarly impact and influence of one discipline on another may be understood by analyzing the flow of citations. The bibliometric approach is described as straightforward because it is based on simple counting. The method is also objective, but the quantitative measure may not be compatible with a qualitative assessment and can therefore be manipulated. The number of citations depends on the number of people working in the same domain rather than on the inherent quality or originality of the published results.
https://en.wikipedia.org/wiki/Sickle
Sickle
A sickle, bagging hook, reaping-hook or grasshook is a single-handed agricultural tool designed with variously curved blades and typically used for harvesting or reaping grain crops, or cutting succulent forage chiefly for feeding livestock. Falx was a synonym, but was later used to mean any of a number of tools that had a curved blade sharp on the inside edge. Since the beginning of the Iron Age, hundreds of region-specific variants of the sickle have evolved, initially of iron and later steel. This great diversity of sickle types across many cultures can be divided into smooth or serrated blades, both of which can be used for cutting either green grass or mature cereals, using slightly different techniques. The serrated blade that originated in prehistoric sickles still dominates in the reaping of grain and is even found in modern grain-harvesting machines and in some kitchen knives. History Pre-Neolithic The development of the sickle in Mesopotamia can be traced back to times that pre-date the Neolithic Era. Large quantities of sickle blades have been excavated in sites surrounding Israel that have been dated to the Epipaleolithic era (18000-8000 BC). Formal digs in Wadi Ziqlab, Jordan, have unearthed various forms of early sickle blades. The artifacts recovered ranged from in length and possessed a jagged edge. This intricate 'tooth-like' design showed a greater degree of design and manufacturing sophistication than most of the other artifacts that were discovered. Sickle blades found from this time were made of flint, were straight, and were used in a sawing motion rather than with the more modern curved design. Flints from these sickles have been discovered near Mt. Carmel, which suggests the harvesting of grains from the area about 10,000 years ago. Neolithic The sickle had a profound impact on the Agricultural Revolution by assisting in the transition to a farming and crop-based lifestyle.
It is now accepted that the use of sickles led directly to the domestication of Near Eastern wild grasses. Research on domestication rates of wild cereals under primitive cultivation found that the use of the sickle in harvesting was critical to the people of early Mesopotamia. The relatively narrow growing season in the area and the critical role of grain in the late Neolithic Era promoted a larger investment in the design and manufacture of sickles than of other tools. The measurements of the sickle were standardized to an extent, so that replacement or repair could be more immediate. It was important that the grain be harvested at the appropriate time at one elevation so that the next elevation could be reaped at the proper time. The sickle provided a more efficient option for collecting the grain and significantly sped up the development of early agriculture. Bronze Age The sickle remained common in the Bronze Age, both in the Ancient Near East and in Europe. Numerous sickles have been found deposited in hoards in the context of the European Urnfield culture (e.g. the Frankleben hoard), suggesting a symbolic or religious significance attached to the artifact. In archaeological terminology, Bronze Age sickles are classified by the method of attaching the handle. For example, the knob-sickle (German Knopfsichel) is so called because of a protruding knob at the base of the blade which apparently served to stabilize the attachment of the blade to the handle. Iron Age The sickle played a prominent role in the Druids' ritual of oak and mistletoe as described in a single passage in Pliny the Elder's Natural History: Due to this passage, despite the fact that Pliny does not indicate the source on which he based this account, some branches of modern Druidry (Neodruids) have adopted the sickle as a ritual tool. Americas Indigenous sickles of unique design have been discovered in southwest North America.
There is evidence that Kodiak islanders had, for cutting grass, "sickles made of a sharpened animal shoulder blade". The artifacts found in present-day Arizona and New Mexico resemble curved tools that were made from the horns of mountain sheep. A similar site yielded sickles made from other materials, such as the Caddo sickle, which was made from a deer mandible. Records from early natives document the use of these sickles in the cutting of grass. The instruments ranged from tip to haft. Several other digs in eastern Arizona uncovered wooden sickles that were shaped in a similar fashion. The handles of the tools help describe how the tool was held in such a way that the inner portion containing the cutting surface could also serve as a gathering surface for the grain. Sickles were sharpened by scraping a sharp beveled edge with a coarse tool. This action has left marks on artifacts that have been found. The sharpening process was necessary to keep the cutting edge from being dulled after extended use. The edge is seen to be quite highly polished, which in part proves that the instrument was used to cut grass. After collection, the grass was used as material to create matting and bedding. The sickle in general provided the convenience of cutting the grass as well as gathering it in one step. In modern times, the sickle is used in South America as a tool to harvest rice. Rice clusters are harvested with the instrument and left to dry in the sun. Nepal Called a Hasiya (or Aasi), the sickle is very common in Nepal, where it is the most important cutting tool used in the kitchen and in the fields. The Hasiya is used in the kitchen in many villages of Nepal, where it is used to cut vegetables during food preparation. The wooden handle of the Hasiya is held pressed under the toe of one's foot with the curve inverted, so vegetables can be cut with two hands while rocking the vegetable against the blade. Outside the home, the Hasiya is used for harvesting. 
Hasiya have traditionally been made by local blacksmiths in their charcoal forges, which use leather bellows to blow air. Sharpening of the Hasiya is done by rubbing the edge against a smooth rock or by taking it back to the blacksmith, and is generally done at the beginning of the harvesting season. A bigger Hasiya, called a Khurpa (or Khoorpa), has a less pronounced curve, is much heavier, and is used to sever leafy branches of trees (for animal feed), chop meat, etc. The famous Nepali Khukuri is also a type of sickle, in which the curve is least pronounced. Carrying around a sharp and naked Hasiya or Khurpa is unsafe, so Nepalis have traditionally built a cover/holder for it called a "Khurpeto" (meaning Khurpa holder in Nepali). It could be a simple piece of wood with a hole big enough to slide the blade of the Hasiya inside, or an intricately carved piece of round wood slung around one's waist with a string made of plants (called "hatteuri"). Nowadays, though, many use cotton, jute or even cloth strings as a replacement for hatteuri, which is not easy to find. Serrated "Simple" or "toothed" sickles The genealogy of sickles with a serrated edge reaches back to the Stone Age, when individual pieces of flint were first attached to a "blade body" of wood or bone. (The majority among the well-documented specimens made later of bronze are smooth-edged.) Nevertheless, teeth have been cut with hand-held chisels into iron, and later steel-bladed, sickles for a long time. In many countries on the African continent, in Central and South America, and in the Near, Middle and Far East, this is still the case in the regions within these large geographies where the traditional village blacksmith remains alive and well. England appears to have been the first to develop the industrial process of serration-making. 
Then, by 1897, the Redtenbacher Company of Scharnstein in Austria (at that time the largest scythe maker in the world) designed its own machine for the job, becoming the only Austrian source of serrated sickles. In 1942, its recently acquired sister company Krenhof also began to produce these. In 1970, a year before the sickle production branch of Redtenbacher was sold to Ethiopia, they were still making 1.5 million serrated sickles per year, predominantly for markets in Africa and Latin America. There were other enterprises in Austria, of course, that produced smooth-edged sickles for centuries. The last of the classical "round" versions were forged until the mid-1980s and machined until 2002. While in Central Europe the smooth-edged sickle, either forged or machined (alternately referred to as "stamped"), has been the only one used (and in many regions the only one known), the Iberian Peninsula, Sicily and Greece long had fans of both camps. The many small family-owned enterprises in what is now Italy, Portugal and Spain produced sickles in both versions, with the teeth on the serrated models being hand-cut, one at a time, until the mid-20th century. The Falci Co. of Italy (established in 1921 as a union of several formerly independent forges) developed its own unique method of industrial-scale serrated sickle production in 1965. Their innovations, which included a tapered blade cross section (thicker at the back for strength, gradually thinning towards the edge for ease of penetration), were later adopted by Europe's largest sickle producer in Spain as well as, more recently, a company in India. Use The inside of the blade's curve is sharp, so that the user can either draw or swing it against the base of the crop, catching the stems in the curve and slicing them at the same time. The material to be cut may be held in a bunch in the other hand (for example when reaping), held in place by a wooden stick, or left free. 
When held in a bunch, the sickle action is typically towards the user (left to right for a right-handed user), but when used free the sickle is usually swung the opposite way. Other colloquial or regional names for principally the same tool are: grasshook, swap hook, rip-hook, slash-hook, reaping hook, brishing hook or bagging hook. A serrated sickle was used for harvesting wheat, the ears being held bunched up in the free hand as described above. After this the straw was cut with a scythe. Oats and barley, on the other hand, were simply scythed. The reason for this is that wheat straw was a valuable crop, used for thatching, and subjecting it to the battering of a flail would have rendered it useless for this purpose, whereas the softer straw of oats and barley was suitable only for bedding or fodder. The blades of sickle models intended primarily for the cutting of grass are sometimes "cranked", meaning they are off-set downwards from the handle, which makes it easier to keep the blade closer to the ground. Sickles used for reaping do not benefit from this feature because cereals are usually not cut as close to the ground surface. Instead, what distinguishes this latter group is their often (though not always) serrated edges. A blade which is used regularly to cut the silica-rich stems of cereal crops acquires a characteristic sickle-gloss, or wear pattern. As a weapon Like other farming tools, the sickle can be used as an improvised bladed weapon. Examples include the Japanese kusarigama and kama, the Chinese chicken sickles, and the makraka of the Zande people of north central Africa. Paulus Hector Mair, the author of a German Renaissance combat manual, also has a chapter about fighting with sickles. Sickle fighting is particularly prevalent in the martial arts of Malaysia, Indonesia and the Philippines. In Indonesia, the native sickle known as celurit or clurit is commonly associated with the Madurese people, used for both fighting and as a domestic tool. 
Other uses The hammer and sickle is a communist symbol representing proletarian solidarity, a union between the peasantry and the urban working class. It was first adopted during the Russian Revolution, the hammer representing the industrial workers and the sickle representing the farmers and peasants. The sickle is an emblem of the Grim Reaper, who is sometimes portrayed carrying a sickle rather than the more traditional scythe. Pliny the Elder reports that golden sickles were used in Druidic rituals. Paulus Hector Mair's Manuscript Dresd. C 93 includes a section regarding the martial application of the sickle. Three (or two) entwined sickles were the heraldic badge of the medieval Hungerford family.
Technology
Agricultural tools
null
650825
https://en.wikipedia.org/wiki/Myoclonus
Myoclonus
Myoclonus is a brief, involuntary, irregular (lacking rhythm) twitching of a muscle, a joint, or a group of muscles, different from clonus, which is rhythmic or regular. Myoclonus (myo- "muscle", clonus "spasm") describes a medical sign and, generally, is not a diagnosis of a disease. It belongs to the hyperkinetic movement disorders, alongside tremor and chorea, for example. These myoclonic twitches, jerks, or seizures are usually caused by sudden muscle contractions (positive myoclonus) or brief lapses of contraction (negative myoclonus). The most common circumstance under which they occur is while falling asleep (hypnic jerk). Myoclonic jerks occur in healthy people and are experienced occasionally by everyone. However, when they appear with more persistence and become more widespread they can be a sign of various neurological disorders. Hiccups are a kind of myoclonic jerk specifically affecting the diaphragm. When a spasm is caused by another person it is known as a provoked spasm. Shuddering attacks in babies fall in this category. Myoclonic jerks may occur alone or in sequence, in a pattern or without pattern. They may occur infrequently or many times each minute. Most often, myoclonus is one of several signs in a wide variety of nervous system disorders such as multiple sclerosis, Parkinson's disease, dystonia, cerebral palsy, Alzheimer's disease, Gaucher's disease, subacute sclerosing panencephalitis, Creutzfeldt–Jakob disease (CJD), serotonin toxicity, some cases of Huntington's disease, some forms of epilepsy, and occasionally in intracranial hypotension. In almost all instances in which myoclonus is caused by central nervous system disease it is preceded by other symptoms; for instance, in CJD it is generally a late-stage clinical feature that appears after the patient has already started to exhibit gross neurological deficits. Anatomically, myoclonus may originate from lesions of the cortex, subcortex or spinal cord. 
The presence of myoclonus above the foramen magnum effectively excludes spinal myoclonus; more precise localisation relies on investigation with electromyography (EMG) and electroencephalography (EEG). Types The most common types of myoclonus include action, cortical reflex, essential, palatal, those seen in the progressive myoclonus epilepsies, reticular reflex, sleep and stimulus-sensitive. Epilepsy forms Cortical reflex myoclonus is thought to be a type of epilepsy that originates in the cerebral cortex – the outer layer, or "gray matter", of the brain, responsible for much of the information processing that takes place in the brain. In this type of myoclonus, jerks usually involve only a few muscles in one part of the body, but jerks involving many muscles may occur. Cortical reflex myoclonus can be intensified when patients attempt to move in a certain way or perceive a particular sensation. Essential myoclonus occurs in the absence of epilepsy or other apparent abnormalities in the brain or nerves. It can occur randomly in people with no family history or among members of the same family, indicating that it sometimes may be an inherited disorder. Essential myoclonus tends to be stable without increasing in severity over time. Some scientists speculate that some forms of essential myoclonus may be a type of epilepsy with no known cause. Juvenile myoclonic epilepsy (JME) usually consists of jerking and muscle twitches of the upper extremities. This may include the arms, shoulders, elbows, and very rarely, the legs. JME is among the most common types of epilepsy and can affect one of every 14 people with the disease. These seizures typically occur shortly after waking up. Onset for JME can be seen around puberty for most patients. Administration of medications that also treat multiple seizure types is usually the most effective form of treatment. 
Lennox–Gastaut syndrome (LGS), or childhood epileptic encephalopathy, is a rare epileptic disorder accounting for 1–4% of childhood epilepsies. The syndrome has much more severe symptoms ranging from multiple seizures daily, learning disabilities, and abnormal findings in electroencephalograms (EEG). Earlier age of seizure onset is correlated with a higher risk of cognitive impairment. Progressive myoclonus epilepsy (PME) is a group of diseases characterized by myoclonus, epileptic seizures, tonic–clonic seizures, and other serious symptoms such as trouble walking or speaking. These rare disorders often get worse over time and can be fatal. Studies have identified at least three forms of PME. Lafora disease is inherited as an autosomal recessive disorder, meaning that the disease occurs only when a child inherits two copies of a defective gene, one from each parent. Lafora disease is characterized by myoclonus, epileptic seizures, and dementia (progressive loss of memory and other intellectual functions). A second group of PME diseases belonging to the class of cerebral storage diseases usually involves myoclonus, visual problems, dementia, and dystonia (sustained muscle contractions that cause twisting movements or abnormal postures). Another group of PME disorders in the class of system degenerations often is accompanied by action myoclonus, seizures, and problems with balance and walking. Many of these PME diseases begin in childhood or adolescence. Treatment is not normally successful for any extended period of time. Reticular reflex myoclonus is thought to be a type of generalized epilepsy that originates in the brainstem, the part of the brain that connects to the spinal cord and controls vital functions such as breathing and heartbeat. Myoclonic jerks usually affect the whole body, with muscles on both sides of the body affected simultaneously. 
In some people, myoclonic jerks occur in only a part of the body, such as the legs, with all the muscles in that part being involved in each jerk. Reticular reflex myoclonus can be triggered by either a voluntary movement or an external stimulus. Diaphragmatic flutter A very rare form is diaphragmatic flutter, also known as Belly Dancer's Syndrome or Van Leeuwenhoek's disease. It was first described by Antonie van Leeuwenhoek in 1723, who had the condition himself. The condition is characterized by spoken communication that sounds like a short-breathed hiccup. These muscle spasms can recur dozens of times per day. The rate of diaphragmatic contraction ranges between 35 and 480 contractions per minute, with the average rate found to be 150. Studies show that possible causes include disruptions within the central or peripheral nervous systems, anxiety, nutritional disorder, and certain pharmaceuticals. No single treatment has proven effective, though blocking or crushing of the phrenic nerve can provide instantaneous relief when pharmacologic treatment has proven ineffective. Only about 50 people in the world have been diagnosed with diaphragmatic flutter. Other forms Action myoclonus is characterized by muscular jerking triggered or intensified by voluntary movement or even the intention to move. It may be made worse by attempts at precise, coordinated movements. Action myoclonus is the most disabling form of myoclonus and can affect the arms, legs, face, and even the voice. It is often associated with tonic-clonic seizures and diffuse neuronal disease such as post-hypoxic encephalopathy, uremia, and the various forms of PME, although, in the case of focal cerebral damage, the disease may be restricted to one limb. This type of myoclonus often is caused by brain damage that results from a lack of oxygen and blood flow to the brain when breathing or heartbeat is temporarily stopped. 
Over-excitement of the sensorimotor cortex (cortical reflex myoclonus) or reticular formation (reticular reflex myoclonus) is also a cause of action myoclonus. Serotonin and GABA neurotransmitters are thought to cause this lack of inhibition, which is a possible explanation as to why improvements are made with the administration of serotonin precursors. Systems involved include the cerebellodentatorubral, pyramidal, extrapyramidal, optic, auditory, posterior columns and gracile and cuneate nuclei, spinocerebellar tracts, motor neurons of cranial nerves and anterior horns, and muscle fibers. Palatal myoclonus is a regular, rhythmic contraction of one or both sides of the rear of the roof of the mouth, called the soft palate. These contractions may be accompanied by myoclonus in other muscles, including those in the face, tongue, throat, and diaphragm. The contractions are very rapid, occurring as often as 150 times a minute, and may persist during sleep. The condition usually appears in adults and can last indefinitely. People with palatal myoclonus usually regard it as a minor problem; some complain of an occasional "clicking" sound, a noise made as the soft palate muscles contract. Middle ear myoclonus occurs in the muscles of the middle ear. These muscles may include the tensor tympani and stapedius muscles. It can involve the muscles surrounding the Eustachian tube, which include the tensor veli palatini, levator veli palatini, and salpingopharyngeus. Those affected describe it as a thumping sound or sensation in the ear. Spinal myoclonus is myoclonus originating in the spinal cord, including segmental and propriospinal myoclonus. The latter is usually due to a thoracic generator producing truncal flexion jerk. It is often stimulus-induced with a delay due to the slow conducting propriospinal nerve fibers. Stimulus-sensitive myoclonus is triggered by a variety of external events, including noise, movement, and light. 
Surprise may increase the sensitivity of the patient. Sleep myoclonus occurs during the initial phases of sleep, especially at the moment of dropping off to sleep, and includes familiar examples of myoclonus such as the hypnic jerk. Some forms appear to be stimulus-sensitive. Some people with sleep myoclonus are rarely troubled by it or in need of treatment. If it is a symptom of more complex and disturbing sleep disorders, such as restless legs syndrome, it may require medical treatment. Myoclonus can also occur in patients with Tourette syndrome. Signs and symptoms Myoclonic seizure can be described as "jumps" or "jolts" experienced in a single extremity or even the entire body. The feeling experienced by the individual is described as uncontrollable jolts similar to receiving a mild electric shock. The sudden jerks and twitching of the body can often be so severe that it can cause a small child to fall. A myoclonic seizure (myo "muscle", clonic "jerk") is a sudden involuntary contraction of muscle groups. The muscle jerks consist of symmetric, mostly generalized jerks, localized in the arms and in the shoulders and also simultaneously with a head nod; both the arms may fling out together and simultaneously a head nod may occur. Symptoms have some variability amongst subjects. Sometimes the entire body may jerk, just like a startle response. As is the case with all generalised seizures, the patient is not conscious during the event but the seizure is so brief that the person appears to remain fully conscious. In reflex epilepsies, myoclonic seizures can be brought on by flashing lights or other environmental triggers (see photosensitive epilepsy). Familiar examples of normal myoclonus include hiccups and hypnic jerks that some people experience while drifting off to sleep. Severe cases of pathologic myoclonus can distort movement and severely limit a person's ability to sleep, eat, talk, and walk. Myoclonic jerks commonly occur in individuals with epilepsy. 
Cause Myoclonus in healthy individuals may indicate nothing other than arbitrary muscle contraction. Myoclonus may also develop in response to infection, hyperosmolar hyperglycemic state, head or spinal cord injury, stroke, stress, brain tumors, kidney or liver failure, lipid storage disease, chemical or drug poisoning, as a side effect of certain drugs (such as tramadol, quinolones, benzodiazepine, gabapentin, sertraline, lamotrigine, opioids), or other disorders. Benign myoclonic movements are commonly seen during the induction of general anesthesia with intravenous medications such as etomidate and propofol. These are postulated to result from decreased inhibitory signaling from cranial neurons. Prolonged oxygen deprivation to the brain, hypoxia, may result in posthypoxic myoclonus. People with benign fasciculation syndrome can often experience myoclonic jerking of limbs, fingers and thumbs. Myoclonus can occur by itself, but most often as one of several symptoms associated with a variety of nervous system disorders, including multiple sclerosis, Parkinson's disease, Alzheimer's disease, opsoclonus myoclonus, Creutzfeldt–Jakob disease, Lyme disease and lupus. Myoclonic jerks commonly occur in persons with epilepsy, a disorder in which the electrical activity in the brain becomes disordered leading to seizures. It is also found in MERRF (Myoclonic Epilepsy with Ragged Red Fibers), a rare mitochondrial encephalomyopathy. Jerks of muscle groups, much of the body, or a series in rapid succession, which results in the person jerking bolt upright from a more relaxed sitting position is sometimes seen in ambulatory patients being treated with high doses of morphine, hydromorphone, and similar drugs, and is possibly a sign of high and/or rapidly increasing serum levels of these drugs. Myoclonic jerks caused by other opioids, such as tramadol and pethidine, may be less benign. Medications unrelated to opioids, such as anticholinergics, are known to cause myoclonic jerks. 
Pathophysiology Most myoclonus is caused by a disturbance of the central nervous system. Some are from peripheral nervous system injury. Studies suggest several locations in the brain are involved in myoclonus. One is in the brainstem, close to structures that are responsible for the startle response, an automatic reaction to an unexpected stimulus involving rapid muscle contraction. The specific mechanisms underlying myoclonus are not yet fully understood. Scientists believe that some types of stimulus-sensitive myoclonus may involve overexcitability of the parts of the brain that control movement. These parts are interconnected in a series of feedback loops called motor pathways. These pathways facilitate and modulate communication between the brain and muscles. Key elements of this communication are chemicals known as neurotransmitters, which carry messages from one nerve cell, or neuron, to another. Neurotransmitters are released by neurons and attach themselves to receptors on parts of neighboring cells. Some neurotransmitters may make the receiving cell more sensitive, while others tend to make the receiving cell less sensitive. Laboratory studies suggest that an imbalance between these chemicals may underlie myoclonus. Some researchers speculate that abnormalities or deficiencies in the receptors for certain neurotransmitters may contribute to some forms of myoclonus. Receptors that appear to be related to myoclonus include those for two important inhibitory neurotransmitters: serotonin, which constricts blood vessels and brings on sleep, and gamma-aminobutyric acid (GABA), which helps the brain maintain muscle control. Other receptors with links to myoclonus include those for glycine, an inhibitory neurotransmitter that is important for the control of motor and sensory functions in the spinal cord, and those affected by benzodiazepines, a variety of medication that usually induces sleep. 
More research is needed to determine how these receptor abnormalities cause or contribute to myoclonus. Treatment Concerning more serious conditions, the complex origins of myoclonus may be treated with multiple drugs, which have a limited effect individually, but greater when combined with others that act on different brain pathways or mechanisms. Treatment is most effective when the underlying cause is known, and can be treated as such. Some drugs being studied in different combinations include clonazepam, sodium valproate, piracetam, and primidone. Hormonal therapy may improve responses to antimyoclonic drugs in some people. Some studies have shown that doses of 5-hydroxytryptophan (5-HTP) lead to improvement in patients with some types of action myoclonus and PME. These differences in the effect of 5-HTP on patients with myoclonus have not yet been explained. Many of the drugs used for myoclonus, such as barbiturates, phenytoin and primidone, are also used to treat epilepsy. Barbiturates slow down the central nervous system and cause tranquilizing or antiseizure effects. Phenytoin and primidone are effective antiepileptic drugs, although phenytoin can cause liver failure or have other harmful long-term effects in patients with PME. Sodium valproate is an alternative therapy for myoclonus and can be used either alone or in combination with clonazepam. Some people have adverse reactions to clonazepam and/or sodium valproate. When patients are taking multiple medications, the discontinuation of drugs suspected of causing myoclonus and treatment of metabolic derangements may resolve some cases of myoclonus. When pharmacological treatment is indicated, anticonvulsants are the main line of treatment. Paradoxical reactions to treatment are notable. Drugs which most people respond to may in other individuals worsen their symptoms. Sometimes this leads to the mistake of increasing the dose, rather than decreasing or stopping the drug. 
Treatment of myoclonus focuses on medications that may help reduce symptoms. Drugs used include sodium valproate, clonazepam, the anticonvulsant levetiracetam, and piracetam. Dosages of clonazepam usually are increased gradually until the patient improves or side effects become harmful. Drowsiness and loss of coordination are common side effects. The beneficial effects of clonazepam may diminish over time if the patient develops a tolerance to the drug. In forms of myoclonus where only a single area is affected, and even in a few other various forms, Botox injections (OnabotulinumtoxinA) may be helpful. Botulinum toxin blocks the chemical messenger responsible for triggering the involuntary muscle contractions. Surgery is also a viable option for treatment if the symptoms are caused by a tumor or lesion in the brain or spinal cord. Surgery may also correct symptoms in those where myoclonus affects parts of the face or ear. Deep brain stimulation (DBS) has also been tried in those with myoclonus and other movement disorders, although it is still being studied for this use. Prognosis The effects of myoclonus in an individual can vary depending on the form and the overall health of the individual. In severe cases, particularly those indicating an underlying disorder in the brain or nerves, movement can be extremely distorted and limit ability to normally function, such as in eating, talking, and walking. In these cases, treatment that is usually effective, such as clonazepam and sodium valproate, may instead cause adverse reaction to the drug, including increased tolerance and a greater need for increase in dosage. However, the prognosis for more simple forms of myoclonus in otherwise healthy individuals may be neutral, as the disease may cause few to no difficulties. Other times the disease starts simply, in one region of the body, and then spreads. 
Research Research on myoclonus is supported through the National Institute of Neurological Disorders and Stroke (NINDS). The primary focus of research is on the role of neurotransmitters and receptors involved in the disease. Identifying whether or not abnormalities in these pathways cause myoclonus may help in efforts to develop drug treatments and diagnostic tests. Determining the extent that genetics play in these abnormalities may lead to potential treatments for their reversal, potentially correcting the loss of inhibition while enhancing mechanisms in the body that would compensate for their effects. Etymology The word myoclonus uses combining forms of myo- and clonus, indicating muscle contraction dysfunction. The prevalence of the pronunciation variants shows division between American English and British English. The variant stressing the -oc- syllable is the only pronunciation given in a half dozen major American dictionaries (medical and general). The variant stressing the -clo- syllable is given in the British English module of Oxford Dictionaries online but not in the American English module.
Biology and health sciences
Symptoms and signs
Health
650918
https://en.wikipedia.org/wiki/Electrophilic%20substitution
Electrophilic substitution
Electrophilic substitution reactions are chemical reactions in which an electrophile displaces a functional group in a compound, which is typically, but not always, aromatic. Aromatic substitution reactions are characteristic of aromatic compounds and are common ways of introducing functional groups into benzene rings. Some aliphatic compounds can undergo electrophilic substitution as well. Electrophilic aromatic substitution In electrophilic substitution in aromatic compounds, an atom appended to the aromatic ring, usually hydrogen, is replaced by an electrophile. The most important reactions of this type are aromatic nitration, aromatic halogenation, aromatic sulfonation, and the Friedel–Crafts reactions of acylation and alkylation. Electrophilic aliphatic substitution In electrophilic substitution in aliphatic compounds, an electrophile displaces a functional group. This reaction is similar to nucleophilic aliphatic substitution, where the reactant is a nucleophile rather than an electrophile. The four possible electrophilic aliphatic substitution reaction mechanisms are SE1, SE2(front), SE2(back) and SEi (Substitution Electrophilic), which are analogous to the nucleophilic counterparts SN1 and SN2. In the SE1 course of action the substrate first ionizes into a carbanion and a positively charged organic residue. The carbanion then quickly recombines with the electrophile. The SE2 reaction mechanism has a single transition state in which the old bond and the newly formed bond are both present. Electrophilic aliphatic substitution reactions are: Nitrosation Ketone halogenation Keto-enol tautomerism Aliphatic diazonium coupling Carbene insertion into C-H bonds Carbonyl alpha-substitution reactions
Physical sciences
Organic reactions
Chemistry
650995
https://en.wikipedia.org/wiki/Reactivity%20series
Reactivity series
In chemistry, a reactivity series (or reactivity series of elements) is an empirical progression of a series of metals, arranged by their "reactivity" from highest to lowest. It is used to summarize information about the reactions of metals with acids and water, single displacement reactions and the extraction of metals from their ores. Table Going from the bottom to the top of the table the metals: increase in reactivity; lose electrons (oxidize) more readily to form positive ions; corrode or tarnish more readily; require more energy (and different methods) to be isolated from their compounds; become stronger reducing agents (electron donors). Defining reactions There is no unique and fully consistent way to define the reactivity series, but it is common to use the three types of reaction listed below, many of which can be performed in a high-school laboratory (at least as demonstrations). Reaction with water and acids The most reactive metals, such as sodium, will react with cold water to produce hydrogen and the metal hydroxide: 2 Na (s) + 2 H2O (l) → 2 NaOH (aq) + H2 (g) Metals in the middle of the reactivity series, such as iron, will react with acids such as sulfuric acid (but not water at normal temperatures) to give hydrogen and a metal salt, such as iron(II) sulfate: Fe (s) + H2SO4 (aq) → FeSO4 (aq) + H2 (g) There is some ambiguity at the borderlines between the groups. Magnesium, aluminium and zinc can react with water, but the reaction is usually very slow unless the metal samples are specially prepared to remove the surface passivation layer of oxide which protects the rest of the metal. Copper and silver will react with nitric acid; but because nitric acid is an oxidizing acid, the oxidizing agent is not the H+ ion as in normal acids, but the NO3− ion. 
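The qualitative use of the series can be sketched in code. The sketch below uses the common textbook ordering of metals (most reactive first) to predict whether a free metal will displace another metal from a solution of its salt; the list and function names are illustrative, not a standard library.

```python
# A sketch of using a reactivity series to predict single displacement
# reactions. The ordering is the common textbook series, most reactive
# first; the names here are illustrative.

REACTIVITY_SERIES = [
    "K", "Na", "Ca", "Mg", "Al", "Zn", "Fe", "Pb", "H", "Cu", "Ag", "Au",
]

def more_reactive(metal_a: str, metal_b: str) -> bool:
    """True if metal_a sits above metal_b in the series."""
    return REACTIVITY_SERIES.index(metal_a) < REACTIVITY_SERIES.index(metal_b)

def displaces(free_metal: str, metal_in_salt: str) -> bool:
    """A free metal displaces a less reactive metal from its salt."""
    return more_reactive(free_metal, metal_in_salt)

print(displaces("Zn", "Cu"))  # Zn + CuSO4 -> ZnSO4 + Cu : True
print(displaces("Cu", "Fe"))  # Cu cannot displace Fe : False
```

Because hydrogen is included in the ordering, the same lookup also indicates which metals can liberate hydrogen from dilute acids (those above "H").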
Comparison with standard electrode potentials The reactivity series is sometimes quoted in the strict reverse order of standard electrode potentials, when it is also known as the "electrochemical series". The following list includes the metallic elements of the first six periods. It is mostly based on tables provided by NIST. However, not all sources give the same values: there are some differences between the precise values given by NIST and the CRC Handbook of Chemistry and Physics. In the first six periods this does not make a difference to the relative order, but in the seventh period it does, so the seventh-period elements have been excluded. (In any case, the typical oxidation states for the most accessible seventh-period elements thorium and uranium are too high to allow a direct comparison.) Hydrogen has been included as a benchmark, although it is not a metal. Borderline germanium, antimony, and astatine have been included. Some other elements in the middle of the 4d and 5d rows have been omitted (Zr–Tc, Hf–Os) when their simple cations are too highly charged or of rather doubtful existence. Greyed-out rows indicate values based on estimation rather than experiment. The positions of lithium and sodium are changed on such a series. Standard electrode potentials offer a quantitative measure of the power of a reducing agent, rather than the qualitative considerations of other reactivity series. However, they are only valid for standard conditions: in particular, they only apply to reactions in aqueous solution. Even with this proviso, the electrode potentials of lithium and sodium – and hence their positions in the electrochemical series – appear anomalous. The order of reactivity, as shown by the vigour of the reaction with water or the speed at which the metal surface tarnishes in air, appears to be Cs > K > Na > Li > alkaline earth metals, i.e., alkali metals > alkaline earth metals, the same as the reverse order of the (gas-phase) ionization energies.
This is borne out by the extraction of metallic lithium by the electrolysis of a eutectic mixture of lithium chloride and potassium chloride: lithium metal is formed at the cathode, not potassium. Comparison with electronegativity values The image shows a periodic table extract with the electronegativity values of metals. Wulfsberg distinguishes: very electropositive metals with electronegativity values below 1.4 electropositive metals with values between 1.4 and 1.9; and electronegative metals with values between 1.9 and 2.54. From the image, the group 1–2 metals and the lanthanides and actinides are very electropositive to electropositive; the transition metals in groups 3 to 12 are very electropositive to electronegative; and the post-transition metals are electropositive to electronegative. The noble metals, inside the dashed border (as a subset of the transition metals) are very electronegative.
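The electrochemical ordering discussed in this section can be reproduced by sorting tabulated standard electrode potentials. A minimal sketch, using rounded textbook values (the precise figures vary by source, as noted above):

```python
# Standard electrode potentials E° in volts vs. SHE, for the
# half-reaction M^n+ + n e- -> M(s). Rounded textbook values.
E_STANDARD = {
    "Li": -3.040, "K": -2.931, "Ca": -2.868, "Na": -2.71,
    "Mg": -2.372, "Zn": -0.762, "Fe": -0.447, "H": 0.0,
    "Cu": +0.342, "Ag": +0.800,
}

# Sorting from most negative to most positive E° yields the
# electrochemical series (strongest reducing agent first).
electrochemical_series = sorted(E_STANDARD, key=E_STANDARD.get)
print(electrochemical_series)
```

Note that lithium heads the sorted list even though potassium is the more vigorous reducer in practice; this is exactly the lithium/sodium anomaly described above.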
Physical sciences
Electrochemistry
Chemistry
651311
https://en.wikipedia.org/wiki/Subdwarf
Subdwarf
A subdwarf, sometimes denoted by "sd", is a star with luminosity class VI under the Yerkes spectral classification system. They are defined as stars with luminosity 1.5 to 2 magnitudes lower than that of main-sequence stars of the same spectral type. On a Hertzsprung–Russell diagram subdwarfs appear to lie below the main sequence. The term "subdwarf" was coined by Gerard Kuiper in 1939, to refer to a series of stars with anomalous spectra that were previously labeled as "intermediate white dwarfs". Since Kuiper coined the term, the subdwarf type has been extended to lower-mass stars than were known at the time. Astronomers have also discovered an entirely different group of blue-white subdwarfs, making two distinct categories: Cool subdwarfs Hot subdwarfs Cool (red) subdwarfs Like ordinary main-sequence stars, cool subdwarfs (of spectral types G to M) produce their energy from hydrogen fusion. The explanation of their underluminosity lies in their low metallicity: these stars are not enriched in elements heavier than helium. The lower metallicity decreases the opacity of their outer layers and decreases the radiation pressure, resulting in a smaller, hotter star for a given mass. This lower opacity also allows them to emit a higher percentage of ultraviolet light for the same spectral type relative to a Population I star, a feature known as the ultraviolet excess. Usually members of the Milky Way's halo, they frequently have high space velocities relative to the Sun. Cool subdwarfs of spectral types L and T exist, for example ULAS J131610.28+075553.0 with spectral type sdT6.5. Subclasses of cool subdwarfs are as follows: cool subdwarf Examples: Kapteyn's Star (sdM1), GJ 1062 (sdM2.5) extreme subdwarf Example: APMPM J0559-2903 (esdM7) ultrasubdwarf Example: LSPM J0822+1700 (usdM7.5) Subdwarfs of type L, T and Y The low metallicity of subdwarfs is coupled with their old age.
The early universe had a low content of elements heavier than helium and formed stars and brown dwarfs with lower metallicity. Only later supernovae, planetary nebulae and neutron star mergers enriched the universe with heavier elements. The old subdwarfs therefore often belong to the older structures in our Milky Way, mainly the thick disk and the galactic halo. Objects in the thick disk or the halo have a high space velocity compared to the Sun, which belongs to the younger thin disk. A high proper motion can be used to discover subdwarfs. Additionally, subdwarfs have spectral features that distinguish them from dwarfs of solar metallicity. All subdwarfs share the suppression of the near-infrared spectrum, mainly the H-band and K-band. The low metallicity increases the collision-induced absorption of hydrogen, causing this suppressed near-infrared spectrum. This is seen as blue infrared colors compared to brown dwarfs with solar metallicity. The low metallicity also changes other absorption features, such as deeper CaH and TiO bands at 0.7 μm in L-subdwarfs, a weaker VO band at 0.8 μm in early L-subdwarfs and a stronger FeH band at 0.99 μm for mid- to late L-subdwarfs. 2MASS J0532+8246 was discovered in 2003 as the first L-type subdwarf, and was later re-classified as an extreme subdwarf. The L-type subdwarfs have subtypes similar to M-type subdwarfs: subdwarf (sd), extreme subdwarf (esd) and ultra subdwarf (usd), which are defined by progressively lower metallicity relative to solar metallicity on a logarithmic scale; the Sun sets the zero point of that scale by definition. For T-type subdwarfs only a small sample of subdwarfs and extreme subdwarfs is known. 2MASSI J0937347+293142 is the first object that was discovered, in 2002, as a T-type subdwarf candidate, and in 2006 it was confirmed to have low metallicity.
The first two extreme subdwarfs of type T, WISEA 0414−5854 and WISEA 1810−1010, were discovered in 2020 by scientists and volunteers of the Backyard Worlds project. Subdwarfs of type T and Y have less methane in their atmospheres, due to the lower concentration of carbon in these subdwarfs. This leads to a bluer W1−W2 (WISE) or ch1−ch2 (Spitzer) color, compared to objects with similar temperature but solar metallicity. The color of T-types as a single classification criterion can be misleading. The closest directly imaged exoplanet, COCONUTS-2b, was first classified as a subdwarf of type T due to its color, while not showing a high tangential velocity. Only in 2021 was it identified as an exoplanet. The first Y-type subdwarf candidate was discovered in 2021: the brown dwarf WISE 1534–1043, which shows a moderately red Spitzer Space Telescope color (ch1−ch2 = 0.925±0.039 mag). The very red color between J and ch2 (J−ch2 > 8.03 mag) and the absolute brightness would suggest a much redder ch1−ch2 color of about 2.4 to 3 mag. Due to the agreement with new subdwarf models, together with the high tangential velocity of 200 km/s, Kirkpatrick, Marocco et al. (2021) argue that the most likely explanation is a cold, very metal-poor brown dwarf, maybe the first subdwarf of type Y. Binaries can help to determine the age and mass of these subdwarfs. The subdwarf VVV 1256−62B (sdL3) was discovered as a companion to a halo white dwarf, allowing the age to be measured at 8.4 to 13.8 billion years. It has a mass of 84 to 87 Jupiter masses, making VVV 1256−62B likely a red dwarf star. The subdwarf Wolf 1130C (sdT8) is the companion of an old subdwarf-white dwarf binary, which is estimated to be older than 10 billion years. It has a mass of 44.9 Jupiter masses, making it a brown dwarf. Examples of cool subdwarfs Kapteyn's Star Groombridge 1830 Mu Cassiopeiae 2MASS J05325346+8246465, a possible halo brown dwarf and the first substellar subdwarf.
SSSPM J1549-3544 Hot (blue) subdwarfs Hot subdwarfs, of bluish spectral types O and B are an entirely different class of object than cool subdwarfs; they are also called "extreme horizontal-branch stars". Hot subdwarf stars represent a late stage in the evolution of some stars, caused when a red giant star loses its outer hydrogen layers before the core begins to fuse helium. The reasons for their premature loss of their hydrogen envelope are unclear, but the interaction of stars in a binary star system is thought to be one of the main mechanisms. Single subdwarfs may be the result of a merger of two white dwarfs or gravitational influence from substellar companions. B-type subdwarfs, being more luminous than white dwarfs, are a significant component in the hot star population of old stellar systems, such as globular clusters and elliptical galaxies. Heavy metal subdwarfs The heavy metal subdwarfs are a type of hot subdwarf star with high concentrations of heavy metals. The metals detected include germanium, strontium, yttrium, zirconium and lead. Known heavy metal subdwarfs include HE 2359-2844, LS IV-14 116, and HE 1256-2738.
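The cool-subdwarf prefix scheme described above (sd, esd, usd, ahead of the usual letter-plus-subtype notation) is regular enough to parse mechanically. A small sketch; the function and regex are illustrative, not a standard astronomy library API.

```python
import re

# Map the metallicity-class prefixes to the names used in the text.
PREFIXES = {"usd": "ultrasubdwarf", "esd": "extreme subdwarf", "sd": "subdwarf"}

def parse_subdwarf_type(spectral_type: str):
    """Split e.g. 'usdM7.5' into (class name, spectral letter, subtype)."""
    m = re.fullmatch(r"(usd|esd|sd)([OBAFGKMLTY])(\d+(?:\.\d+)?)", spectral_type)
    if m is None:
        raise ValueError(f"not a subdwarf spectral type: {spectral_type!r}")
    prefix, letter, subtype = m.groups()
    return PREFIXES[prefix], letter, float(subtype)

print(parse_subdwarf_type("sdM1"))     # Kapteyn's Star
print(parse_subdwarf_type("esdM7"))    # APMPM J0559-2903
print(parse_subdwarf_type("usdM7.5"))  # LSPM J0822+1700
print(parse_subdwarf_type("sdT6.5"))   # ULAS J131610.28+075553.0
```

Note that `usd` and `esd` must be tried before `sd` in the alternation, since `sd` is a prefix of both longer codes.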
Physical sciences
Stellar astronomy
Astronomy
651372
https://en.wikipedia.org/wiki/Evaporative%20cooler
Evaporative cooler
An evaporative cooler (also known as evaporative air conditioner, swamp cooler, swamp box, desert cooler and wet air cooler) is a device that cools air through the evaporation of water. Evaporative cooling differs from other air conditioning systems, which use vapor-compression or absorption refrigeration cycles. Evaporative cooling exploits the fact that water will absorb a relatively large amount of heat in order to evaporate (that is, it has a large enthalpy of vaporization). The temperature of dry air can be dropped significantly through the phase transition of liquid water to water vapor (evaporation). This can cool air using much less energy than refrigeration. In extremely dry climates, evaporative cooling of air has the added benefit of conditioning the air with more moisture for the comfort of building occupants. The cooling potential for evaporative cooling is dependent on the wet-bulb depression, the difference between dry-bulb temperature and wet-bulb temperature (see relative humidity). In arid climates, evaporative cooling can reduce energy consumption and total equipment for conditioning as an alternative to compressor-based cooling. In climates not considered arid, indirect evaporative cooling can still take advantage of the evaporative cooling process without increasing humidity. Passive evaporative cooling strategies can offer the same benefits as mechanical evaporative cooling systems without the complexity of equipment and ductwork. History An earlier form of evaporative cooling, the windcatcher, was first used in ancient Egypt and Persia thousands of years ago in the form of wind shafts on the roof. They caught the wind, passed it over subterranean water in a qanat and discharged the cooled air into the building. Modern Iranians have widely adopted powered evaporative coolers.
The evaporative cooler was the subject of numerous US patents in the 20th century; many of these, starting in 1906, suggested or assumed the use of excelsior (wood wool) pads as the elements to bring a large volume of water in contact with moving air to allow evaporation to occur. A typical design, as shown in a 1945 patent, includes a water reservoir (usually with level controlled by a float valve), a pump to circulate water over the excelsior pads and a centrifugal fan to draw air through the pads and into the house. This design and this material remain dominant in evaporative coolers in the American Southwest, where they are also used to increase humidity. In the United States, the use of the term swamp cooler may be due to the odor of algae produced by early units. Externally mounted evaporative cooling devices (car coolers) were used in some automobiles to cool interior air—often as aftermarket accessories—until modern vapor-compression air conditioning became widely available. Passive evaporative cooling techniques in buildings have been a feature of desert architecture for centuries, but Western acceptance, study, innovation, and commercial application are all relatively recent. In 1974, William H. Goettl noticed how evaporative cooling technology works in arid climates, speculated that a combination unit could be more effective, and invented the "High Efficiency Astro Air Piggyback System", a combination refrigeration and evaporative cooling air conditioner. In 1986, University of Arizona researchers built a passive evaporative cooling tower, and performance data from this experimental facility in Tucson, Arizona became the foundation of evaporative cooling tower design guidelines. Physical principles Evaporative coolers lower the temperature of air using the principle of evaporative cooling, unlike typical air conditioning systems which use vapor-compression refrigeration or absorption refrigeration. 
Evaporative cooling is the conversion of liquid water into vapor using the thermal energy in the air, resulting in a lower air temperature. The energy needed to evaporate the water is taken from the air in the form of sensible heat, which affects the temperature of the air, and converted into latent heat, the energy present in the water vapor component of the air, whilst the air remains at a constant enthalpy value. This conversion of sensible heat to latent heat is known as an isenthalpic process because it occurs at a constant enthalpy value. Evaporative cooling therefore causes a drop in the temperature of air proportional to the sensible heat drop and an increase in humidity proportional to the latent heat gain. Evaporative cooling can be visualized using a psychrometric chart by finding the initial air condition and moving along a line of constant enthalpy toward a state of higher humidity. A simple example of natural evaporative cooling is perspiration, or sweat, secreted by the body, evaporation of which cools the body. The amount of heat transfer depends on the evaporation rate, however for each kilogram of water vaporized 2,257 kJ of energy (about 890 BTU per pound of pure water, at 95 °F (35 °C)) are transferred. The evaporation rate depends on the temperature and humidity of the air, which is why sweat accumulates more on humid days, as it does not evaporate fast enough. Vapor-compression refrigeration uses evaporative cooling, but the evaporated vapor is within a sealed system, and is then compressed ready to evaporate again, using energy to do so. A simple evaporative cooler's water is evaporated into the environment, and not recovered. In an interior space cooling unit, the evaporated water is introduced into the space along with the now-cooled air; in an evaporative tower the evaporated water is carried off in the airflow exhaust. 
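The latent-heat figure quoted above (2,257 kJ per kilogram of water vaporized) makes it easy to estimate the heat an evaporative cooler removes from its water consumption. A rough sketch; the function name and example rate are illustrative.

```python
# Latent heat of vaporization of water, kJ/kg (value quoted in the text,
# about 890 BTU per pound at 95 F / 35 C).
H_FG = 2257.0

def cooling_power_kw(litres_per_hour: float) -> float:
    """Heat removal in kW for a given evaporation rate (1 L of water ~ 1 kg)."""
    kg_per_second = litres_per_hour / 3600.0
    return kg_per_second * H_FG  # kJ/s is the same as kW

# Evaporating 10 L of water per hour removes about 6.3 kW of heat:
print(round(cooling_power_kw(10.0), 1))
```

The same arithmetic run in reverse gives the water a cooler must evaporate to meet a given cooling load, which is why water consumption figures are often quoted per ton of cooling.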
Other types of phase-change cooling A closely related process, sublimation cooling, differs from evaporative cooling in that a phase transition from solid to vapor, rather than liquid to vapor, occurs. Sublimation cooling has been observed to operate on a planetary scale on the planetoid Pluto, where it has been called an anti-greenhouse effect. Another application of a phase change to cooling is the "self-refrigerating" beverage can. A separate compartment inside the can contains a desiccant and a liquid. Just before drinking, a tab is pulled so that the desiccant comes into contact with the liquid and dissolves. As it does so, it absorbs an amount of heat energy called the latent heat of fusion. Evaporative cooling works with the phase change of liquid into vapor and the latent heat of vaporization, but the self-cooling can uses a change from solid to liquid, and the latent heat of fusion, to achieve the same result. Applications Before the advent of modern refrigeration, evaporative cooling was used for millennia, for instance in qanats, windcatchers, and mashrabiyas. A porous earthenware vessel would cool water by evaporation through its walls; frescoes from about 2500 BCE show slaves fanning jars of water to cool rooms. Alternatively, a bowl filled with milk or butter could be placed in another bowl filled with water, all being covered with a wet cloth resting in the water, to keep the milk or butter as fresh as possible (see zeer, botijo and Coolgardie safe). Evaporative cooling is a common form of cooling buildings for thermal comfort since it is relatively cheap and requires less energy than other forms of cooling. The figure showing the Salt Lake City weather data represents the typical summer climate (June to September). The colored lines illustrate the potential of direct and indirect evaporative cooling strategies to expand the comfort range in summer time. 
It is mainly explained by the combination of a higher air speed on one hand and elevated indoor humidity when the region permits the direct evaporative cooling strategy on the other hand. Evaporative cooling strategies that involve the humidification of the air should be implemented in dry condition where the increase in moisture content stays below recommendations for occupant's comfort and indoor air quality. Passive cooling towers lack the control that traditional HVAC systems offer to occupants. However, the additional air movement provided into the space can improve occupant comfort. Evaporative cooling is most effective when the relative humidity is on the low side, limiting its popularity to dry climates. Evaporative cooling raises the internal humidity level significantly, which desert inhabitants may appreciate as the moist air re-hydrates dry skin and sinuses. Therefore, assessing typical climate data is an essential procedure to determine the potential of evaporative cooling strategies for a building. The three most important climate considerations are dry-bulb temperature, wet-bulb temperature, and wet-bulb depression during a typical summer day. It is important to determine if the wet-bulb depression can provide sufficient cooling during the summer day. By subtracting the wet-bulb depression from the outside dry-bulb temperature, one can estimate the approximate air temperature leaving the evaporative cooler. It is important to consider that the ability for the exterior dry-bulb temperature to reach the wet-bulb temperature depends on the saturation efficiency. A general recommendation for applying direct evaporative cooling is to implement it in places where the wet-bulb temperature of the outdoor air does not exceed . However, in the example of Salt Lake City, the upper limit for the direct evaporative cooling on psychrometric chart is . Despite the lower temperature, evaporative cooling is suitable for similar climates to Salt Lake City. 
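The estimate described above, where the supply air approaches the wet-bulb temperature to a degree set by the cooler's saturation efficiency, can be written as a one-line formula: T_supply = T_db − ε·(T_db − T_wb). A minimal sketch with illustrative numbers; the 0.85 default efficiency is an assumption typical of good pads, not a value from the text.

```python
def supply_air_temp(t_db: float, t_wb: float, sat_eff: float = 0.85) -> float:
    """Estimate direct evaporative cooler leaving-air temperature (deg C).

    t_db: outdoor dry-bulb temperature
    t_wb: outdoor wet-bulb temperature
    sat_eff: saturation efficiency (assumed ~0.7-0.9 for typical pads)
    """
    wet_bulb_depression = t_db - t_wb
    return t_db - sat_eff * wet_bulb_depression

# Hot, dry afternoon: large wet-bulb depression, strong cooling
print(supply_air_temp(35.0, 20.0))   # about 22.3
# Humid day: small depression, far less cooling
print(supply_air_temp(32.0, 28.0))   # about 28.6
```

The two calls illustrate why wet-bulb depression, not dry-bulb temperature alone, governs the usefulness of evaporative cooling in a given climate.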
Evaporative cooling is especially well suited for climates where the air is hot and humidity is low. In the United States, the western and mountain states are good locations, with evaporative coolers prevalent in cities like Albuquerque, Denver, El Paso, Fresno, Salt Lake City, and Tucson. Evaporative air conditioning is also popular and well-suited to the southern (temperate) part of Australia. In dry, arid climates, the installation and operating cost of an evaporative cooler can be much lower than that of refrigerative air conditioning, often by 80% or so. However, evaporative cooling and vapor-compression air conditioning are sometimes used in combination to yield optimal cooling results. Some evaporative coolers may also serve as humidifiers in the heating season. In regions that are mostly arid, short periods of high humidity may prevent evaporative cooling from being an effective cooling strategy. An example of this event is the monsoon season in New Mexico and central and southern Arizona in July and August. In locations with moderate humidity there are many cost-effective uses for evaporative cooling, in addition to their widespread use in dry climates. For example, industrial plants, commercial kitchens, laundries, dry cleaners, greenhouses, spot cooling (loading docks, warehouses, factories, construction sites, athletic events, workshops, garages, and kennels) and confinement farming (poultry ranches, hog, and dairy) often employ evaporative cooling. In highly humid climates, evaporative cooling may have little thermal comfort benefit beyond the increased ventilation and air movement it provides. Other examples Trees transpire large amounts of water through pores in their leaves called stomata, and through this process of evaporative cooling, forests interact with climate at local and global scales. 
Simple evaporative cooling devices such as evaporative cooling chambers (ECCs) and clay pot coolers, or pot-in-pot refrigerators, are simple and inexpensive ways to keep vegetables fresh without the use of electricity. Several hot and dry regions throughout the world could potentially benefit from evaporative cooling, including North Africa, the Sahel region of Africa, the Horn of Africa, southern Africa, the Middle East, arid regions of South Asia, and Australia. Benefits of evaporative cooling chambers for many rural communities in these regions include reduced post-harvest loss, less time spent traveling to the market, monetary savings, and increased availability of vegetables for consumption. Evaporative cooling is commonly used in cryogenic applications. The vapor above a reservoir of cryogenic liquid is pumped away, and the liquid continuously evaporates as long as the liquid's vapor pressure is significant. Evaporative cooling of ordinary helium forms a 1-K pot, which can cool to at least 1.2 K. Evaporative cooling of helium-3 can provide temperatures below 300 mK. These techniques can be used to make cryocoolers, or as components of lower-temperature cryostats such as dilution refrigerators. As the temperature decreases, the vapor pressure of the liquid also falls, and cooling becomes less effective. This sets a lower limit to the temperature attainable with a given liquid. Evaporative cooling is also the last cooling step in order to reach the ultra-low temperatures required for Bose–Einstein condensation (BEC). Here, so-called forced evaporative cooling is used to selectively remove high-energetic ("hot") atoms from an atom cloud until the remaining cloud is cooled below the BEC transition temperature. For a cloud of 1 million alkali atoms, this temperature is about 1μK. Although robotic spacecraft use thermal radiation almost exclusively, many crewed spacecraft have short missions that permit open-cycle evaporative cooling. 
Examples include the Space Shuttle, the Apollo command and service module (CSM), lunar module and portable life support system. The Apollo CSM and the Space Shuttle also had radiators, and the Shuttle could evaporate ammonia as well as water. The Apollo spacecraft used sublimators, compact and largely passive devices that dump waste heat in water vapor (steam) that is vented to space. When liquid water is exposed to vacuum it boils vigorously, carrying away enough heat to freeze the remainder to ice that covers the sublimator and automatically regulates the feedwater flow depending on the heat load. The water expended is often available in surplus from the fuel cells used by many crewed spacecraft to produce electricity. Designs Most designs take advantage of the fact that water has one of the highest known enthalpy of vaporization (latent heat of vaporization) values of any common substance. Because of this, evaporative coolers use only a fraction of the energy of vapor-compression or absorption air conditioning systems. Except in very dry climates, the single-stage (direct) cooler can increase relative humidity (RH) to a level that makes occupants uncomfortable. Indirect and two-stage evaporative coolers keep the RH lower. Direct evaporative cooling Direct evaporative cooling (open circuit) is used to lower the temperature and increase the humidity of air by using latent heat of evaporation, changing liquid water to water vapor. In this process, the energy in the air does not change. Warm dry air is changed to cool moist air. The heat of the outside air is used to evaporate water. The RH increases to 70 to 90% which reduces the cooling effect of human perspiration. The moist air has to be continually released to outside or else the air becomes saturated and evaporation stops. A mechanical direct evaporative cooler unit uses a fan to draw air through a wetted membrane, or pad, which provides a large surface area for the evaporation of water into the air. 
Water is sprayed at the top of the pad so it can drip down into the membrane and continually keep the membrane saturated. Any excess water that drips out from the bottom of the membrane is collected in a pan and recirculated to the top. Single-stage direct evaporative coolers are typically small in size as they only consist of the membrane, water pump, and centrifugal fan. The mineral content of the municipal water supply will cause scaling on the membrane, which will lead to clogging over the life of the membrane. Depending on this mineral content and the evaporation rate, regular cleaning and maintenance are required to ensure optimal performance. Generally, supply air from the single-stage evaporative cooler will need to be exhausted directly (one-through flow) as with direct evaporative cooling. A few design solutions have been conceived to utilize the energy in the air, like directing the exhaust air through two sheets of double glazed windows, thus reducing the solar energy absorbed through the glazing. Compared to energy required to achieve the equivalent cooling load with a compressor, single stage evaporative coolers consume less energy. Passive direct evaporative cooling can occur anywhere that the evaporatively cooled water can cool a space without the assistance of a fan. This can be achieved through the use of fountains or more architectural designs such as the evaporative downdraft cooling tower, also called a "passive cooling tower". The passive cooling tower design allows outside air to flow in through the top of a tower that is constructed within or next to the building. The outside air comes in contact with water inside the tower either through a wetted membrane or a mister. As water evaporates in the outside air, the air becomes cooler and less buoyant and creates a downward flow in the tower. At the bottom of the tower, an outlet allows the cooler air into the interior. 
Similar to mechanical evaporative coolers, towers can be an attractive low-energy solution for hot and dry climates as they only require a water pump to raise water to the top of the tower. Energy savings from using a passive direct evaporative cooling strategy depend on the climate and heat load. For arid climates with a large wet-bulb depression, cooling towers can provide enough cooling during summer design conditions to be net zero. For example, a 371 m2 (4,000 ft2) retail store in Tucson, Arizona with a sensible heat gain of 29.3 kW (100,000 Btu/h) can be cooled entirely by two passive cooling towers providing 11,890 m3/h (7,000 cfm) each. For the Zion National Park visitors' center, which uses two passive cooling towers, the cooling energy intensity was 14.5 MJ/m2 (1.28 kBtu/ft2), which was 77% less than a typical building in the western United States that uses 62.5 MJ/m2 (5.5 kBtu/ft2). A study of field performance results in Kuwait revealed that power requirements for an evaporative cooler are approximately 75% less than the power requirements for a conventional packaged unit air-conditioner. Indirect evaporative cooling Indirect evaporative cooling (closed circuit) is a cooling process that uses direct evaporative cooling in addition to some heat exchanger to transfer the cool energy to the supply air. The cooled moist air from the direct evaporative cooling process never comes in direct contact with the conditioned supply air. The moist air stream is released outside or used to cool other external devices such as solar cells, which are more efficient if kept cool. This is done to avoid excess humidity in enclosed spaces, which is not appropriate for residential systems. Maisotsenko cycle One indirect cooler design, the Maisotsenko cycle (M-Cycle), named after its inventor, Professor
Valeriy Maisotsenko, employs an iterative (multi-step) heat exchanger made of a thin recyclable membrane that can reduce the temperature of product air to below the wet-bulb temperature, and can approach the dew point. Testing by the US Department of Energy found that a hybrid M-Cycle combined with a standard compression refrigeration system significantly improved efficiency by between 150 and 400% but was only capable of doing so in the dry western half of the US, and did not recommend being used in the much more humid eastern half of the US. The evaluation found that the system water consumption of 2–3 gallons per cooling ton (12,000 BTUs) was roughly equal in efficiency to the water consumption of new high efficiency power plants. This means the higher efficiency can be utilized to reduce load on the grid without requiring any additional water, and may actually reduce water usage if the source of the power does not have a high efficiency cooling system. An M-Cycle based system built by Coolerado is currently being used to cool the Data Center for NASA's National Snow and Ice Data Center (NSIDC). The facility is air cooled below 70 degrees Fahrenheit and uses the Coolerado system above that temperature. This is possible because the air handler for the system uses fresh outside air, which allows it to automatically use cool outside ambient air when conditions allow. This avoids running the refrigeration system when unnecessary. It is powered by a solar panel array which also serves as secondary power in case of main power loss. The system has very high efficiency but, like other evaporative cooling systems, is constrained by the ambient humidity levels, which has limited its adoption for residential use. It may be used as supplementary cooling during times of extreme heat without placing significant additional burden on electrical infrastructure. 
If a location has excess water supplies or excess desalination capacity it can be used to reduce excessive electrical demand by utilizing water in affordable M-Cycle units. Due to high costs of conventional air conditioning units and extreme limitations of many electrical utility systems, M-Cycle units may be the only appropriate cooling systems suitable for impoverished areas during times of extremely high temperature and high electrical demand. In developed areas, they may serve as supplemental backup systems in case of electrical overload, and can be used to boost efficiency of existing conventional systems. The M-Cycle is not limited to cooling systems and can be applied to various technologies from Stirling engines to atmospheric water generators. For cooling applications it can be used in both cross flow and counterflow configurations. Counterflow was found to obtain lower temperatures more suitable for home cooling, but cross flow was found to have a higher coefficient of performance (COP), and is therefore better for large industrial installations. Unlike traditional refrigeration techniques, the COP of small systems remains high, as they do not require lift pumps or other equipment required for cooling towers. A 1.5 ton (5.3 kW) cooling system requires just 200 watts for operation of the fan, giving a COP of 26.4 and an EER rating of 90. This does not take into account the energy required to purify or deliver the water, and is strictly the power required to run the device once water is supplied. Though desalination of water also presents a cost, the latent heat of vaporization of water is nearly 100 times higher than the energy required to purify the water itself. Furthermore, the device has a maximum efficiency of 55%, so its actual COP is much lower than this calculated value. However, regardless of these losses, the effective COP is still significantly higher than a conventional cooling system, even if water must first be purified by desalination.
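The COP and EER figures quoted above follow from simple ratios of cooling output to electrical input. A sketch reproducing them from the stated numbers (1.5 tons of cooling, 200 W fan power); the unit constants are standard conversions.

```python
BTU_PER_WATT_HOUR = 3.412   # 1 W of power = 3.412 Btu/h
TON_IN_BTU_H = 12000.0      # 1 ton of cooling = 12,000 Btu/h

def cop(cooling_watts: float, input_watts: float) -> float:
    """Coefficient of performance: thermal watts out per electrical watt in."""
    return cooling_watts / input_watts

def eer(cooling_btu_h: float, input_watts: float) -> float:
    """Energy efficiency ratio: Btu/h of cooling per electrical watt."""
    return cooling_btu_h / input_watts

capacity_btu_h = 1.5 * TON_IN_BTU_H                   # 18,000 Btu/h
capacity_watts = capacity_btu_h / BTU_PER_WATT_HOUR   # about 5,275 W
print(round(eer(capacity_btu_h, 200.0)))              # 90
print(round(cop(capacity_watts, 200.0), 1))           # 26.4
```

The two metrics differ only by the Btu-to-watt conversion factor (EER = COP × 3.412), which is why a COP of 26.4 and an EER of 90 describe the same machine.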
In areas where water is not available in any form, it can be used with a desiccant to recover water using available heat sources, such as solar thermal energy. Theoretical designs In the newer but yet-to-be-commercialized "cold-SNAP" design from Harvard's Wyss Institute, a 3D-printed ceramic conducts heat but is half-coated with a hydrophobic material that serves as a moisture barrier. While no moisture is added to the incoming air, its relative humidity (RH) does rise a little, since cooling air raises its relative humidity even at a constant moisture content. Still, the relatively dry air resulting from indirect evaporative cooling allows inhabitants' perspiration to evaporate more easily, increasing the relative effectiveness of this technique. Indirect cooling is an effective strategy for hot-humid climates that cannot afford to increase the moisture content of the supply air due to indoor air quality and human thermal comfort concerns. Passive indirect evaporative cooling strategies are rare because this strategy requires an architectural element to act as a heat exchanger (for example, a roof). This element can be sprayed with water and cooled through the evaporation of the water on it. These strategies are rare due to the high use of water, which also introduces the risk of water intrusion and of compromising the building structure. Hybrid designs Two-stage evaporative cooling, or indirect-direct In the first stage of a two-stage cooler, warm air is pre-cooled indirectly without adding humidity (by passing inside a heat exchanger that is cooled by evaporation on the outside). In the direct stage, the pre-cooled air passes through a water-soaked pad and picks up humidity as it cools. Since the air supply is pre-cooled in the first stage, less humidity is transferred in the direct stage to reach the desired cooling temperatures. 
The result, according to manufacturers, is cooler air with a relative humidity between 50 and 70%, depending on the climate, compared to a traditional system that produces about 70–80% relative humidity in the conditioned air. Evaporative + conventional backup In another hybrid design, direct or indirect cooling has been combined with vapor-compression or absorption air conditioning to increase the overall efficiency and/or to reduce the temperature below the wet-bulb limit. Evaporative + passive daytime radiative + thermal insulation Evaporative cooling can be combined with passive daytime radiative cooling and thermal insulation to enhance cooling power with zero energy use, albeit with an occasional water "re-charge" depending on the climatic zone of the installation. The system, developed by Lu et al., "consists of a solar reflector, a water-rich and IR-emitting evaporative layer, and a vapor-permeable, IR-transparent, and solar-reflecting insulation layer," with the top layer enabling "heat removal through both evaporation and radiation while resisting environmental heating." The system demonstrated 300% higher ambient cooling power than stand-alone passive daytime radiative cooling and could extend the shelf life of food by 40% in cool humid climates and 200% in dry climates without refrigeration. Membrane dehumidification and evaporative cooling Conventional evaporative cooling only works with dry air, e.g. when the humidity ratio is below about 0.02 kg of water per kg of air, and it requires substantial water inputs. To remove these limitations, dewpoint evaporative cooling can be hybridized with membrane dehumidification, using membranes that pass water vapor but block air. Water vapor passing through these membranes can be concentrated with a compressor, so that it condenses at warmer temperatures. The first configuration with this approach reused the dehumidification water to provide further evaporative cooling. 
Such an approach can fully provide its own water for evaporative cooling, outperforms a baseline desiccant wheel system under all conditions, and outperforms vapor compression in dry conditions. It can also allow for cooling at higher humidity without the use of refrigerants, many of which have substantial greenhouse gas potential. Materials Traditionally, evaporative cooler pads consist of excelsior (aspen wood fiber) inside a containment net, but more modern materials, such as some plastics and melamine paper, are entering use as cooler-pad media. Modern rigid media, commonly 8 or 12 inches thick, add more moisture, and thus cool air more effectively, than the typically much thinner aspen media. Another material which is sometimes used is corrugated cardboard. Design considerations Water use In arid and semi-arid climates, the scarcity of water makes water consumption a concern in cooling system design. From the installed water meters, 420,938 L (111,200 gal) of water were consumed during 2002 for the two passive cooling towers at the Zion National Park visitors' center. However, such concerns are addressed by experts who note that electricity generation usually requires a large amount of water, and evaporative coolers use far less electricity, and therefore comparable amounts of water overall, at lower total cost, than chillers. Shading Allowing direct solar exposure to any surface which can transfer the extra heat to any part of the air flow through the unit will raise the temperature of the air. 
If the heat is transferred to the air before it flows through the pads, or if sunlight warms the pads themselves, evaporation will increase, but the additional energy driving that evaporation comes not from the energy contained in the ambient air but from the sun. The result is not only higher temperatures but higher humidity as well, just as raising the inlet air temperature by any means, or heating the water prior to distribution over the pad by any means, would produce. In addition, sunlight may degrade some media and other components of the cooler. Therefore, shading is advisable in all circumstances, though the vertical aspect of the pads, together with insulation between the exterior and interior horizontal (upward-facing) surfaces to minimise heat transfer, will suffice. Mechanical systems Apart from fans used in mechanical evaporative cooling, pumps are the only other piece of mechanical equipment required for the evaporative cooling process in both mechanical and passive applications. Pumps can be used either for recirculating the water to the wet media pad or for providing water at very high pressure to a mister system for a passive cooling tower. Pump specifications will vary depending on evaporation rates and media pad area. The Zion National Park visitors' center uses a 250 W (1/3 HP) pump. Exhaust Exhaust ducts and/or open windows must be used at all times to allow air to continually escape the air-conditioned area. Otherwise, pressure develops and the fan or blower in the system is unable to push much air through the media and into the air-conditioned area. The evaporative system cannot function without exhausting the continuous supply of air from the air-conditioned area to the outside. By optimizing the placement of the cooled-air inlet, along with the layout of the house passages, related doors, and room windows, the system can be used most effectively to direct the cooled air to the required areas. 
A well-designed layout can effectively scavenge and expel the hot air from desired areas without the need for an above-ceiling ducted venting system. Continuous airflow is essential, so the exhaust windows or vents must not restrict the volume and passage of air being introduced by the evaporative cooling machine. One must also be mindful of the outside wind direction, as, for example, a strong hot southerly wind will slow or restrict the exhausted air from a south-facing window. It is always best to have the downwind windows open, while the upwind windows are closed. Different types of installations Typical installations Typically, residential and industrial evaporative coolers use direct evaporation, and can be described as an enclosed metal or plastic box with vented sides. Air is moved by a centrifugal fan or blower (usually driven by an electric motor with pulleys known as "sheaves" in HVAC terminology, or a direct-driven axial fan), and a water pump is used to wet the evaporative cooling pads. The cooling units can be mounted on the roof (down draft, or downflow) or exterior walls or windows (side draft, or horizontal flow) of buildings. To cool, the fan draws ambient air through vents on the unit's sides and through the damp pads. Heat in the air evaporates water from the pads which are constantly re-dampened to continue the cooling process. Then cooled, moist air is delivered into the building via a vent in the roof or wall. Because the cooling air originates outside the building, one or more large vents must exist to allow air to move from inside to outside. Air should only be allowed to pass once through the system, or the cooling effect will decrease. This is due to the air reaching the saturation point. Often 15 or so air changes per hour (ACHs) occur in spaces served by evaporative coolers, a relatively high rate of air exchange. 
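The air-exchange rate mentioned above relates the cooler's airflow to the conditioned volume. A minimal sketch of that relationship follows; the airflow, floor area, and ceiling height are illustrative assumptions, not figures from the text.

```python
# Air changes per hour (ACH) = volumetric airflow / conditioned volume.
# Illustrative numbers: a 4,000 CFM cooler serving a 160 m^2 home.

CFM_TO_M3H = 1.699           # 1 cubic foot per minute ≈ 1.699 m^3/h

airflow_cfm = 4000.0         # assumed cooler airflow
floor_area_m2 = 160.0        # assumed floor area
ceiling_m = 2.5              # assumed ceiling height

volume_m3 = floor_area_m2 * ceiling_m          # 400 m^3 of conditioned space
ach = airflow_cfm * CFM_TO_M3H / volume_m3     # ≈ 17 air changes per hour

print(f"ACH ≈ {ach:.0f}")
```

Numbers in this range are consistent with the "15 or so" air changes per hour the text mentions for spaces served by evaporative coolers.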
Evaporative (wet) cooling towers Cooling towers are structures for cooling water or other heat transfer media to near-ambient wet-bulb temperature. Wet cooling towers operate on the evaporative cooling principle, but are optimized to cool the water rather than the air. Cooling towers can often be found on large buildings or on industrial sites. They transfer heat to the environment from chillers, industrial processes, or the Rankine power cycle, for example. Misting systems Misting systems work by forcing water via a high pressure pump and tubing through a brass and stainless steel mist nozzle that has an orifice of about 5 micrometres, thereby producing a micro-fine mist. The water droplets that create the mist are so small that they instantly flash-evaporate. Flash evaporation can reduce the surrounding air temperature by as much as 35 °F (20 °C) in just seconds. For patio systems, it is ideal to mount the mist line approximately 8 to 10 feet (2.4 to 3.0 m) above the ground for optimum cooling. Misting is used for applications such as flowerbeds, pets, livestock, kennels, insect control, odor control, zoos, veterinary clinics, cooling of produce, and greenhouses. Misting fans A misting fan is similar to a humidifier. A fan blows a fine mist of water into the air. If the air is not too humid, the water evaporates, absorbing heat from the air, allowing the misting fan to also work as an air cooler. A misting fan may be used outdoors, especially in a dry climate. It may also be used indoors. Small portable battery-powered misting fans, consisting of an electric fan and a hand-operated water spray pump, are sold as novelty items. Their effectiveness in everyday use is unclear. Performance Understanding evaporative cooling performance requires an understanding of psychrometrics. Evaporative cooling performance is variable due to changes in external temperature and humidity level. 
A residential cooler should be able to decrease the temperature of air to within 5–7 °F (3–4 °C) of the wet-bulb temperature. It is simple to predict cooler performance from standard weather report information. Because weather reports usually contain the dewpoint and relative humidity, but not the wet-bulb temperature, a psychrometric chart or a simple computer program must be used to compute the wet-bulb temperature. Once the wet-bulb temperature and the dry-bulb temperature are identified, the cooling performance or leaving air temperature of the cooler may be determined. For direct evaporative cooling, the direct saturation efficiency, ε, measures the extent to which the temperature of the air leaving the direct evaporative cooler approaches the wet-bulb temperature of the entering air. The direct saturation efficiency can be determined as follows: ε = 100 × (t_db,in − t_db,out) / (t_db,in − t_wb,in) Where: ε = direct evaporative cooling saturation efficiency (%) t_db,in = entering air dry-bulb temperature (°C) t_db,out = leaving air dry-bulb temperature (°C) t_wb,in = entering air wet-bulb temperature (°C) Evaporative media efficiency usually runs between 80% and 90%. The most efficient systems can cool the air 95% of the way down to the wet-bulb temperature; the least efficient systems achieve only 50%. The evaporation efficiency drops very little over time. Typical aspen pads used in residential evaporative coolers offer around 85% efficiency, while CELdek-type evaporative media offer efficiencies of >90%, depending on air velocity. The CELdek media is more often used in large commercial and industrial installations. As an example, in Las Vegas, with a typical summer design day of 42 °C dry-bulb and 19 °C wet-bulb temperature, or about 8% relative humidity, the leaving air temperature of a residential cooler with 85% efficiency would be: t_db,out = 42 °C − [(42 °C − 19 °C) × 85%] ≈ 22.5 °C However, either of two methods can be used to estimate performance: Use a psychrometric chart to calculate the wet-bulb temperature, and then add 5–7 °F as described above. 
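The Las Vegas worked example above can be sketched in a couple of helper functions. This is a minimal illustration: `leaving_temp` and `wet_bulb_estimate` are hypothetical names, and the wet-bulb helper uses the common one-third-of-the-spread rule of thumb, which is an approximation rather than a psychrometric solution.

```python
# Leaving air temperature of a direct evaporative cooler:
#   t_out = t_db - efficiency * (t_db - t_wb)
# where t_db and t_wb are the entering dry-bulb and wet-bulb temperatures.

def leaving_temp(t_db, t_wb, efficiency=0.85):
    """Leaving dry-bulb temperature (deg C) at a given saturation efficiency."""
    return t_db - efficiency * (t_db - t_wb)

def wet_bulb_estimate(t_db, t_dewpoint):
    """Rule-of-thumb wet-bulb estimate: ambient minus one third of the
    dry-bulb/dew-point spread (an approximation, not a psychrometric solve)."""
    return t_db - (t_db - t_dewpoint) / 3.0

# Las Vegas design day: 42 C dry bulb, 19 C wet bulb, 85% efficient media.
print(leaving_temp(42.0, 19.0))   # ≈ 22.5 C
```

Swapping in a proper psychrometric wet-bulb calculation would tighten the estimate, but the structure of the calculation stays the same.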
Use a rule of thumb which estimates that the wet bulb temperature is approximately equal to the ambient temperature, minus one third of the difference between the ambient temperature and the dew point. As before, add 5–7 °F as described above. Some examples clarify this relationship: At and 15% relative humidity, air may be cooled to nearly . The dew point for these conditions is . At 32 °C and 50% relative humidity, air may be cooled to about . The dew point for these conditions is . At and 15% relative humidity, air may be cooled to nearly . The dew point for these conditions is . (Cooling examples extracted from the June 25, 2000 University of Idaho publication, "Homewise"). Because evaporative coolers perform best in dry conditions, they are widely used and most effective in arid, desert regions such as the southwestern USA, northern Mexico, and Rajasthan. The same equation indicates why evaporative coolers are of limited use in highly humid environments: for example, a hot August day in Tokyo may be with 85% relative humidity, 1,005 hPa pressure. This gives a dew point of and a wet-bulb temperature of . According to the formula above, at 85% efficiency air may be cooled only down to which makes it quite impractical. Comparison to other types of air conditioning Comparison of evaporative cooling to refrigeration-based air conditioning: Advantages Less expensive to install and operate Estimated cost for professional installation is about half or less that of central refrigerated air conditioning. Estimated cost of operation is 1/8 that of refrigerated air conditioning. No power spike when turned on due to lack of a compressor. Power consumption is limited to the fan and water pump, which have a relatively low current draw at start-up. The working fluid is water. No special refrigerants, such as ammonia or CFCs, are used that could be toxic, expensive to replace, contribute to ozone depletion and/or be subject to stringent licensing and environmental regulations. 
Newly launched air coolers can be operated through remote control. Ease of installation and maintenance Equipment can be installed by mechanically-inclined users at drastically lower cost than refrigeration equipment, which requires specialized skills and professional installation. The only two mechanical parts in most basic evaporative coolers are the fan motor and the water pump, both of which can be repaired or replaced at low cost and often by a mechanically inclined user, eliminating costly service calls to HVAC contractors. Ventilation air The high volumetric flow rate of air traveling through the building reduces the "age-of-air" in the building dramatically. Evaporative cooling increases humidity. In dry climates, this may improve comfort and decrease static electricity problems. The pad itself acts as a rather effective air filter when properly maintained; it is capable of removing a variety of contaminants in air, including urban ozone caused by pollution, even in very dry weather. Refrigeration-based cooling systems lose this ability whenever there is not enough humidity in the air to keep the evaporator wet, since washing impurities out of the air depends on a steady trickle of condensation carrying away the dissolved impurities. Disadvantages Performance Most evaporative coolers are unable to reach as low a temperature as refrigerated air conditioning systems. High dewpoint (humidity) conditions decrease the cooling capability of the evaporative cooler. No dehumidification. Traditional air conditioners remove moisture from the air, except in very dry locations where recirculation can lead to a buildup of humidity. Evaporative cooling adds moisture; in humid climates this is a drawback, since lower humidity would improve thermal comfort at higher temperatures. 
Comfort The air supplied by the evaporative cooler is generally 80–90% relative humidity and can cause interior humidity levels as high as 65%; very humid air reduces the evaporation rate of moisture from the skin, nose, lungs, and eyes. High humidity in air accelerates corrosion, particularly in the presence of dust. This can considerably reduce the life of electronics and other equipment. High humidity in air may cause condensation of water. This can be a problem for some situations (e.g., electrical equipment, computers, paper, books, old wood). Odors and other outdoor contaminants may be blown into the building unless sufficient filtering is in place. Water use Evaporative coolers require a constant supply of water. Water high in mineral content (hard water) will leave mineral deposits on the pads and interior of the cooler. Depending on the type and concentration of minerals, the pads may pose safety hazards during their replacement and disposal. Bleed-off and refill (purge pump) systems can reduce but not eliminate this problem. Installation of an inline water filter (refrigerator drinking water/ice maker type) will drastically reduce the mineral deposits. Maintenance frequency Any mechanical components that can rust or corrode need regular cleaning or replacement due to the environment of high moisture and potentially heavy mineral deposits in areas with hard water. Evaporative media must be replaced on a regular basis to maintain cooling performance. Wood wool pads are inexpensive but require replacement every few months. Higher-efficiency rigid media is much more expensive but will last for a number of years, depending on the water hardness; in areas with very hard water, rigid media may last only two years before mineral scale build-up unacceptably degrades performance. 
In areas with cold winters, evaporative coolers must be drained and winterized to protect the water line and cooler from freeze damage, and then de-winterized prior to the cooling season. Health hazards An evaporative cooler is a common breeding site for mosquitoes. Numerous authorities consider an improperly maintained cooler to be a threat to public health. Mold and bacteria may be dispersed into interior air from improperly maintained or defective systems, causing sick building syndrome and adverse effects for asthma and allergy sufferers. This can also cause a foul odor. The wood wool in dry cooler pads can catch fire from even small sparks.
Technology
Heating and cooling
null
651752
https://en.wikipedia.org/wiki/Scaling%20%28geometry%29
Scaling (geometry)
In affine geometry, uniform scaling (or isotropic scaling) is a linear transformation that enlarges (increases) or shrinks (diminishes) objects by a scale factor that is the same in all directions (isotropically). The result of uniform scaling is similar (in the geometric sense) to the original. A scale factor of 1 is normally allowed, so that congruent shapes are also classed as similar. Uniform scaling happens, for example, when enlarging or reducing a photograph, or when creating a scale model of a building, car, airplane, etc. More general is scaling with a separate scale factor for each axis direction. Non-uniform scaling (anisotropic scaling) is obtained when at least one of the scaling factors is different from the others; a special case is directional scaling or stretching (in one direction). Non-uniform scaling changes the shape of the object; e.g. a square may change into a rectangle, or into a parallelogram if the sides of the square are not parallel to the scaling axes (the angles between lines parallel to the axes are preserved, but not all angles). It occurs, for example, when a faraway billboard is viewed from an oblique angle, or when the shadow of a flat object falls on a surface that is not parallel to it. When the scale factor is larger than 1, (uniform or non-uniform) scaling is sometimes also called dilation or enlargement. When the scale factor is a positive number smaller than 1, scaling is sometimes also called contraction or reduction. In the most general sense, a scaling includes the case in which the directions of scaling are not perpendicular. It also includes the case in which one or more scale factors are equal to zero (projection), and the case of one or more negative scale factors (a directional scaling by −1 is equivalent to a reflection). Scaling about the origin is a linear transformation, and a special case of homothetic transformation (scaling about a point); a homothetic transformation whose center is not the origin is affine but not linear, which is the sense in which most homothetic transformations are non-linear transformations. 
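The distinction between uniform and non-uniform scaling described above can be sketched in a few lines of Python (an illustrative example applied to the corners of a unit square; the `scale` helper is hypothetical, not from the article):

```python
# Uniform scaling multiplies every coordinate by the same factor;
# non-uniform scaling uses a different factor per axis.

def scale(points, sx, sy):
    """Scale 2D points by factor sx along x and sy along y."""
    return [(sx * x, sy * y) for (x, y) in points]

square = [(0, 0), (1, 0), (1, 1), (0, 1)]

print(scale(square, 2, 2))   # uniform: a similar, larger square
print(scale(square, 2, 1))   # non-uniform: the square becomes a rectangle
```

With equal factors the result is similar to the original; with unequal factors the shape changes, as in the square-to-rectangle example in the text.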
Uniform scaling A scale factor is usually a number which scales, or multiplies, some quantity. In the equation y = Cx, C is the scale factor for x. C is also the coefficient of x, and may be called the constant of proportionality of y to x. For example, doubling distances corresponds to a scale factor of two for distance, while cutting a cake in half results in pieces with a scale factor for volume of one half. The basic equation for it is: scale factor = size of image / size of preimage. In the field of measurements, the scale factor of an instrument is sometimes referred to as sensitivity. The ratio of any two corresponding lengths in two similar geometric figures is also called a scale. Matrix representation A scaling can be represented by a scaling matrix. To scale an object by a vector v = (vx, vy, vz), each point p = (px, py, pz) would need to be multiplied with the scaling matrix S_v = diag(vx, vy, vz), the diagonal matrix with vx, vy, vz on its diagonal. The multiplication gives the expected result: S_v p = (vx px, vy py, vz pz). Such a scaling changes the diameter of an object by a factor between the scale factors, the area by a factor between the smallest and the largest product of two scale factors, and the volume by the product of all three. The scaling is uniform if and only if the scaling factors are equal (vx = vy = vz). If all except one of the scale factors are equal to 1, we have directional scaling. In the case where vx = vy = vz = k, scaling increases the area of any surface by a factor of k^2 and the volume of any solid object by a factor of k^3. Scaling in arbitrary dimensions In n-dimensional space, uniform scaling by a factor v is accomplished by scalar multiplication with v, that is, multiplying each coordinate of each point by v. As a special case of linear transformation, it can be achieved also by multiplying each point (viewed as a column vector) with a diagonal matrix whose entries on the diagonal are all equal to v, namely vI. Non-uniform scaling is accomplished by multiplication with any symmetric matrix. 
The eigenvalues of the matrix are the scale factors, and the corresponding eigenvectors are the axes along which each scale factor applies. A special case is a diagonal matrix, with arbitrary numbers v1, ..., vn along the diagonal: the axes of scaling are then the coordinate axes, and the transformation scales along each axis i by the factor vi. In uniform scaling with a non-zero scale factor, all non-zero vectors retain their direction (as seen from the origin), or all have the direction reversed, depending on the sign of the scaling factor. In non-uniform scaling only the vectors that belong to an eigenspace will retain their direction. A vector that is the sum of two or more non-zero vectors belonging to different eigenspaces will be tilted towards the eigenspace with the largest eigenvalue. Using homogeneous coordinates In projective geometry, often used in computer graphics, points are represented using homogeneous coordinates. To scale an object by a vector v = (vx, vy, vz), each homogeneous coordinate vector p = (px, py, pz, 1) would need to be multiplied with the projective transformation matrix S_v = diag(vx, vy, vz, 1). The multiplication gives the expected result: S_v p = (vx px, vy py, vz pz, 1). Since the last component of a homogeneous coordinate can be viewed as the denominator of the other three components, a uniform scaling by a common factor s can be accomplished by using the scaling matrix S = diag(1, 1, 1, 1/s). For each vector p = (px, py, pz, 1) we would have S p = (px, py, pz, 1/s), which would be equivalent to (s px, s py, s pz, 1). Function dilation and contraction Given a point P(x, y), the dilation associates with it the point P′(x′, y′) through the equations x′ = mx and y′ = ny, for positive real numbers m and n. Therefore, given a function y = f(x), the equation of the dilated function is y = n f(x/m). Particular cases If n = 1, the transformation is horizontal; when m > 1, it is a dilation, when m < 1, it is a contraction. If m = 1, the transformation is vertical; when n > 1 it is a dilation, when n < 1, it is a contraction. If m = 1/n or n = 1/m, the transformation is a squeeze mapping.
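The scaling matrices described in this section can be sketched in pure Python (a minimal illustration; `diag` and `matvec` are hypothetical helper names, and the homogeneous example follows the diag(1, 1, 1, 1/s) construction for uniform scaling by s):

```python
# Axis-aligned scaling as matrix multiplication, without external libraries.

def matvec(M, p):
    """Multiply matrix M (list of rows) by column vector p."""
    return [sum(M[i][j] * p[j] for j in range(len(p))) for i in range(len(M))]

def diag(*entries):
    """Diagonal matrix with the given entries on the diagonal."""
    n = len(entries)
    return [[entries[i] if i == j else 0.0 for j in range(n)] for i in range(n)]

# Scaling by v = (2, 3, 0.5): S_v p = (vx*px, vy*py, vz*pz).
S = diag(2.0, 3.0, 0.5)
print(matvec(S, [1.0, 1.0, 1.0]))          # [2.0, 3.0, 0.5]

# Homogeneous coordinates: uniform scaling by s via diag(1, 1, 1, 1/s);
# the last component acts as a common denominator for the other three.
s = 2.0
H = diag(1.0, 1.0, 1.0, 1.0 / s)
q = matvec(H, [1.0, 2.0, 3.0, 1.0])        # [1.0, 2.0, 3.0, 0.5]
q = [c / q[-1] for c in q]                 # normalize the last component to 1
print(q)                                   # [2.0, 4.0, 6.0, 1.0]
```

Dividing through by the last component recovers the Cartesian point (2, 4, 6), i.e. the original point scaled uniformly by s = 2.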
Mathematics
Geometry: General
null
651916
https://en.wikipedia.org/wiki/Sauropoda
Sauropoda
Sauropoda, whose members are known as sauropods (from sauro- + -pod, 'lizard-footed'), is a clade of saurischian ('lizard-hipped') dinosaurs. Sauropods had very long necks, long tails, small heads (relative to the rest of their body), and four thick, pillar-like legs. They are notable for the enormous sizes attained by some species, and the group includes the largest animals to have ever lived on land. Well-known genera include Apatosaurus, Argentinosaurus, Alamosaurus, Brachiosaurus, Camarasaurus, Diplodocus, and Mamenchisaurus. The oldest known unequivocal sauropod dinosaurs are known from the Early Jurassic. Isanosaurus and Antetonitrus were originally described as Triassic sauropods, but their age, and in the case of Antetonitrus also its sauropod status, were subsequently questioned. Sauropod-like sauropodomorph tracks from the Fleming Fjord Formation (Greenland) might, however, indicate the occurrence of the group in the Late Triassic. By the Late Jurassic (150 million years ago), sauropods had become widespread (especially the diplodocids and brachiosaurids). By the Late Cretaceous, one group of sauropods, the titanosaurs, had replaced all others and had a near-global distribution. However, as with all other non-avian dinosaurs alive at the time, the titanosaurs died out in the Cretaceous–Paleogene extinction event. Fossilised remains of sauropods have been found on every continent, including Antarctica. The name Sauropoda was coined by Othniel Charles Marsh in 1878, and is derived from Ancient Greek, meaning "lizard foot". Sauropods are one of the most recognizable groups of dinosaurs, and have become a fixture in popular culture due to their impressive size. Complete sauropod fossil finds are extremely rare. Many species, especially the largest, are known only from isolated and disarticulated bones. Many near-complete specimens lack heads, tail tips and limbs. 
Description Sauropods were herbivorous (plant-eating), usually quite long-necked quadrupeds (four-legged), often with spatulate (spatula-shaped: broad at the tip, narrow at the neck) teeth. They had tiny heads, massive bodies, and most had long tails. Their hind legs were thick, straight, and powerful, ending in club-like feet with five toes, though only the inner three (or in some cases four) bore claws. Their forelimbs were rather more slender and typically ended in pillar-like hands built for supporting weight; often only the thumb bore a claw. Many illustrations of sauropods in the flesh miss these facts, inaccurately depicting sauropods with hooves capping the claw-less digits of the feet, or more than three claws or hooves on the hands. The proximal caudal vertebrae are extremely diagnostic for sauropods. Size The sauropods' most defining characteristic was their size. Even the dwarf sauropods (perhaps 5 to 6 metres, or 20 feet long) were counted among the largest animals in their ecosystem. Their only real competitors in terms of size are the rorquals, such as the blue whale. But, unlike whales, sauropods were primarily terrestrial animals. Their body structure did not vary as much as other dinosaurs, perhaps due to size constraints, but they displayed ample variety. Some, like the diplodocids, possessed tremendously long tails, which they may have been able to crack like a whip as a signal or to deter or injure predators, or to make sonic booms. Supersaurus, at long, was the longest sauropod known from reasonably complete remains, but others, like the old record holder, Diplodocus, were also extremely long. The holotype (and now lost) vertebra of Amphicoelias fragillimus (now Maraapunisaurus) may have come from an animal long; its vertebral column would have been substantially longer than that of the blue whale. However, research published in 2015 speculated that the size estimates of A. fragillimus may have been highly exaggerated. 
The longest dinosaur known from reasonably complete fossil material is probably Argentinosaurus huinculensis, with length estimates of to according to the most recent research. However, the giant Barosaurus specimen BYU 9024 might have been even larger, reaching lengths of 45–48 meters (148–157 ft). The longest terrestrial animal alive today, the African elephant, can only reach lengths of . Others, like the brachiosaurids, were extremely tall, with high shoulders and extremely long necks. The tallest sauropod was the giant Barosaurus specimen at tall. By comparison, the giraffe, the tallest of all living land animals, is only 4.8 to 5.6 metres (15.74 to 18.3 ft) tall. The best evidence indicates that the most massive were Argentinosaurus (65–80 metric tons), Mamenchisaurus sinocanadorum (60–80 metric tons), the giant Barosaurus specimen (60–80+ metric tons) and Patagotitan with Puertasaurus (50–55 metric tons). Meanwhile, 'mega-sauropods' such as Bruhathkayosaurus have long been scrutinized due to controversial debates over their validity, but photos that resurfaced in 2022 have lent the taxon new legitimacy, allowing for updated estimates that range between 110–170 tons, rivaling the blue whale in size. The weight of Amphicoelias fragillimus was estimated at 122.4 metric tons, with lengths of up to nearly 60 meters, but 2015 research argued that these estimates were based on diplodocid proportions rather than on the rebbachisaurid identity now assigned to the taxon, suggesting a much shorter length of 35–40 meters and a mass between 80–120 tons. Additional finds indicate a number of species likely reached or exceeded weights of 40 tons. The largest land animal alive today, the bush elephant, weighs no more than . Among the smallest sauropods were the primitive Ohmdenosaurus (4 m, or 13 ft long), the dwarf titanosaur Magyarosaurus (6 m or 20 ft long), and the dwarf brachiosaurid Europasaurus, which was 6.2 meters long as a fully-grown adult. 
Its small stature was probably the result of insular dwarfism occurring in a population of sauropods isolated on an island of the late Jurassic in what is now the Langenberg area of northern Germany. The diplodocoid sauropod Brachytrachelopan was the shortest member of its group because of its unusually short neck. Unlike other sauropods, whose necks could grow to up to four times the length of their backs, the neck of Brachytrachelopan was shorter than its backbone. Fossils from perhaps the largest dinosaur ever found were discovered in 2012 in the Neuquén Province of northwest Patagonia, Argentina. It is believed that they are from a titanosaur, a group which included some of the largest sauropods. On or shortly before 29 March 2017, a sauropod footprint about 5.6 feet (1.7 meters) long was found at Walmadany in the Kimberley Region of Western Australia. The report said that it was the largest yet known. In 2020, Molina-Perez and Larramendi estimated the size of the animal at 31 meters (102 ft) and 72 tonnes (79.4 short tons) based on the 1.75 meter (5.7 ft) long footprint. Limbs and feet As massive quadrupeds, sauropods developed specialized "graviportal" (weight-bearing) limbs. The hind feet were broad, and retained three claws in most species. Particularly unusual compared with other animals were the highly modified front feet (manus). The front feet of sauropods were very dissimilar from those of modern large quadrupeds, such as elephants. Rather than splaying out to the sides to create a wide foot as in elephants, the manus bones of sauropods were arranged in fully vertical columns, with extremely reduced finger bones (though it is not clear if the most primitive sauropods, such as Vulcanodon and Barapasaurus, had such forefeet). The front feet were so modified in eusauropods that individual digits would not have been visible in life. 
The arrangement of the forefoot bone (metacarpal) columns in eusauropods was semi-circular, so sauropod forefoot prints are horseshoe-shaped. Print evidence shows that, unlike elephants, sauropods lacked any fleshy padding to back the front feet, making them concave. The only claw visible in most sauropods was the distinctive thumb claw (associated with digit I). Almost all sauropods had such a claw, though what purpose it served is unknown. The claw was largest (as well as tall and laterally flattened) in diplodocids, and very small in brachiosaurids, some of which seem to have lost the claw entirely based on trackway evidence. Titanosaurs may have lost the thumb claw completely (with the exception of early forms, such as Janenschia). Titanosaurs were most unusual among sauropods, as, across their history as a clade, they lost not just the external claw but eventually the digits of the front foot altogether. Advanced titanosaurs had no digits or digit bones, and walked only on horseshoe-shaped "stumps" made up of the columnar metacarpal bones. Print evidence from Portugal shows that, in at least some sauropods (probably brachiosaurids), the bottom and sides of the forefoot column were likely covered in small, spiny scales, which left score marks in the prints. In titanosaurs, the ends of the metacarpal bones that contacted the ground were unusually broad and squared-off, and some specimens preserve the remains of soft tissue covering this area, suggesting that the front feet were rimmed with some kind of padding in these species. Matthew Bonnan has shown that sauropod long bones grew isometrically: that is, there was little to no change in shape as juvenile sauropods became gigantic adults. 
Bonnan suggested that this odd scaling pattern (most vertebrates show significant shape changes in long bones associated with increasing weight support) might be related to a stilt-walker principle (suggested by amateur scientist Jim Schmidt) in which the long legs of adult sauropods allowed them to easily cover great distances without changing their overall mechanics. Air sacs Along with other saurischian dinosaurs (such as theropods, including birds), sauropods had a system of air sacs, evidenced by the indentations and hollow cavities that these air sacs left in most of their vertebrae. Pneumatic, hollow bones are a characteristic feature of all sauropods. These air spaces reduced the overall weight of the sauropods' massive necks, and the air-sac system in general, by allowing a single-direction airflow through stiff lungs, made it possible for the sauropods to obtain enough oxygen. This adaptation would have been particularly advantageous in the relatively low-oxygen conditions of the Jurassic and Early Cretaceous. The bird-like hollowing of sauropod bones was recognized early in the study of these animals, and, in fact, at least one sauropod specimen found in the 19th century (Ornithopsis) was originally misidentified as a pterosaur (a flying reptile) because of this. Armor Some sauropods had armor. There were genera with small clubs on their tails, a prominent example being Shunosaurus, and several titanosaurs, such as Saltasaurus and Ampelosaurus, had small bony osteoderms covering portions of their bodies. Teeth A study by Michael D'Emic and his colleagues from Stony Brook University found that sauropods evolved high tooth replacement rates to keep up with their large appetites. The study suggested that Nigersaurus, for example, replaced each tooth every 14 days, Camarasaurus every 62 days, and Diplodocus once every 35 days. 
The scientists found that qualities of the tooth affected how long a new tooth took to grow: Camarasaurus's teeth took longer to grow than those of Diplodocus because they were larger. D'Emic and his team also noted that the differences between the teeth of these sauropods indicated a difference in diet: Diplodocus ate plants low to the ground, while Camarasaurus browsed leaves from top and middle branches. According to the scientists, this specialization of diets helped the different herbivorous dinosaurs to coexist. Necks Sauropod necks have been found at over in length, a full six times longer than the world-record giraffe neck. Enabling this were a number of essential physiological features. The dinosaurs' overall large body size and quadrupedal stance provided a stable base to support the neck, and the head evolved to be very small and light, losing the ability to orally process food. By reducing their heads to simple harvesting tools that got the plants into the body, the sauropods needed less power to lift their heads, and thus were able to develop necks with less dense muscle and connective tissue. This drastically reduced the overall mass of the neck, enabling further elongation. Sauropods also had a great number of adaptations in their skeletal structure. Some sauropods had as many as 19 cervical vertebrae, whereas almost all mammals are limited to only seven. Additionally, each vertebra was extremely long and contained a number of empty spaces that would have been filled only with air. An air-sac system connected to these spaces not only lightened the long necks, but effectively increased the airflow through the trachea, helping the creatures to breathe in enough air. By evolving vertebrae consisting of 60% air, the sauropods were able to minimize the amount of dense, heavy bone without sacrificing the ability to take sufficiently large breaths to fuel the entire body with oxygen. 
According to Kent Stevens, computer-modeled reconstructions of the skeletons made from the vertebrae indicate that sauropod necks were capable of sweeping out large feeding areas without needing to move their bodies, but could not be raised to a position much above the shoulders for exploring the surroundings or reaching higher. Another proposed function of the sauropods' long necks was as a radiator to shed the extreme amount of heat produced by their large body mass. Given the immense amount of work their metabolism would have been doing, it would certainly have generated a great deal of heat, and elimination of this excess heat would have been essential for survival. It has also been proposed that the long necks would have cooled the veins and arteries going to the brain, preventing overheated blood from reaching the head. It was in fact found that the increase in metabolic rate resulting from the sauropods' necks was slightly more than compensated for by the extra surface area from which heat could dissipate. Palaeobiology Ecology Dental microwear texture analysis (DMTA) performed on a titanosauriform sauropod from the Turonian-aged Tamagawa Formation suggests that the sauropod fed on plant material that was softer than insect exoskeletons or mollusc shells, with the diet likely consisting of ferns and gymnosperms. The DMTA results also suggested that sauropods likely masticated more energetically than present-day lepidosaurs do. When sauropods were first discovered, their immense size led many scientists to compare them with modern-day whales. Most studies in the 19th and early 20th centuries concluded that sauropods were too large to have supported their weight on land, and therefore that they must have been mainly aquatic. Most life restorations of sauropods in art through the first three quarters of the 20th century depicted them fully or partially immersed in water. 
This early notion was cast in doubt beginning in the 1950s, when a study by Kermack (1951) demonstrated that, if the animal were submerged in several metres of water, the pressure would be enough to fatally collapse the lungs and airway. However, this and other early studies of sauropod ecology were flawed in that they ignored a substantial body of evidence that the bodies of sauropods were heavily permeated with air sacs. In 1878, paleontologist E.D. Cope had even referred to these structures as "floats". Beginning in the 1970s, the effects of sauropod air sacs on their supposed aquatic lifestyle began to be explored. Paleontologists such as Coombs and Bakker used this, as well as evidence from sedimentology and biomechanics, to show that sauropods were primarily terrestrial animals. In 2004, D.M. Henderson noted that, due to their extensive system of air sacs, sauropods would have been buoyant and would not have been able to submerge their torsos completely below the surface of the water; in other words, they would float, and would not have been in danger of lung collapse due to water pressure when swimming. Evidence for swimming in sauropods comes from fossil trackways that have occasionally been found to preserve only the forefeet (manus) impressions. Henderson showed that such trackways can be explained by sauropods with long forelimbs (such as macronarians) floating in relatively shallow water deep enough to keep the shorter hind legs free of the bottom, and using the front limbs to punt forward. However, due to their body proportions, floating sauropods would also have been very unstable and maladapted for extended periods in the water. This mode of aquatic locomotion, combined with its instability, led Henderson to refer to sauropods in water as "tipsy punters". While sauropods could therefore not have been aquatic as historically depicted, there is evidence that they preferred wet and coastal habitats. 
Sauropod footprints are commonly found following coastlines or crossing floodplains, and sauropod fossils are often found in wet environments or intermingled with fossils of marine organisms. A good example of this is the massive Jurassic sauropod trackways found in lagoon deposits on Scotland's Isle of Skye. A study published in 2021 suggests that sauropods could not inhabit polar regions; it concluded that they were largely confined to tropical areas and had metabolisms very different from those of other dinosaurs, perhaps intermediate between mammals and reptiles. A study published by Taia Wyenberg-Henzler in 2022 suggests that the niches and distribution of North American sauropods declined from the end of the Jurassic into the latest Cretaceous, for reasons that remain undetermined. Some similarities in feeding niches between iguanodontians, hadrosauroids and sauropods have been suggested and may have resulted in some competition, but competition cannot fully explain the decline in sauropod distribution, as competitive exclusion would have produced a much more rapid decline than the fossil record shows. Moreover, it remains to be determined whether the sauropod decline in North America resulted from a change in the flora that sauropods preferred to eat, from climate, or from other factors. The same study also suggests that iguanodontians and hadrosauroids took advantage of niches recently vacated by the decline in sauropod diversity during the late Jurassic and the Cretaceous in North America. Herding and parental care Many lines of fossil evidence, from both bone beds and trackways, indicate that sauropods were gregarious animals that formed herds. However, the makeup of the herds varied between species. Some bone beds, for example a site from the Middle Jurassic of Argentina, appear to show herds made up of individuals of various age groups, mixing juveniles and adults. 
However, a number of other fossil sites and trackways indicate that many sauropod species travelled in herds segregated by age, with juveniles forming herds separate from adults. Such segregated herding strategies have been found in species such as Alamosaurus, Bellusaurus and some diplodocids. In a review of the evidence for various herd types, Myers and Fiorillo attempted to explain why sauropods appear to have often formed segregated herds. Studies of microscopic tooth wear show that juvenile sauropods had diets that differed from those of adults, so herding together would not have been as productive as herding separately, where the members of each herd could forage in a coordinated way. The vast size difference between juveniles and adults may also have played a part in the different feeding and herding strategies. Since the segregation of juveniles and adults must have taken place soon after hatching, and since sauropod hatchlings were most likely precocial, Myers and Fiorillo concluded that species with age-segregated herds would not have exhibited much parental care. On the other hand, scientists who have studied age-mixed sauropod herds suggest that these species may have cared for their young for an extended period of time before the young reached adulthood. A 2014 study suggested that the time from the laying of the egg to hatching was likely between 65 and 82 days. Exactly how segregated versus age-mixed herding varied across different groups of sauropods is unknown. Further examples of gregarious behavior will need to be discovered from more sauropod species to begin detecting possible patterns of distribution. Rearing stance Since early in the history of their study, scientists, such as Osborn, have speculated that sauropods could rear up on their hind legs, using the tail as the third 'leg' of a tripod. 
A skeletal mount depicting the diplodocid Barosaurus lentus rearing up on its hind legs at the American Museum of Natural History is one illustration of this hypothesis. In a 2005 paper, Rothschild and Molnar reasoned that if sauropods had adopted a bipedal posture at times, there would be evidence of stress fractures in the forelimb 'hands'. However, none were found after they examined a large number of sauropod skeletons. Heinrich Mallison (in 2009) was the first to study the physical potential for various sauropods to rear into a tripodal stance. Mallison found that some characters previously linked to rearing adaptations were actually unrelated (such as the wide-set hip bones of titanosaurs) or would have hindered rearing. For example, titanosaurs had an unusually flexible backbone, which would have decreased stability in a tripodal posture and would have put more strain on the muscles. Likewise, it is unlikely that brachiosaurids could rear up onto the hind legs, as their center of gravity was much farther forward than other sauropods, which would cause such a stance to be unstable. Diplodocids, on the other hand, appear to have been well adapted for rearing up into a tripodal stance. Diplodocids had a center of mass directly over the hips, giving them greater balance on two legs. Diplodocids also had the most mobile necks of sauropods, a well-muscled pelvic girdle, and tail vertebrae with a specialised shape that would allow the tail to bear weight at the point it touched the ground. Mallison concluded that diplodocids were better adapted to rearing than elephants, which do so occasionally in the wild. He also argues that stress fractures in the wild do not occur from everyday behaviour, such as feeding-related activities (contra Rothschild and Molnar). 
Head and neck posture There is little agreement over how sauropods held their heads and necks, and over the postures they could achieve in life. Whether sauropods' long necks could be used for browsing high in trees has been questioned on the basis of calculations suggesting that just pumping blood up to the head in such a posture would have consumed roughly half of the animal's energy intake. Further, to move blood to such a height (dismissing posited auxiliary hearts in the neck) would require a heart 15 times as large as that of a similar-sized whale. These arguments have been used to contend that the long neck must instead have been held more or less horizontally, presumably enabling feeding on plants over a wide area with less need to move about, and yielding a large energy saving for such a large animal. Reconstructions of the necks of Diplodocus and Apatosaurus have therefore often portrayed them in a near-horizontal, so-called "neutral, undeflected posture". However, research on living animals demonstrates that almost all extant tetrapods hold the base of their necks sharply flexed when alert, showing that any inference from bones about habitual "neutral postures" is deeply unreliable ("Museums and TV have dinosaurs' posture all wrong, claim scientists", The Guardian, 27 May 2009). Meanwhile, computer modeling of ostrich necks has raised doubts over the flexibility needed for stationary grazing. Trackways and locomotion Sauropod trackways and other fossil footprints (known as "ichnites") are known from abundant evidence present on most continents. Ichnites have helped support other biological hypotheses about sauropods, including general fore and hind foot anatomy (see Limbs and feet above). Generally, prints from the forefeet are much smaller than those from the hind feet, and often crescent-shaped. Occasionally ichnites preserve traces of the claws, and help confirm which sauropod groups lost claws or even digits on their forefeet. 
Sauropod tracks from the Villar del Arzobispo Formation of early Berriasian age in Spain support gregarious behaviour in the group. The tracks are possibly more similar to Sauropodichnus giganteus than to any other ichnogenus, although they have also been suggested to belong to a basal titanosauriform. The tracks are wide-gauge, and the assignment close to Sauropodichnus is also supported by the manus-to-pes distance, the kidney-bean-shaped morphology of the manus, and the subtriangular morphology of the pes. Whether the footprints of the herd were left by juveniles or adults cannot be determined, because the individual ages of the trackmakers have not previously been identified from trackways. Generally, sauropod trackways are divided into three categories based on the distance between opposite limbs: narrow gauge, medium gauge, and wide gauge. The gauge of the trackway can help determine how wide-set the limbs of various sauropods were and how this may have affected the way they walked. A 2004 study by Day and colleagues found a general pattern among groups of advanced sauropods, with each sauropod family being characterised by certain trackway gauges. They found that most sauropods other than titanosaurs had narrow-gauge limbs, with strong impressions of the large thumb claw on the forefeet. Medium-gauge trackways with claw impressions on the forefeet probably belong to brachiosaurids and other primitive titanosauriforms, which were evolving wider-set limbs but retained their claws. Primitive true titanosaurs also retained the forefoot claw but had evolved fully wide-gauge limbs. Advanced titanosaurs retained wide-gauge limbs, and their trackways show a wide gauge and lack any claw or digit impressions on the forefeet. Occasionally, only trackways from the forefeet are found. Falkingham et al. used computer modelling to show that this could be due to the properties of the substrate, which must be just right to preserve tracks. 
Differences in hind limb and forelimb surface area, and therefore in contact pressure with the substrate, may sometimes lead to only the forefoot trackways being preserved. Biomechanics and speed In a study published in PLoS ONE on October 30, 2013, by Bill Sellers, Rodolfo Coria, Lee Margetts et al., Argentinosaurus was digitally reconstructed to test its locomotion for the first time. Before the study, the most common ways of estimating speed were through studying bone histology and ichnology. Commonly, studies of sauropod bone histology and speed focus on the postcranial skeleton, which holds many unique features, such as an enlarged process on the ulna, a wide lobe on the ilia, an inward-slanting top third of the femur, and an extremely ovoid femur shaft. Those features are useful when attempting to explain the trackway patterns of graviportal animals. Using ichnology to calculate sauropod speed has its problems, such as only providing estimates for certain gaits because of preservation bias, and it is subject to many more accuracy problems. To estimate the gait and speed of Argentinosaurus, the study performed a musculoskeletal analysis; the only previous musculoskeletal analyses had been conducted on hominoids, terror birds, and other dinosaurs. Before they could conduct the analysis, the team had to create a digital skeleton of the animal in question, show where there would be muscle layering, locate the muscles and joints, and finally find the muscle properties before estimating the gait and speed. The results of the biomechanics study revealed that Argentinosaurus was mechanically competent at a top speed of 2 m/s (4.5 mph) given the great weight of the animal and the strain that its joints were capable of bearing. The results further revealed that much larger terrestrial vertebrates might be possible, but would require significant body remodeling and possibly sufficient behavioral change to prevent joint collapse. 
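The ichnological approach to speed mentioned above is usually based on Alexander's (1976) dynamic-similarity formula, which estimates speed from the stride length and hip height measured along a trackway. A small sketch (the trackway measurements here are illustrative values, not data from the Sellers et al. study):

```python
import math

def trackway_speed(stride_m, hip_height_m, g=9.81):
    """Alexander (1976): v = 0.25 * g^0.5 * stride^1.67 * hip_height^-1.17.
    Hip height is itself often estimated as roughly 4x footprint length."""
    return 0.25 * math.sqrt(g) * stride_m ** 1.67 * hip_height_m ** -1.17

# Illustrative values for a large sauropod: 3 m stride, 3 m hip height.
v = trackway_speed(3.0, 3.0)
print(f"{v:.2f} m/s")  # a slow walk, consistent with ~2 m/s top-speed estimates
```

Because the formula assumes dynamic similarity across gaits, it is most reliable for walking trackways; this is one reason ichnological estimates only cover the gaits that happen to be preserved.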
Body size Sauropods were gigantic descendants of surprisingly small ancestors. Basal dinosauriforms, such as Pseudolagosuchus and Marasuchus from the Middle Triassic of Argentina, weighed approximately or less. These evolved into saurischians, which saw a rapid increase in body size, although more primitive members like Eoraptor, Panphagia, Pantydraco, Saturnalia and Guaibasaurus still retained a moderate size, possibly under . Even among these small, primitive forms, there is a notable size increase in sauropodomorphs, although scanty remains from this period make interpretation conjectural. There is one definite example of a small derived sauropodomorph: Anchisaurus, under , even though it is closer to the sauropods than Plateosaurus and Riojasaurus, which were upwards of in weight. Evolving from sauropodomorphs, the sauropods were huge. Their giant size probably resulted from an increased growth rate made possible by tachymetabolic endothermy, a trait that evolved in sauropodomorphs. Once the sauropod lineage had branched off, sauropods continued steadily to grow larger, with smaller forms, like the Early Jurassic Barapasaurus and Kotasaurus, evolving into even larger ones like the Middle Jurassic Mamenchisaurus and Patagosaurus. Responding to the growth of sauropods, their theropod predators grew as well, as shown by an Allosaurus-sized coelophysoid from Germany. Size in Neosauropoda Neosauropoda is quite plausibly the clade of dinosaurs with the largest body sizes ever to have existed. The few exceptions of smaller size are hypothesized to have been caused by island dwarfism or other ecological pressures, although there is a trend in some Titanosauria towards a smaller size. The titanosaurs, however, were some of the largest sauropods ever. Other than titanosaurs, diplodocoids also reached truly gigantic sizes. Meanwhile, a clade of diplodocoids called Dicraeosauridae is identified by a small to medium body size. 
No sauropods were very small, however, for even "dwarf" sauropods were larger than , a size reached by only about 10% of all mammalian species. Independent gigantism Although sauropods were in general large, gigantic size ( or more) was reached independently multiple times in their evolution. Many gigantic forms existed in the Late Jurassic (specifically the Kimmeridgian), such as the turiasaur Turiasaurus, the mamenchisaurids Mamenchisaurus and Xinjiangtitan, the diplodocoids Maraapunisaurus, Diplodocus, Apatosaurus, Supersaurus and Barosaurus, the camarasaurid Camarasaurus, and the brachiosaurids Brachiosaurus and Giraffatitan. Through the Early to Late Cretaceous, the giants Borealosaurus, Sauroposeidon, Paralititan, Argentinosaurus, Puertasaurus, Antarctosaurus, Dreadnoughtus, Notocolossus, Futalognkosaurus, Patagotitan and Alamosaurus lived, all possibly being titanosaurs. One sparsely known possible giant is Huanghetitan ruyangensis, known only from long ribs. These giant species lived from the Late Jurassic to the Late Cretaceous, appearing independently over a time span of 85 million years. Dwarfism in sauropods Two well-known island dwarf species of sauropods are the Cretaceous Magyarosaurus (whose identity as a dwarf was at one point challenged) and the Jurassic Europasaurus, both from Europe. Even though these sauropods are small, the only way to prove that they are true dwarfs is through a study of their bone histology. A study by Martin Sander and colleagues in 2006 examined eleven individuals of Europasaurus holgeri using bone histology and demonstrated that the small island species evolved through a decrease in the growth rate of long bones compared with the rates of growth in ancestral species on the mainland. Two other possible dwarfs are Rapetosaurus, which existed on the island of Madagascar, an isolated island in the Cretaceous, and Ampelosaurus, a titanosaur that lived on the Iberian peninsula, in what is now Spain and southern France. 
Amanzia, from Switzerland, might also be a dwarf, but this has yet to be proven. One of the most extreme cases of island dwarfism is found in Europasaurus, a relative of the much larger Camarasaurus and Brachiosaurus: it was only about long, an identifying trait of the species. As in all dwarf species, a reduced growth rate led to its small size. Another tiny sauropod, the saltasaurid titanosaur Ibirania, 5.7 m (18.7 ft) long, lived in a non-insular context in the Upper Cretaceous of Brazil, and is an example of nanism resulting from other ecological pressures. Paleopathology and paleoparasitology Sauropods are rarely known from preserved injuries or signs of illness, but recent discoveries show that they could suffer from such pathologies. A diplodocid specimen from the Morrison Formation referred to as "Dolly" was described in 2022 with evidence of a severe respiratory infection. Sauropod ribs from Yunyang County, Chongqing, in southwest China show evidence of rib breakage by way of traumatic fracture, bone infection, and osteosclerosis. A sauropod tibia exhibiting an initial fracture has been described from the Middle Jurassic of Yunyang County in southwestern China. Ibirania, a nanoid titanosaur fossil from Brazil, suggests that individuals of various genera were susceptible to diseases such as osteomyelitis and parasite infestations. The specimen hails from the Upper Cretaceous São José do Rio Preto Formation, Bauru Basin, and was described in the journal Cretaceous Research by Aureliano et al. (2021). Examination of the titanosaur's bones revealed what appear to be parasitic blood worms similar to the prehistoric Paleoleishmania, but 10–100 times larger, which seem to have caused the osteomyelitis. The fossil is the first known instance of an aggressive case of osteomyelitis being caused by blood worms in an extinct animal. 
History of discovery The first scraps of fossil remains now recognized as sauropods all came from England and were originally interpreted in a variety of different ways. Their relationship to other dinosaurs was not recognized until well after their initial discovery. The first sauropod fossil to be scientifically described was a single tooth known by the non-Linnaean descriptor Rutellum implicatum. This fossil was described by Edward Lhuyd in 1699, but was not recognized as a giant prehistoric reptile at the time. Dinosaurs would not be recognized as a group until over a century later. Richard Owen published the first modern scientific descriptions of sauropods in 1841, in a book and a paper naming Cardiodon and Cetiosaurus. Cardiodon was known only from two unusual, heart-shaped teeth (from which it got its name), which could not be identified beyond the fact that they came from a previously unknown large reptile. Cetiosaurus was known from slightly better, but still scrappy, remains. Owen thought at the time that Cetiosaurus was a giant marine reptile related to modern crocodiles, hence its name, which means "whale lizard". A year later, when Owen coined the name Dinosauria, he did not include Cetiosaurus and Cardiodon in that group. In 1850, Gideon Mantell recognized the dinosaurian nature of several bones assigned to Cetiosaurus by Owen. Mantell noticed that the leg bones contained a medullary cavity, a characteristic of land animals. He assigned these specimens to the new genus Pelorosaurus, and grouped it together with the dinosaurs. However, Mantell still did not recognize the relationship to Cetiosaurus. The next sauropod find to be described, and misidentified as something other than a dinosaur, was a set of hip vertebrae described by Harry Seeley in 1870. Seeley found that the vertebrae were very lightly constructed for their size and contained openings for air sacs (pneumatization). 
Such air sacs were at the time known only in birds and pterosaurs, and Seeley considered the vertebrae to come from a pterosaur. He named the new genus Ornithopsis, or "bird face", because of this. When more complete specimens of Cetiosaurus were described by Phillips in 1871, he finally recognized the animal as a dinosaur related to Pelorosaurus. However, it was not until the description of new, nearly complete sauropod skeletons from the United States (representing Apatosaurus and Camarasaurus) in 1877 that a complete picture of sauropods emerged. An approximate reconstruction of a complete sauropod skeleton was produced by artist John A. Ryder, hired by paleontologist E.D. Cope, based on the remains of Camarasaurus, though many features were still inaccurate or incomplete according to later finds and biomechanical studies. Also in 1877, Richard Lydekker named another relative of Cetiosaurus, Titanosaurus, based on an isolated vertebra. In 1878, the most complete sauropod yet was found and described by Othniel Charles Marsh, who named it Diplodocus. With this find, Marsh also created a new group to contain Diplodocus, Cetiosaurus, and their increasing roster of relatives, to differentiate them from the other major groups of dinosaurs. Marsh named this group Sauropoda, or "lizard feet". Classification The first phylogenetic definition of Sauropoda was published in 1997 by Salgado and colleagues. They defined the clade as a node-based taxon, containing "the most recent common ancestor of Vulcanodon karibaensis and Eusauropoda and all of its descendants". Later, several stem-based definitions were proposed, including one by Yates (2007), who defined Sauropoda as "the most inclusive clade that includes Saltasaurus loricatus but not Melanorosaurus readi". 
Proponents of this definition also use the clade name Gravisauria, defined as the most recent common ancestor of Tazoudasaurus naimi and Saltasaurus loricatus and all of its descendants, for the clade equivalent to Sauropoda as defined by Salgado et al. The clade Gravisauria was named by the French paleontologist Ronan Allain and the Moroccan paleontologist Najat Aquesbi in 2008, when their cladistic analysis of Tazoudasaurus, a dinosaur found by Allain, recovered the family Vulcanodontidae, comprising Tazoudasaurus and Vulcanodon, as the sister taxon of Eusauropoda, while certain species such as Antetonitrus, Gongxianosaurus and Isanosaurus fell outside Vulcanodontidae, in an even more basal position within Sauropoda. It therefore made sense to give this more derived group, comprising Vulcanodontidae and Eusauropoda, its own definition: the group formed by the last common ancestor of Tazoudasaurus and Saltasaurus (Bonaparte and Powell, 1980) and all its descendants. Allain and Aquesbi mentioned two synapomorphies (shared derived characteristics) of Gravisauria: vertebrae that are wider from side to side than from front to rear, and asymmetrical condyles at the bottom of the femur. These had previously been regarded as synapomorphies of Eusauropoda, but Allain found the same features in Tazoudasaurus as well. Gravisauria split off in the Early Jurassic, around the Pliensbachian and Toarcian, 183 million years ago, and Allain and Aquesbi thought that this was part of a much larger faunal revolution, which included the disappearance of Prosauropoda, Coelophysoidea and basal Thyreophora, and which they attributed to a worldwide mass extinction. The phylogenetic relationships of the sauropods have largely stabilised in recent years, though there are still some uncertainties, such as the placement of Euhelopus, Haplocanthosaurus, Jobaria and Nemegtosauridae. Cladogram after an analysis presented by Sander and colleagues in 2011.
Biology and health sciences
Sauropods
Animals
652517
https://en.wikipedia.org/wiki/Bougainvillea
Bougainvillea
Bougainvillea is a genus of thorny ornamental vines, bushes, and trees belonging to the four o'clock family, Nyctaginaceae. They are native to Brazil, Bolivia, Paraguay, Peru, and Argentina. There are between 4 and 22 species in the genus. The inflorescence consists of large colourful sepal-like bracts which surround three simple waxy flowers; these showy bracts have made the plant popular as an ornamental. The plant is named after the explorer Louis Antoine de Bougainville (1729–1811), after it was documented on one of his expeditions. Description The species grow tall, scrambling over other plants with their spiky thorns. They are evergreen where rainfall occurs all year, or deciduous if there is a dry season. The leaves are alternate, simple, and ovate-acuminate. The actual flower of the plant is small and generally white, but each cluster of three flowers is surrounded by three or six bracts with the bright colours associated with the plant, including pink, magenta, purple, red, orange, white, or yellow. Bougainvillea glabra is sometimes called "paper flower" because its bracts are thin and papery. The fruit is a narrow five-lobed achene. History The first European to describe these plants was Philibert Commerçon, a botanist accompanying the French Navy admiral Louis Antoine de Bougainville during his voyage of circumnavigation of the Earth. Twenty years after Commerçon's description, the genus was first published as 'Buginvillæa' in Genera Plantarum by Antoine Laurent de Jussieu in 1789. The genus was subsequently spelled in several ways until it was finally established as "Bougainvillea" in the Index Kewensis in the 1930s. Originally, B. spectabilis and B. glabra were undifferentiated until the mid-1980s, when botanists classified them as distinct species. 
In the early 19th century, these two species were the first to be introduced into Europe, and soon nurseries in France and Britain were selling these varieties in Australia and throughout the former colonies. Meanwhile, Kew Gardens distributed plants it had propagated to British colonies throughout the world. Soon thereafter, a crimson specimen in Cartagena, Colombia was added to the genus descriptions. Originally thought to be a distinct species, it was named B. buttiana in honour of the European who first encountered it. However, later studies classified it as a natural hybrid of a variety of B. glabra and possibly B. peruviana, a "local pink bougainvillea" from Peru. Natural hybrids were soon found to be common occurrences all over the world. For instance, around the 1930s, when the three species were grown together, many hybrid crosses were produced almost spontaneously in East Africa, India, the Canary Islands, Australia, North America, and the Philippines. Cultivation and uses Bougainvillea are popular ornamental plants in most areas with warm climates, including Florida, South Carolina, South India, California, and across the Mediterranean Basin. Although it is frost-sensitive and hardy only in USDA Hardiness Zones 9b and 10, bougainvillea can be used as a houseplant or hanging basket in cooler climates. In the landscape, it makes an excellent hot-season plant, and its drought tolerance makes it ideal for warm climates year-round. Its high salt tolerance makes it a natural choice for colour in coastal regions. It can be pruned into a standard, but is also grown along fence lines, on walls, in containers and hanging baskets, and as a hedge or an accent plant. Its long arching thorny branches bear heart-shaped leaves and masses of papery bracts in white, pink, orange, purple, and burgundy. Many cultivars, including double-flowered and variegated, are available. 
Many bougainvillea today are the result of interbreeding among only three out of the eighteen South American species recognised by botanists. There are over 300 varieties of bougainvillea. Because many of the hybrids have been crossed over several generations, it is difficult to identify their respective origins. Natural mutations seem to occur spontaneously throughout the world; wherever large numbers of plants are being produced, bud-sports will occur. This has led to multiple names for the same cultivar (or variety) and has added to the confusion over the names of bougainvillea cultivars. The growth rate of bougainvillea varies from slow to rapid, depending on the variety. They tend to flower all year round in equatorial regions. Elsewhere, they are seasonal, with bloom cycles typically four to six weeks. Bougainvillea grow best in dry soil, in very bright full sun and with frequent fertilisation; but they require little water once established, and in fact will not flourish if over-watered. They can be easily propagated via tip cuttings. Bougainvillea is also a very attractive genus for Bonsai enthusiasts, due to their ease of training and their radiant flowering during the spring. They can be kept as indoor houseplants in temperate regions and kept small by bonsai techniques. B. × buttiana is a garden hybrid of B. glabra and B. peruviana. It has produced numerous garden-worthy cultivars. The cultivars 'San Diego Red' and 'Mary Palmer's Enchantment' have gained the Royal Horticultural Society's Award of Garden Merit. Bougainvillea are relatively pest-free plants, but they may be susceptible to worms, snails and aphids. The larvae of some Lepidoptera species also use them as food plants, for example the giant leopard moth (Hypercompe scribonia). 
Symbolism and nomenclature Various species of Bougainvillea are the official flowers of Guam (where it is known as the Puti Tai Nobiu); Lienchiang and Pingtung Counties in Taiwan; Ipoh, Malaysia; the cities of Tagbilaran, Philippines; Camarillo, California; Laguna Niguel, California; San Clemente, California; the cities of Shenzhen, Huizhou, Zhuhai, and Jiangmen in Guangdong Province, China; Xiamen, Fujian; and Naha, Okinawa. It is also the national flower of Grenada. Native to South America, bougainvillea carry several names in the different regions where they are present. Apart from the Rioplatense Spanish santa-rita, Colombian Spanish veranera, and Peruvian Spanish papelillo, it may be variously named primavera, três-marias, sempre-lustrosa, santa-rita, ceboleiro, roseiro, roseta, riso, pataguinha, pau-de-roseira and flor-de-papel in Brazilian Portuguese. The Portuguese and Spanish names are nevertheless the most commonly used in the regions where these languages are spoken, even where the plant is introduced. Toxicity The sap of bougainvillea can cause serious skin rashes, similar to that of Toxicodendron species. Taxonomy and phylogeny As of 2010, Bougainvillea is generally placed in the tribe Bougainvilleeae (containing three genera) of the family Nyctaginaceae, with Pisonieae (containing four genera) being a sister tribe. Species According to the Catalogue of Life, there are 16 species of Bougainvillea.
Biology and health sciences
Caryophyllales
Plants
652816
https://en.wikipedia.org/wiki/Mixing%20%28process%20engineering%29
Mixing (process engineering)
In industrial process engineering, mixing is a unit operation that involves manipulation of a heterogeneous physical system with the intent to make it more homogeneous. Familiar examples include pumping of the water in a swimming pool to homogenize the water temperature, and the stirring of pancake batter to eliminate lumps (deagglomeration). Mixing is performed to allow heat and/or mass transfer to occur between one or more streams, components or phases. Modern industrial processing almost always involves some form of mixing. Some classes of chemical reactors are also mixers. With the right equipment, it is possible to mix a solid, liquid or gas into another solid, liquid or gas. A biofuel fermenter may require the mixing of microbes, gases and liquid medium for optimal yield; organic nitration requires concentrated (liquid) nitric and sulfuric acids to be mixed with a hydrophobic organic phase; production of pharmaceutical tablets requires blending of solid powders. The opposite of mixing is segregation. A classical example of segregation is the Brazil nut effect. The mathematics of mixing is highly abstract, and is a part of ergodic theory, itself a part of chaos theory. Mixing classification The type of operation and equipment used during mixing depends on the state of materials being mixed (liquid, semi-solid, or solid) and the miscibility of the materials being processed. In this context, the act of mixing may be synonymous with stirring or kneading processes. Liquid–liquid mixing Mixing of liquids occurs frequently in process engineering. The nature of the liquids to be blended determines the equipment used. Single-phase blending tends to involve low-shear, high-flow mixers to cause liquid engulfment, while multi-phase mixing generally requires the use of high-shear, low-flow mixers to create droplets of one liquid in laminar, turbulent or transitional flow regimes, depending on the Reynolds number of the flow. 
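As a minimal illustration (not from the article), the impeller Reynolds number and the corresponding flow regime can be sketched in Python; the regime limits of 10 and 10,000 used here are commonly quoted rules of thumb for stirred tanks and vary by author:

```python
def impeller_reynolds(rho, n, d, mu):
    """Impeller Reynolds number: Re = rho * N * D^2 / mu."""
    return rho * n * d**2 / mu

def flow_regime(re, laminar_limit=10.0, turbulent_limit=1e4):
    """Classify the mixing regime; the limits are rules of thumb."""
    if re < laminar_limit:
        return "laminar"
    if re > turbulent_limit:
        return "turbulent"
    return "transitional"

# Water-like fluid (1000 kg/m^3, 0.001 Pa*s), 0.5 m impeller at 2 rev/s:
re = impeller_reynolds(rho=1000.0, n=2.0, d=0.5, mu=1e-3)  # ~500,000
print(flow_regime(re))  # turbulent
```

Note that this is the impeller Reynolds number (based on impeller diameter and rotational speed), not the pipe-flow Reynolds number.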
Turbulent or transitional mixing is frequently conducted with turbines or impellers; laminar mixing is conducted with helical ribbon or anchor mixers. Single-phase blending Mixing of liquids that are miscible or at least soluble in each other occurs frequently in engineering (and in everyday life). An everyday example would be the addition of milk or cream to tea or coffee. Since both liquids are water-based, they dissolve easily in one another. The momentum of the liquid being added is sometimes enough to cause enough turbulence to mix the two, since the viscosity of both liquids is relatively low. If necessary, a spoon or paddle could be used to complete the mixing process. Blending in a more viscous liquid, such as honey, requires more mixing power per unit volume to achieve the same homogeneity in the same amount of time. Gas–gas mixing Solid–solid mixing Dry blenders are a type of industrial mixer which are typically used to blend multiple dry components until they are homogeneous. Often minor liquid additions are made to the dry blend to modify the product formulation. Blending times using dry ingredients are often short (15–30 minutes) but are somewhat dependent upon the varying percentages of each component, and the difference in the bulk densities of each. Ribbon, paddle, tumble and vertical blenders are available. Many products including pharmaceuticals, foods, chemicals, fertilizers, plastics, pigments, and cosmetics are manufactured in these designs. Dry blenders range in capacity from half-cubic-foot laboratory models to 500-cubic-foot production units. A wide variety of horsepower-and-speed combinations and optional features such as sanitary finishes, vacuum construction, special valves and cover openings are offered by most manufacturers. Blending powders is one of the oldest unit-operations in the solids handling industries. For many decades powder blending has been used just to homogenize bulk materials. 
Many different machines have been designed to handle materials with various bulk solids properties. On the basis of the practical experience gained with these different machines, engineering knowledge has been developed to construct reliable equipment and to predict scale-up and mixing behavior. Nowadays the same mixing technologies are used for many more applications: to improve product quality, to coat particles, to fuse materials, to wet, to disperse in liquid, to agglomerate, to alter functional material properties, etc. This wide range of applications of mixing equipment requires a high level of knowledge, long-term experience and extended test facilities to arrive at the optimal selection of equipment and processes. Solid-solid mixing can be performed either in batch mixers, which is the simpler form of mixing, or in certain cases in continuous dry-mixing processes, which are more complex but offer advantages in terms of segregation, capacity and validation. One example of a solid–solid mixing process is mulling foundry molding sand, where sand, bentonite clay, fine coal dust and water are mixed to a plastic, moldable and reusable mass, applied for molding and pouring molten metal to obtain sand castings that are metallic parts for automobile, machine building, construction or other industries. Mixing mechanisms In powder mixing, two different mechanisms can be distinguished: convective mixing and intensive mixing. In the case of convective mixing, material in the mixer is transported from one location to another. This type of mixing leads to a less ordered state inside the mixer; the components that must be mixed are distributed over the other components. As mixing progresses, the mixture becomes more randomly ordered, and after a certain mixing time the ultimate random state is reached. Usually this type of mixing is applied for free-flowing and coarse materials. 
A possible threat during macro mixing is the de-mixing of the components, since differences in size, shape or density of the different particles can lead to segregation. When materials are cohesive, which is the case with e.g. fine particles and also with wet material, convective mixing is no longer sufficient to obtain a randomly ordered mixture. The relatively strong inter-particle forces form lumps, which are not broken up by the mild transportation forces in the convective mixer. To decrease the lump size, additional forces are necessary; i.e., more energy-intensive mixing is required. These additional forces can either be impact forces or shear forces. Liquid–solid mixing Liquid–solid mixing is typically done to suspend coarse free-flowing solids, or to break up lumps of fine agglomerated solids. An example of the former is the mixing of granulated sugar into water; an example of the latter is the mixing of flour or powdered milk into water. In the first case, the particles can be lifted into suspension (and separated from one another) by bulk motion of the fluid; in the second, the mixer itself (or the high shear field near it) must destabilize the lumps and cause them to disintegrate. One example of a solid–liquid mixing process in industry is concrete mixing, where cement, sand, small stones or gravel and water are commingled to a homogeneous self-hardening mass, used in the construction industry. Solid suspension Suspension of solids into a liquid is done to improve the rate of mass transfer between the solid and the liquid. Examples include dissolving a solid reactant into a solvent, or suspending catalyst particles in liquid to improve the flow of reactants and products to and from the particles. The associated eddy diffusion increases the rate of mass transfer within the bulk of the fluid, and the convection of material away from the particles decreases the size of the boundary layer, where most of the resistance to mass transfer occurs. 
Axial-flow impellers are preferred for solid suspension because solid suspension needs momentum rather than shear, although radial-flow impellers can be used in a tank with baffles, which convert some of the rotational motion into vertical motion. When the solid is denser than the liquid (and therefore collects at the bottom of the tank), the impeller is rotated so that the fluid is pushed downwards; when the solid is less dense than the liquid (and therefore floats on top), the impeller is rotated so that the fluid is pushed upwards (though this is relatively rare). The equipment preferred for solid suspension produces large volumetric flows but not necessarily high shear; high flow-number turbine impellers, such as hydrofoils, are typically used. Multiple turbines mounted on the same shaft can reduce power draw. The degree of homogeneity of a solid-liquid suspension can be described by the RSD (Relative Standard Deviation of the solid volume fraction field in the mixing tank). A perfect suspension would have an RSD of 0%, but in practice an RSD of 20% or less can be sufficient for the suspension to be considered homogeneous, although this is case-dependent. The RSD can be obtained by experimental measurements or by calculations. Measurements can be performed at full scale, but this is generally impractical, so it is common to perform measurements at small scale and use a "scale-up" criterion to extrapolate the RSD from small to full scale. Calculations can be performed using computational fluid dynamics software or by using correlations built on theoretical developments, experimental measurements and/or computational fluid dynamics data. Computational fluid dynamics calculations are quite accurate and can accommodate virtually any tank and agitator design, but they require expertise and long computation times. Correlations are easy to use, but are less accurate and do not cover all possible designs. 
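As an illustrative sketch of the RSD definition above (the probe readings below are invented values, not measured data), the RSD of a set of local solid-fraction samples can be computed directly:

```python
import statistics

def rsd_percent(solid_fractions):
    """Relative standard deviation (%) of solid volume fraction samples."""
    mean = statistics.fmean(solid_fractions)
    return 100.0 * statistics.pstdev(solid_fractions) / mean

# Hypothetical probe readings of local solid volume fraction in a tank:
samples = [0.105, 0.098, 0.101, 0.096, 0.100]
rsd = rsd_percent(samples)
# Apply the ~20% rule of thumb from the text (case-dependent):
is_homogeneous = rsd <= 20.0
```

In practice the samples would come from conductivity probes, sample withdrawal, or CFD cell values rather than a hand-typed list.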
The most popular correlation is the ‘just suspended speed’ correlation published by Zwietering (1958). It is easy to use, but it is not meant for homogeneous suspension. It only provides a crude estimate of the stirring speed for ‘bad’ quality suspensions (partial suspensions) where no particle remains at the bottom for more than 1 or 2 seconds. Another equivalent correlation is the correlation from Mersmann (1998). For ‘good’ quality suspensions, some examples of useful correlations can be found in the publications of Barresi (1987), Magelli (1991), Cekinski (2010) or Macqueron (2017). Machine learning can also be used to build models far more accurate than "classical" correlations. Solid deagglomeration Very fine powders, such as titanium dioxide pigments, and materials that have been spray dried may agglomerate or form lumps during transportation and storage. Starchy materials or those that form gels when exposed to solvent can form lumps that are wetted on the outside but dry on the inside. These types of materials are not easily mixed into liquid with the types of mixers preferred for solid suspension because the agglomerate particles must be subjected to intense shear to be broken up. In some ways, deagglomeration of solids is similar to the blending of immiscible liquids, except for the fact that coalescence is usually not a problem. An everyday example of this type of mixing is the production of milkshakes from liquid milk and solid ice cream. Liquid–gas mixing Liquids and gases are typically mixed to allow mass transfer to occur. For instance, in the case of air stripping, gas is used to remove volatiles from a liquid. Typically, a packed column is used for this purpose, with the packing acting as a motionless mixer and the air pump providing the driving force. When a tank and impeller are used, the objective is typically to ensure that the gas bubbles remain in contact with the liquid for as long as possible. 
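A sketch of the Zwietering correlation in its commonly cited form, N_js = S ν^0.1 d_p^0.2 (g Δρ/ρ_L)^0.45 X^0.13 / D^0.85; the geometry constant S and the sand-in-water example values below are illustrative assumptions, not data from the article:

```python
def zwietering_njs(s, nu, dp, drho, rho_l, x, d, g=9.81):
    """Just-suspended impeller speed N_js (rev/s), Zwietering (1958) form.

    s: impeller/tank geometry constant (tabulated per configuration),
    nu: kinematic viscosity (m^2/s), dp: particle diameter (m),
    drho: solid-liquid density difference (kg/m^3),
    rho_l: liquid density (kg/m^3), x: solids loading (mass percent),
    d: impeller diameter (m).
    """
    return (s * nu**0.1 * dp**0.2 * (g * drho / rho_l)**0.45
            * x**0.13 / d**0.85)

# Illustrative case: ~200 micron sand in water, 0.3 m impeller, S ~ 4.4:
njs = zwietering_njs(s=4.4, nu=1e-6, dp=200e-6, drho=1650.0,
                     rho_l=1000.0, x=10.0, d=0.3)
```

Note the strong D^-0.85 dependence: larger impellers suspend the same solids at markedly lower speed, which is why the correlation is mainly used for scale-up estimates of partial (not homogeneous) suspension.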
This is especially important if the gas is expensive, such as pure oxygen, or diffuses slowly into the liquid. Mixing in a tank is also useful when a (relatively) slow chemical reaction is occurring in the liquid phase, and so the concentration difference in the thin layer near the bubble is close to that of the bulk. This reduces the driving force for mass transfer. If there is a (relatively) fast chemical reaction in the liquid phase, it is sometimes advantageous to disperse but not recirculate the gas bubbles, ensuring that they are in plug flow and can transfer mass more efficiently. Rushton turbines have traditionally been used to disperse gases into liquids, but newer options, such as the Smith turbine and Bakker turbine, are becoming more prevalent. One of the issues is that as the gas flow increases, more and more of the gas accumulates in the low pressure zones behind the impeller blades, which reduces the power drawn by the mixer (and therefore its effectiveness). Newer designs, such as the GDX impeller, have nearly eliminated this problem. Gas–solid mixing Gas–solid mixing may be conducted to transport powders or small particulate solids from one place to another, or to mix gaseous reactants with solid catalyst particles. In either case, the turbulent eddies of the gas must provide enough force to suspend the solid particles, which otherwise sink under the force of gravity. The size and shape of the particles are an important consideration, since different particles have different drag coefficients, and particles made of different materials have different densities. A common unit operation the process industry uses to separate gases and solids is the cyclone, which slows the gas and causes the particles to settle out. Multiphase mixing Multiphase mixing occurs when solids, liquids and gases are combined in one step. 
This may occur as part of a catalytic chemical process, in which liquid and gaseous reagents must be combined with a solid catalyst (such as hydrogenation); or in fermentation, where solid microbes and the gases they require must be well-distributed in a liquid medium. The type of mixer used depends upon the properties of the phases. In some cases, the mixing power is provided by the gas itself as it moves up through the liquid, entraining liquid with the bubble plume. This draws liquid upwards inside the plume, and causes liquid to fall outside the plume. If the viscosity of the liquid is too high to allow for this (or if the solid particles are too heavy), an impeller may be needed to keep the solid particles suspended. Basic nomenclature For liquid mixing, the nomenclature is rather standardized: Impeller Diameter, "D", is measured for industrial mixers as the maximum diameter swept around the axis of rotation. Rotational Speed, "N", is usually measured in revolutions per minute (RPM) or revolutions per second (RPS); this variable refers to the rotational speed of the impeller, as this number can differ along points of the drive train. Tank Diameter, "T", is the inside diameter of a cylindrical vessel; most mixing vessels receiving industrial mixers will be cylindrical. Power, "P", is the energy input into a system, usually by an electric motor or a pneumatic motor. Impeller Pumping Capacity, "Q", is the fluid motion resulting from impeller rotation. Constitutive equations Many of the equations used for determining the output of mixers are empirically derived, or contain empirically derived constants. Since mixers operate in the turbulent regime, many of the equations are approximations that are considered acceptable for most engineering purposes. When a mixing impeller rotates in the fluid, it generates a combination of flow and shear. 
The impeller-generated flow can be calculated with the following equation:

Q = N_Q × N × D^3

Flow numbers (N_Q) for impellers have been published in the North American Mixing Forum sponsored Handbook of Industrial Mixing. The power required to rotate an impeller can be calculated using the following equations:

P = N_p × ρ × N^3 × D^5 (turbulent regime)

P = K_p × μ × N^2 × D^3 (laminar regime)

where N_p is the (dimensionless) power number, which is a function of impeller geometry; ρ is the density of the fluid; N is the rotational speed, typically in rotations per second; D is the diameter of the impeller; K_p is the laminar power constant; and μ is the viscosity of the fluid. Note that the mixer power is strongly dependent upon the rotational speed and impeller diameter, and linearly dependent upon either the density or the viscosity of the fluid, depending on which flow regime is present. In the transitional regime, flow near the impeller is turbulent and so the turbulent power equation is used. The time required to blend a fluid to within 5% of the final concentration, θ95, can be calculated from published correlations for the turbulent, transitional and laminar regimes; the boundaries between these regimes are expressed in terms of the impeller Reynolds number, Re = ρND^2/μ. Laboratory mixing At a laboratory scale, mixing is achieved by magnetic stirrers or by simple hand-shaking. Sometimes mixing in laboratory vessels is more thorough and occurs faster than is possible industrially. Magnetic stir bars are radial-flow mixers that induce solid body rotation in the fluid being mixed. This is acceptable on a small scale, since the vessels are small and mixing therefore occurs rapidly (short blend time). A variety of stir bar configurations exist, but because of the small size and (typically) low viscosity of the fluid, it is possible to use one configuration for nearly all mixing tasks. 
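Returning to the constitutive equations, a minimal numerical sketch (assuming the standard textbook forms Q = N_Q·N·D^3, P = N_p·ρ·N^3·D^5 for turbulent flow and P = K_p·μ·N^2·D^3 for laminar flow; the Rushton-turbine figures N_p ≈ 5 and N_Q ≈ 0.72 are typical literature values used here for illustration):

```python
def impeller_flow(n_q, n, d):
    """Pumping capacity Q = N_Q * N * D^3 (m^3/s)."""
    return n_q * n * d**3

def power_turbulent(n_p, rho, n, d):
    """P = N_p * rho * N^3 * D^5 (W), turbulent regime."""
    return n_p * rho * n**3 * d**5

def power_laminar(k_p, mu, n, d):
    """P = K_p * mu * N^2 * D^3 (W), laminar regime."""
    return k_p * mu * n**2 * d**3

# Rushton-style turbine in water, D = 0.5 m at N = 2 rev/s:
p = power_turbulent(n_p=5.0, rho=1000.0, n=2.0, d=0.5)  # 1250 W
q = impeller_flow(n_q=0.72, n=2.0, d=0.5)               # 0.18 m^3/s
```

The N^3·D^5 scaling is why modest increases in speed or diameter drive large increases in motor size at production scale.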
The cylindrical stir bar can be used for suspension of solids, as seen in iodometry, deagglomeration (useful for preparation of microbiology growth medium from powders), and liquid–liquid blending. Another peculiarity of laboratory mixing is that the mixer rests on the bottom of the vessel instead of being suspended near the center. Furthermore, the vessels used for laboratory mixing are typically more widely varied than those used for industrial mixing; for instance, Erlenmeyer flasks or Florence flasks may be used in addition to the more cylindrical beaker. Mixing in microfluidics When scaled down to the microscale, fluid mixing behaves radically differently. This typically occurs at sizes from a few millimeters down to the nanometer range. At this size range, normal advection does not happen unless it is forced by a hydraulic pressure gradient. Diffusion is instead the dominant mechanism whereby two different fluids come together. Because diffusion is a relatively slow process, a number of researchers have devised ways to force the two fluids to mix. These include Y junctions, T junctions, three-way intersections and designs where the interfacial area between the two fluids is maximized. Beyond simply interfacing the two liquids, twisting channels have also been made to force the fluids to mix. These include multilayered devices where the fluids corkscrew, looped devices where the fluids flow around obstructions, and wavy devices where the channel constricts and flares out. Additionally, channels with features on the walls, such as notches or grooves, have been tried. One way to know whether mixing is happening due to advection or diffusion is by finding the Peclet number, the ratio of advection to diffusion. At high Peclet numbers (> 1), advection dominates. At low Peclet numbers (< 1), diffusion dominates. 
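This advection-versus-diffusion criterion can be computed directly; the channel dimensions and diffusivity below are illustrative assumptions typical of aqueous microfluidics:

```python
def peclet(velocity, mixing_path, diffusion_coefficient):
    """Pe = (flow velocity * mixing path) / diffusion coefficient."""
    return velocity * mixing_path / diffusion_coefficient

# Small molecule in water (D ~ 1e-9 m^2/s), 100-micron channel, 1 mm/s flow:
pe = peclet(velocity=1e-3, mixing_path=100e-6, diffusion_coefficient=1e-9)
regime = "advection-dominated" if pe > 1 else "diffusion-dominated"
```

With Pe on the order of 100 here, diffusion alone would be far too slow across the channel, which is why the geometric tricks described above (twisting, corkscrewing, grooved walls) are needed to fold the fluids together.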
Peclet number = (flow velocity × mixing path) / diffusion coefficient Industrial mixing equipment At an industrial scale, efficient mixing can be difficult to achieve. A great deal of engineering effort goes into designing and improving mixing processes. Mixing at industrial scale is done in batches (dynamic mixing), inline, or with the help of static mixers. Moving mixers are powered with electric motors that operate at standard speeds of 1800 or 1500 RPM, which is typically much faster than necessary. Gearboxes are used to reduce speed and increase torque. Some applications require the use of multi-shaft mixers, in which a combination of mixer types are used to completely blend the product. In addition to performing typical batch mixing operations, some mixing can be done continuously. Using a machine like the Continuous Processor, one or more dry ingredients and one or more liquid ingredients can be accurately and consistently metered into the machine, yielding a continuous, homogeneous mixture at the discharge of the machine. Many industries have converted to continuous mixing for many reasons, among them ease of cleaning, lower energy consumption, smaller footprint, versatility, and control. Continuous mixers, such as the twin-screw Continuous Processor, also have the ability to handle very high viscosities. Turbines A selection of turbine geometries and power numbers are shown below. Different types of impellers are used for different tasks; for instance, Rushton turbines are useful for dispersing gases into liquids, but are not very helpful for dispersing settled solids into liquid. Newer turbines, such as the Smith turbine and Bakker turbine, have largely supplanted the Rushton turbine for gas–liquid mixing. 
The power number is an empirical measure of the amount of torque needed to drive different impellers in the same fluid at constant power per unit volume; impellers with higher power numbers require more torque but operate at lower speed, while impellers with lower power numbers operate at lower torque but higher speed. Planetary mixer A planetary mixer is a device used to mix products including adhesives, pharmaceuticals, foods (including dough), chemicals, solid rocket propellants, electronics, plastics and pigments. Planetary mixers are ideal for mixing and kneading viscous pastes (up to 6 million centipoise) under atmospheric or vacuum conditions. Many options, including jacketing for heating or cooling, vacuum or pressure operation, and variable-speed drives, are available. Planetary blades each rotate on their own axes, and at the same time on a common axis, thereby providing complete mixing in a very short timeframe. Large industrial-scale planetary mixers are used in the production of solid rocket fuel for long-range ballistic missiles. They are used to blend and homogenize the components of solid rocket propellant, ensuring a consistent and stable mixture of fuel and oxidizer. ResonantAcoustic mixer ResonantAcoustic mixing (RAM) is able to mix, coat, mill, and sieve materials without impellers or blades touching the materials, typically 10 to 100 times faster than alternative technologies. It does so by generating a high level of energy (up to 100 g) while continuously seeking and operating at the resonant condition of the mechanical system. ResonantAcoustic mixers, from lab scale to industrial production to continuous mixing, are used for energetic materials like explosives, propellants, and pyrotechnic compositions, as well as pharmaceuticals, powder metallurgy, 3D printing, rechargeable battery materials, and battery recycling. Close-clearance mixers There are two main types of close-clearance mixers: anchors and helical ribbons. 
Anchor mixers induce solid-body rotation and do not promote vertical mixing, but helical ribbons do. Close-clearance mixers are used in the laminar regime, because the viscosity of the fluid overwhelms the inertial forces of the flow and prevents the fluid leaving the impeller from entraining the fluid next to it. Helical ribbon mixers are typically rotated to push material at the wall downwards, which helps circulate the fluid and refresh the surface at the wall. High shear dispersers High shear dispersers create intense shear near the impeller but relatively little flow in the bulk of the vessel. Such devices typically resemble circular saw blades and are rotated at high speed. Because of their shape, they have a relatively low drag coefficient and therefore require comparatively little torque to spin at high speed. High shear dispersers are used for forming emulsions (or suspensions) of immiscible liquids and for solid deagglomeration. Static mixers Static mixers are used when a mixing tank would be too large, too slow, or too expensive to use in a given process. Liquid whistles Liquid whistles are a kind of static mixer which pass fluid at high pressure through an orifice and subsequently over a blade. This subjects the fluid to high turbulent stresses and may result in mixing, emulsification, deagglomeration and disinfection. Other Ribbon Blender Ribbon blenders are very common in process industries for performing dry-mixing operations. The mixing is performed by two helical ribbons welded to the shaft; the helices move the product in opposite directions, thus achieving the mixing. 
V Blender Twin-Screw Continuous Blender Continuous Processor Cone Screw Blender Screw Blender Double Cone Blender Double Planetary High Viscosity Mixer Counter-rotating Double & Triple Shaft Vacuum Mixer High Shear Rotor Stator Impinging mixer Dispersion Mixers Paddle Jet Mixer Mobile Mixers Drum Blenders Intermix mixer Horizontal Mixer Hot/Cold mixing combination Vertical mixer Turbomixer Banbury mixer The Banbury mixer is a brand of internal batch mixer, named for inventor Fernley H. Banbury. The "Banbury" trademark is owned by Farrel Corporation. Internal batch mixers such as the Banbury mixer are used for mixing or compounding rubber and plastics. The original design dates back to 1916. The mixer consists of two rotating spiral-shaped blades encased in segments of cylindrical housings. These intersect so as to leave a ridge between the blades. The blades may be cored for circulation of heating or cooling. Its invention resulted in major labor and capital savings in the tire industry, doing away with the initial step of roller-milling rubber. It is also used for reinforcing fillers in a resin system.
Physical sciences
Chemical engineering
Chemistry
653006
https://en.wikipedia.org/wiki/Polycyclic%20aromatic%20hydrocarbon
Polycyclic aromatic hydrocarbon
Polycyclic aromatic hydrocarbons (PAHs) are a class of organic compounds composed of multiple aromatic rings. Most are produced by the incomplete combustion of organic matter, for example in engine exhaust fumes, tobacco smoke, incinerators, roasted meats and cereals, or when biomass burns at lower temperatures as in forest fires. The simplest representatives are naphthalene, with two aromatic rings, and the three-ring compounds anthracene and phenanthrene. PAHs are uncharged, non-polar and planar. Many are colorless. Many of them are also found in fossil fuel deposits such as coal and in petroleum. Exposure to PAHs can lead to different types of cancer, to fetal development complications, and to cardiovascular issues. Polycyclic aromatic hydrocarbons are discussed as possible starting materials for abiotic syntheses of materials required by the earliest forms of life. Nomenclature and structure The terms polyaromatic hydrocarbon and polynuclear aromatic hydrocarbon (abbreviated PNA) are also used for this concept. By definition, polycyclic aromatic hydrocarbons have multiple aromatic rings, precluding benzene from being considered a PAH. Some sources, such as the US EPA and CDC, consider naphthalene to be the simplest PAH. Other authors consider PAHs to start with the tricyclic species phenanthrene and anthracene. Most authors exclude compounds that include heteroatoms in the rings, or carry substituents. A polyaromatic hydrocarbon may have rings of various sizes, including some that are not aromatic. Those that have only six-membered rings are said to be alternant. The following are examples of PAHs that vary in the number and arrangement of their rings: Geometry Most PAHs, like naphthalene, anthracene, and coronene, are planar. This geometry is a consequence of the fact that the σ-bonds resulting from the merger of sp2 hybrid orbitals of adjacent carbons lie in the same plane as the carbon atoms.
Those compounds are achiral, since the plane of the molecule is a symmetry plane. In rare cases, PAHs are not planar. In some cases, the non-planarity may be forced by the topology of the molecule and the stiffness (in length and angle) of the carbon-carbon bonds. For example, unlike coronene, corannulene adopts a bowl shape in order to reduce bond strain. The two possible configurations, concave and convex, are separated by a relatively low energy barrier (about 11 kcal/mol). In theory, there are 51 structural isomers of coronene that have six fused benzene rings in a cyclic sequence, with two edge carbons shared between successive rings. All of them must be non-planar and have considerably higher bonding energy (computed to be at least 130 kcal/mol) than coronene; as of 2002, none of them had been synthesized. Other PAHs that might seem to be planar, considering only the carbon skeleton, may be distorted by repulsion or steric hindrance between the hydrogen atoms on their periphery. Benzo[c]phenanthrene, with four rings fused in a "C" shape, has a slight helical distortion due to repulsion between the closest pair of hydrogen atoms in the two extremal rings. This effect also causes distortion of picene. Adding another benzene ring to form dibenzo[c,g]phenanthrene creates steric hindrance between the two extreme hydrogen atoms. Adding two more rings in the same sense yields heptahelicene, in which the two extreme rings overlap. These non-planar forms are chiral, and their enantiomers can be isolated. Benzenoid hydrocarbons The benzenoid hydrocarbons have been defined as condensed polycyclic unsaturated fully-conjugated hydrocarbons whose molecules are essentially planar with all rings six-membered. Full conjugation means that all carbon atoms and carbon-carbon bonds must have the sp2 structure of benzene. This class is largely a subset of the alternant PAHs, but is considered to include unstable or hypothetical compounds like triangulene or heptacene.
As of 2012, over 300 benzenoid hydrocarbons had been isolated and characterized. Bonding and aromaticity The aromaticity varies among PAHs. According to Clar's rule, the resonance structure of a PAH that has the largest number of disjoint aromatic pi sextets (i.e., benzene-like moieties) is the most important for the characterization of the properties of that PAH. For example, phenanthrene has two Clar structures: one with just one aromatic sextet (the middle ring), and the other with two (the first and third rings). The latter, having more sextets, better represents the electronic character of the molecule. Therefore, in this molecule the outer rings have greater aromatic character whereas the central ring is less aromatic and therefore more reactive. In contrast, in anthracene the resonance structures have one sextet each, which can be at any of the three rings, and the aromaticity spreads out more evenly across the whole molecule. This difference in the number of sextets is reflected in the differing ultraviolet-visible spectra of these two isomers, as higher Clar pi-sextet counts are associated with larger HOMO-LUMO gaps; the highest-wavelength absorbance of phenanthrene is at 293 nm, while that of anthracene is at 374 nm. Three Clar structures with two sextets each are present in the four-ring chrysene structure: one having sextets in the first and third rings, one in the second and fourth rings, and one in the first and fourth rings. Superposition of these structures reveals that the aromaticity in the outer rings is greater (each has a sextet in two of the three Clar structures) compared to the inner rings (each has a sextet in only one of the three). Properties Physicochemical PAHs are nonpolar and lipophilic. Larger PAHs are generally insoluble in water, although some smaller PAHs are soluble. The larger members are also poorly soluble in organic solvents and in lipids. The larger members, e.g. perylene, are strongly colored.
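The link between the lowest-energy absorbance band and the HOMO-LUMO gap discussed above can be made concrete by converting wavelength to photon energy with E = hc/λ. A small sketch, treating the longest-wavelength band as a rough proxy for the optical gap:

```python
H_TIMES_C = 1239.84  # Planck constant times speed of light, in eV*nm

def photon_energy_ev(wavelength_nm):
    """Photon energy (eV) of an absorbance band at the given wavelength (nm)."""
    return H_TIMES_C / wavelength_nm

# Longest-wavelength absorbances quoted in the text:
phenanthrene_gap = photon_energy_ev(293)  # ~4.23 eV
anthracene_gap = photon_energy_ev(374)    # ~3.32 eV
print(f"phenanthrene ~ {phenanthrene_gap:.2f} eV, anthracene ~ {anthracene_gap:.2f} eV")
```

The larger energy for phenanthrene is consistent with Clar's rule: the isomer with more aromatic sextets has the wider HOMO-LUMO gap.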
Redox Polycyclic aromatic compounds characteristically yield radicals and anions upon treatment with alkali metals. Large PAHs form dianions as well. The redox potential correlates with the size of the PAH.

{| class="wikitable"
|+Half-cell potential of aromatic compounds against the SCE (Fc+/0)
!Compound
!Potential (V)
|-
|benzene
| −3.42
|-
|biphenyl
| −2.60 (−3.18)
|-
| naphthalene
| −2.51 (−3.1)
|-
| anthracene
| −1.96 (−2.5)
|-
| phenanthrene
| −2.46
|-
| perylene
| −1.67 (−2.2)
|-
| pentacene
| −1.35
|}

Sources Artificial The dominant sources of PAHs in the environment are from human activity: wood-burning and combustion of other biofuels such as dung or crop residues contribute more than half of annual global PAH emissions, particularly due to biofuel use in India and China. As of 2004, industrial processes and the extraction and use of fossil fuels made up slightly more than one quarter of global PAH emissions, dominating outputs in industrial countries such as the United States. A year-long sampling campaign in Athens, Greece, found that a third (31%) of PAH urban air pollution was caused by wood-burning, a share comparable to those of diesel and oil (33%) and gasoline (29%). It also found that wood-burning was responsible for nearly half (43%) of the annual PAH cancer risk (carcinogenic potential) relative to the other sources, and that wintertime PAH levels were 7 times higher than in other seasons, especially when atmospheric dispersion was low. Lower-temperature combustion, such as tobacco smoking or wood-burning, tends to generate low molecular weight PAHs, whereas high-temperature industrial processes typically generate PAHs with higher molecular weights. Incense is also a source. PAHs are typically found as complex mixtures. Natural Natural fires PAHs may result from the incomplete combustion of organic matter in natural wildfires.
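The correlation between reduction potential and PAH size shown in the table above can be illustrated with a least-squares fit. Restricting to benzene and the linear acenes (naphthalene, anthracene, pentacene), and using fused-ring count as a crude size measure, a rough sketch (illustrative only, not a rigorous electrochemical model):

```python
# (number of linearly fused rings, first reduction potential vs SCE in volts),
# values taken from the table above
data = [(1, -3.42), (2, -2.51), (3, -1.96), (5, -1.35)]

# Ordinary least-squares fit of potential against ring count
n = len(data)
sx = sum(x for x, _ in data)
sy = sum(y for _, y in data)
sxx = sum(x * x for x, _ in data)
sxy = sum(x * y for x, y in data)

slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n
print(f"potential ~ {slope:.2f} V per added ring + {intercept:.2f} V")  # ~0.50 V per ring
```

The positive slope captures the trend in the table: as the PAH grows, the reduction potential becomes markedly less negative, i.e. larger PAHs are easier to reduce.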
Substantially higher outdoor air, soil, and water concentrations of PAHs have been measured in Asia, Africa, and Latin America than in Europe, Australia, the U.S., and Canada. Fossil carbon Polycyclic aromatic hydrocarbons are primarily found in natural sources such as bitumen. PAHs can also be produced geologically when organic sediments are chemically transformed into fossil fuels such as oil and coal. The rare minerals idrialite, curtisite, and carpathite consist almost entirely of PAHs that originated from such sediments and were extracted, processed, separated, and deposited by very hot fluids. High levels of such PAHs have been detected in the Cretaceous-Tertiary (K-T) boundary, more than 100 times the level in adjacent layers. The spike was attributed to massive fires that consumed about 20% of the terrestrial above-ground biomass in a very short time. Extraterrestrial PAHs are prevalent in the interstellar medium (ISM) of galaxies in both the nearby and distant Universe and are a dominant emission mechanism in the mid-infrared wavelength range, accounting for as much as 10% of the total integrated infrared luminosity of galaxies. PAHs generally trace regions of cold molecular gas, which are optimum environments for the formation of stars. NASA's Spitzer Space Telescope and James Webb Space Telescope include instruments for obtaining both images and spectra of light emitted by PAHs associated with star formation. These images can trace the surface of star-forming clouds in our own galaxy or identify star-forming galaxies in the distant universe. In June 2013, PAHs were detected in the upper atmosphere of Titan, the largest moon of the planet Saturn. Minor sources Volcanic eruptions may emit PAHs. Certain PAHs such as perylene can also be generated in anaerobic sediments from existing organic material, although it remains undetermined whether abiotic or microbial processes drive their production.
Distribution in the environment Aquatic environments Most PAHs are insoluble in water, which limits their mobility in the environment, although PAHs sorb to fine-grained organic-rich sediments. Aqueous solubility of PAHs decreases approximately logarithmically as molecular mass increases. Two-ringed PAHs, and to a lesser extent three-ringed PAHs, dissolve in water, making them more available for biological uptake and degradation. Further, two- to four-ringed PAHs volatilize sufficiently to appear in the atmosphere predominantly in gaseous form, although the physical state of four-ring PAHs can depend on temperature. In contrast, compounds with five or more rings have low solubility in water and low volatility; they are therefore predominantly in solid state, bound to particulate air pollution, soils, or sediments. In solid state, these compounds are less accessible for biological uptake or degradation, increasing their persistence in the environment. Human exposure Human exposure varies across the globe and depends on factors such as smoking rates, fuel types in cooking, and pollution controls on power plants, industrial processes, and vehicles. Developed countries with stricter air and water pollution controls, cleaner sources of cooking (i.e., gas and electricity vs. coal or biofuels), and prohibitions of public smoking tend to have lower levels of PAH exposure, while developing and undeveloped countries tend to have higher levels. Surgical smoke plumes have been proven to contain PAHs in several independent research studies. Burning solid fuels such as coal and biofuels in the home for cooking and heating is a dominant global source of PAH emissions that in developing countries leads to high levels of exposure to indoor particulate air pollution containing PAHs, particularly for women and children who spend more time in the home or cooking. 
In industrial countries, people who smoke tobacco products, or who are exposed to second-hand smoke, are among the most highly exposed groups; tobacco smoke contributes to 90% of indoor PAH levels in the homes of smokers. For the general population in developed countries, the diet is otherwise the dominant source of PAH exposure, particularly from smoking or grilling meat or consuming PAHs deposited on plant foods, especially broad-leafed vegetables, during growth. Exposure also occurs through drinking alcohol aged in charred barrels, flavored with peat smoke, or made with roasted grains. PAHs are typically at low concentrations in drinking water. Emissions from vehicles such as cars and trucks can be a substantial outdoor source of PAHs in particulate air pollution. Geographically, major roadways are thus sources of PAHs, which may distribute in the atmosphere or deposit nearby. Catalytic converters are estimated to reduce PAH emissions from gasoline-fired vehicles by 25-fold. People can also be occupationally exposed during work that involves fossil fuels or their derivatives, wood-burning, carbon electrodes, or exposure to diesel exhaust. Industrial activity that can produce and distribute PAHs includes aluminum, iron, and steel manufacturing; coal gasification, tar distillation, shale oil extraction; production of coke, creosote, carbon black, and calcium carbide; road paving and asphalt manufacturing; rubber tire production; manufacturing or use of metal working fluids; and activity of coal or natural gas power stations. Environmental pollution and degradation PAHs typically disperse from urban and suburban non-point sources through road runoff, sewage, and atmospheric circulation and subsequent deposition of particulate air pollution. Soil and river sediment near industrial sites such as creosote manufacturing facilities can be highly contaminated with PAHs. 
Oil spills, creosote, coal mining dust, and other fossil fuel sources can also distribute PAHs in the environment. Two- and three-ringed PAHs can disperse widely while dissolved in water or as gases in the atmosphere, while PAHs with higher molecular weights can disperse locally or regionally adhered to particulate matter that is suspended in air or water until the particles land or settle out of the water column. PAHs have a strong affinity for organic carbon, and thus highly organic sediments in rivers, lakes, and the ocean can be a substantial sink for PAHs. Algae and some invertebrates such as protozoans, mollusks, and many polychaetes have limited ability to metabolize PAHs and bioaccumulate disproportionate concentrations of PAHs in their tissues; however, PAH metabolism can vary substantially across invertebrate species. Most vertebrates metabolize and excrete PAHs relatively rapidly. Tissue concentrations of PAHs do not increase (biomagnify) from the lowest to highest levels of food chains. PAHs transform slowly to a wide range of degradation products. Biological degradation by microbes is a dominant form of PAH transformation in the environment. Soil-consuming invertebrates such as earthworms speed PAH degradation, either through direct metabolism or by improving the conditions for microbial transformations. Abiotic degradation in the atmosphere and the top layers of surface waters can produce nitrogenated, halogenated, hydroxylated, and oxygenated PAHs; some of these compounds can be more toxic, water-soluble, and mobile than their parent PAHs. Urban soils The British Geological Survey reported the amount and distribution of PAH compounds including parent and alkylated forms in urban soils at 76 locations in Greater London. 
The study showed that parent (16 PAH) content ranged from 4 to 67 mg/kg (dry soil weight), with an average PAH concentration of 18 mg/kg (dry soil weight), whereas the total PAH content (33 PAH) ranged from 6 to 88 mg/kg; fluoranthene and pyrene were generally the most abundant PAHs. Benzo[a]pyrene (BaP), the most toxic of the parent PAHs, is widely considered a key marker PAH for environmental assessments; the normal background concentration of BaP in the London urban sites was 6.9 mg/kg (dry soil weight). London soils contained more stable four- to six-ringed PAHs, which were indicative of combustion and pyrolytic sources such as coal and oil burning and traffic-sourced particulates. However, the overall distribution also suggested that the PAHs in London soils had undergone weathering and been modified by a variety of pre- and post-depositional processes such as volatilization and microbial biodegradation. Peatlands Managed burning of moorland vegetation in the UK has been shown to generate PAHs which become incorporated into the peat surface. Burning of moorland vegetation such as heather initially generates high amounts of two- and three-ringed PAHs relative to four- to six-ringed PAHs in surface sediments; however, this pattern is reversed as the lower molecular weight PAHs are attenuated by biotic decay and photodegradation. Evaluation of the PAH distributions using statistical methods such as principal component analysis (PCA) enabled the study to link the source (burnt moorland) to the pathway (suspended stream sediment) and to the depositional sink (reservoir bed). Rivers, estuarine and coastal sediments Concentrations of PAHs in river and estuarine sediments vary according to a variety of factors including proximity to municipal and industrial discharge points, wind direction and distance from major urban roadways, as well as the tidal regime, which controls the diluting effect of generally cleaner marine sediments relative to freshwater discharge.
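Alongside multivariate methods such as the PCA mentioned above, source-apportionment studies often use diagnostic concentration ratios of PAH pairs. A commonly cited one, not taken from the studies described here, is fluoranthene/(fluoranthene + pyrene); the interpretation thresholds below are rules of thumb from the wider literature, and the concentrations are hypothetical:

```python
def flu_pyr_ratio(fluoranthene, pyrene):
    """Fluoranthene/(fluoranthene + pyrene) diagnostic ratio.

    Rule-of-thumb interpretation from the source-apportionment
    literature (indicative, not definitive):
      < 0.40       petrogenic (unburned petroleum)
      0.40 - 0.50  petroleum combustion
      > 0.50       biomass or coal combustion
    """
    return fluoranthene / (fluoranthene + pyrene)

# Hypothetical sediment concentrations in mg/kg (dry weight):
r = flu_pyr_ratio(2.4, 1.8)
print(f"Fl/(Fl+Py) = {r:.2f}")  # 0.57 -> consistent with combustion-derived PAHs
```

Such ratios are only suggestive on their own, which is why studies typically combine them with statistical techniques like PCA before assigning sources.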
Consequently, the concentrations of pollutants in estuaries tend to decrease at the river mouth. Understanding of sediment-hosted PAHs in estuaries is important for the protection of commercial fisheries (such as mussels) and general environmental habitat conservation, because PAHs can affect the health of suspension- and sediment-feeding organisms. River-estuary surface sediments in the UK tend to have a lower PAH content than sediments buried 10–60 cm from the surface, reflecting lower present-day industrial activity combined with improvements in environmental legislation on PAHs. Typical PAH concentrations in UK estuaries range from about 19 to 16,163 µg/kg (dry sediment weight) in the River Clyde and 626 to 3,766 µg/kg in the River Mersey. In general, estuarine sediments with a higher natural total organic carbon (TOC) content tend to accumulate PAHs because of the high sorption capacity of organic matter. A similar correspondence between PAHs and TOC has also been observed in the sediments of tropical mangroves located on the coast of southern China. Human health Cancer is a primary human health risk of exposure to PAHs. Exposure to PAHs has also been linked with cardiovascular disease and poor fetal development. Cancer PAHs have been linked to skin, lung, bladder, liver, and stomach cancers in well-established animal model studies. Specific compounds classified by various agencies as possible or probable human carcinogens are identified in the section "Regulation and Oversight" below. History Historically, PAHs contributed substantially to our understanding of adverse health effects from exposures to environmental contaminants, including chemical carcinogenesis. In 1775, Percivall Pott, a surgeon at St. Bartholomew's Hospital in London, observed that scrotal cancer was unusually common in chimney sweeps and proposed the cause as occupational exposure to soot.
A century later, Richard von Volkmann reported increased skin cancers in workers of the coal tar industry of Germany, and by the early 1900s increased rates of cancer from exposure to soot and coal tar were widely accepted. In 1915, Yamagiwa and Ichikawa were the first to experimentally produce cancers, specifically of the skin, by topically applying coal tar to rabbit ears. In 1922, Ernest Kennaway determined that the carcinogenic component of coal tar mixtures was an organic compound consisting of only carbon and hydrogen. This component was later linked to a characteristic fluorescent pattern that was similar but not identical to that of benz[a]anthracene, a PAH that was subsequently demonstrated to cause tumors. Cook, Hewett and Hieger then linked the specific spectroscopic fluorescent profile of benzo[a]pyrene to that of the carcinogenic component of coal tar, the first time that a specific compound from an environmental mixture (coal tar) was demonstrated to be carcinogenic. In the 1930s and later, epidemiologists from Japan, the UK, and the US, including Richard Doll, reported greater rates of death from lung cancer following occupational exposure to PAH-rich environments among workers in coke ovens and coal carbonization and gasification processes. Mechanisms of carcinogenesis The structure of a PAH influences whether and how the individual compound is carcinogenic. Some carcinogenic PAHs are genotoxic and induce mutations that initiate cancer; others are not genotoxic and instead affect cancer promotion or progression. PAHs that affect cancer initiation are typically first chemically modified by enzymes into metabolites that react with DNA, leading to mutations. When the DNA sequence is altered in genes that regulate cell replication, cancer can result.
Mutagenic PAHs, such as benzo[a]pyrene, usually have four or more aromatic rings as well as a "bay region", a structural pocket that increases the reactivity of the molecule to the metabolizing enzymes. Mutagenic metabolites of PAHs include diol epoxides, quinones, and radical PAH cations. These metabolites can bind to DNA at specific sites, forming bulky complexes called DNA adducts that can be stable or unstable. Stable adducts may lead to DNA replication errors, while unstable adducts react with the DNA strand, removing a purine base (either adenine or guanine). Such mutations, if they are not repaired, can transform genes encoding normal cell signaling proteins into cancer-causing oncogenes. Quinones can also repeatedly generate reactive oxygen species that may independently damage DNA. Enzymes in the cytochrome family (CYP1A1, CYP1A2, CYP1B1) metabolize PAHs to diol epoxides. PAH exposure can increase production of the cytochrome enzymes, allowing the enzymes to convert PAHs into mutagenic diol epoxides at greater rates. In this pathway, PAH molecules bind to the aryl hydrocarbon receptor (AhR) and activate it as a transcription factor that increases production of the cytochrome enzymes. Conversely, the activity of these enzymes may at times protect against PAH toxicity, an effect that is not yet well understood. Low molecular weight PAHs, with two to four aromatic hydrocarbon rings, are more potent as co-carcinogens during the promotional stage of cancer. In this stage, an initiated cell (a cell that has retained a carcinogenic mutation in a key gene related to cell replication) is removed from growth-suppressing signals from its neighboring cells and begins to clonally replicate. Low-molecular-weight PAHs that have bay or bay-like regions can dysregulate gap junction channels, interfering with intercellular communication, and can also affect mitogen-activated protein kinases that activate transcription factors involved in cell proliferation.
Closure of gap junction protein channels is a normal precursor to cell division. Excessive closure of these channels after exposure to PAHs results in removing a cell from the normal growth-regulating signals imposed by its local community of cells, thus allowing initiated cancerous cells to replicate. These PAHs do not need to be enzymatically metabolized first. Low molecular weight PAHs are prevalent in the environment, thus posing a significant risk to human health at the promotional phases of cancer. Cardiovascular disease Adult exposure to PAHs has been linked to cardiovascular disease. PAHs are among the complex suite of contaminants in tobacco smoke and particulate air pollution and may contribute to cardiovascular disease resulting from such exposures. In laboratory experiments, animals exposed to certain PAHs have shown increased development of plaques (atherogenesis) within arteries. Potential mechanisms for the pathogenesis and development of atherosclerotic plaques may be similar to the mechanisms involved in the carcinogenic and mutagenic properties of PAHs. A leading hypothesis is that PAHs may activate the cytochrome enzyme CYP1B1 in vascular smooth muscle cells. This enzyme then metabolically processes the PAHs to quinone metabolites that bind to DNA in reactive adducts that remove purine bases. The resulting mutations may contribute to unregulated growth of vascular smooth muscle cells or to their migration to the inside of the artery, which are steps in plaque formation. These quinone metabolites also generate reactive oxygen species that may alter the activity of genes that affect plaque formation. Oxidative stress following PAH exposure could also result in cardiovascular disease by causing inflammation, which has been recognized as an important factor in the development of atherosclerosis and cardiovascular disease. 
Biomarkers of exposure to PAHs in humans have been associated with inflammatory biomarkers that are recognized as important predictors of cardiovascular disease, suggesting that oxidative stress resulting from exposure to PAHs may be a mechanism of cardiovascular disease in humans. Fetal development impacts Multiple epidemiological studies of people living in Europe, the United States, and China have linked in utero exposure to PAHs, through air pollution or parental occupational exposure, with poor fetal growth, reduced immune function, and poorer neurological development, including lower IQ. Regulation and oversight Some governmental bodies, including the European Union as well as NIOSH and the United States Environmental Protection Agency (EPA), regulate concentrations of PAHs in air, water, and soil. The European Commission has restricted concentrations of 8 carcinogenic PAHs in consumer products that contact the skin or mouth. The US EPA, the US Agency for Toxic Substances and Disease Registry (ATSDR), and the European Food Safety Authority (EFSA) have identified priority polycyclic aromatic hydrocarbons on the basis of their carcinogenicity or genotoxicity and/or their ability to be monitored; several of these are considered probable or possible human carcinogens by the US EPA, the European Union, and/or the International Agency for Research on Cancer (IARC). Detection and optical properties A spectral database exists for tracking polycyclic aromatic hydrocarbons (PAHs) in the universe. Detection of PAHs in materials is often done using gas chromatography-mass spectrometry or liquid chromatography with ultraviolet-visible or fluorescence spectroscopic methods, or by using rapid-test PAH indicator strips. Structures of PAHs have been analyzed using infrared spectroscopy. PAHs possess very characteristic UV absorbance spectra. These often possess many absorbance bands and are unique for each ring structure.
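Because each ring structure has a unique UV absorbance spectrum, identification can be framed as matching an unknown spectrum against a reference library. A toy sketch using cosine similarity on spectra sampled at a common set of wavelengths; the spectra, names, and values here are hypothetical:

```python
def cosine_similarity(a, b):
    """Cosine similarity between two equal-length absorbance vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

# Toy absorbance spectra sampled on a shared wavelength grid (hypothetical data):
unknown = [0.1, 0.8, 0.3, 0.9, 0.2]
library = {
    "isomer A": [0.1, 0.8, 0.3, 0.9, 0.2],
    "isomer B": [0.9, 0.2, 0.7, 0.1, 0.6],
}

best = max(library, key=lambda name: cosine_similarity(unknown, library[name]))
print(best)  # isomer A
```

Real PAH identification relies on far denser spectra and chromatographic separation, but the matching principle, comparing band patterns against references, is the same.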
Thus, for a set of isomers, each isomer has a different UV absorbance spectrum than the others. This is particularly useful in the identification of PAHs. Most PAHs are also fluorescent, emitting characteristic wavelengths of light when they are excited (when the molecules absorb light). The extended pi-electron electronic structures of PAHs give rise to these spectra, as well as to the semiconducting and other behaviors exhibited by certain large PAHs. Origins of life PAHs may be abundant in the universe. They seem to have been formed as early as a couple of billion years after the Big Bang, and are associated with new stars and exoplanets. More than 20% of the carbon in the universe may be associated with PAHs. PAHs are considered possible starting material for the earliest forms of life. Light emitted by the Red Rectangle nebula possesses spectral signatures that suggest the presence of anthracene and pyrene. This report supported a controversial hypothesis that, as nebulae of the same type as the Red Rectangle approach the ends of their lives, convection currents cause carbon and hydrogen in the nebulae's cores to be caught in stellar winds and radiate outward. As they cool, the atoms supposedly bond to each other in various ways and eventually form particles of a million or more atoms. Adolf Witt and his team inferred that PAHs (which may have been vital in the formation of early life on Earth) can only originate in nebulae. PAHs, subjected to interstellar medium (ISM) conditions, are transformed, through hydrogenation, oxygenation, and hydroxylation, to more complex organic compounds, "a step along the path toward amino acids and nucleotides, the raw materials of proteins and DNA, respectively".
Further, as a result of these transformations, the PAHs lose their spectroscopic signature which could be one of the reasons "for the lack of PAH detection in interstellar ice grains, particularly the outer regions of cold, dense clouds or the upper molecular layers of protoplanetary disks." Low-temperature chemical pathways from simple organic compounds to complex PAHs are of interest. Such chemical pathways may help explain the presence of PAHs in the low-temperature atmosphere of Saturn's moon Titan, and may be significant pathways, in terms of the PAH world hypothesis, in producing precursors to biochemicals related to life as we know it.
Physical sciences
Aromatic hydrocarbons
Chemistry
653348
https://en.wikipedia.org/wiki/Human%20musculoskeletal%20system
Human musculoskeletal system
The human musculoskeletal system (also known as the human locomotor system, and previously the activity system) is an organ system that gives humans the ability to move using their muscular and skeletal systems. The musculoskeletal system provides form, support, stability, and movement to the body. It is made up of the bones of the skeleton, muscles, cartilage, tendons, ligaments, joints, and other connective tissue that supports and binds tissues and organs together. The musculoskeletal system's primary functions include supporting the body, allowing motion, and protecting vital organs. The skeletal portion of the system serves as the main storage system for calcium and phosphorus and contains critical components of the hematopoietic system. This system describes how bones are connected to other bones and muscle fibers via connective tissue such as tendons and ligaments. The bones provide stability to the body. Muscles keep bones in place and also play a role in the movement of bones. To allow motion, different bones are connected by joints. Cartilage prevents the bone ends from rubbing directly onto each other. Muscles contract to move the bone attached at the joint. There are, however, diseases and disorders that may adversely affect the function and overall effectiveness of the system. These diseases can be difficult to diagnose due to the close relation of the musculoskeletal system to other internal systems. The term refers to a system in which muscles are attached to an internal skeleton, an arrangement necessary for humans to move into more favorable positions. Complex issues and injuries involving the musculoskeletal system are usually handled by a physiatrist (a specialist in physical medicine and rehabilitation) or by an orthopaedic surgeon.
Subsystems Skeletal The skeletal system serves many important functions; it provides the shape and form for the body, support and protection, allows bodily movement, produces blood for the body, and stores minerals. The number of bones in the human skeletal system is a controversial topic. Humans are born with over 300 bones; however, many bones fuse together between birth and maturity. As a result, an average adult skeleton consists of 206 bones. The number of bones also varies according to the method used to derive the count: while some consider certain structures to be a single bone with multiple parts, others see them as multiple bones. There are five general classifications of bones: long bones, short bones, flat bones, irregular bones, and sesamoid bones. The human skeleton is composed of both fused and individual bones supported by ligaments, tendons, muscles and cartilage. It is a complex structure with two distinct divisions: the axial skeleton, which includes the vertebral column, and the appendicular skeleton. Function The skeletal system serves as a framework for tissues and organs to attach themselves to. This system acts as a protective structure for vital organs. Major examples of this are the brain being protected by the skull and the lungs being protected by the rib cage. Located in long bones are two distinct types of bone marrow (yellow and red). The yellow marrow has fatty connective tissue and is found in the marrow cavity. During starvation, the body uses the fat in yellow marrow for energy. The red marrow of some bones is an important site for blood cell production: approximately 2.6 million red blood cells are produced per second in order to replace existing cells that have been destroyed by the liver. Here all erythrocytes, platelets, and most leukocytes form in adults. From the red marrow, erythrocytes, platelets, and leukocytes migrate to the blood to do their special tasks.
Another function of bones is the storage of certain minerals, chiefly calcium and phosphorus. This storage "device" helps to regulate mineral balance in the bloodstream: when blood mineral levels are high, minerals are deposited in the bone; when they are low, minerals are withdrawn from it. Muscular There are three types of muscles—cardiac, skeletal, and smooth. Smooth muscles are used to control the flow of substances within the lumens of hollow organs, and are not consciously controlled. Skeletal and cardiac muscles have striations that are visible under a microscope due to the components within their cells. Only skeletal and smooth muscles are part of the musculoskeletal system, and only the skeletal muscles can move the body. Cardiac muscles are found in the heart and are used only to circulate blood; like the smooth muscles, these muscles are not under conscious control. Skeletal muscles are attached to bones and arranged in opposing groups around joints. Muscles are innervated by nerves, which conduct electrical impulses from the central nervous system and cause the muscles to contract. Contraction initiation In mammals, when a muscle contracts, a series of reactions occur. Muscle contraction is stimulated by the motor neuron sending a message to the muscles from the somatic nervous system. Depolarization of the motor neuron results in neurotransmitters being released from the nerve terminal. The space between the nerve terminal and the muscle cell is called the neuromuscular junction. These neurotransmitters diffuse across the synapse and bind to specific receptor sites on the cell membrane of the muscle fiber. When enough receptors are stimulated, an action potential is generated and the permeability of the sarcolemma is altered. This process is known as initiation. 
Tendons A tendon is a tough, flexible band of fibrous connective tissue that connects muscles to bones. The extra-cellular connective tissue between muscle fibers binds to tendons at the distal and proximal ends, and the tendon binds to the periosteum of individual bones at the muscle's origin and insertion. As muscles contract, tendons transmit the forces to the relatively rigid bones, pulling on them and causing movement. Tendons can stretch substantially, allowing them to function as springs during locomotion, thereby saving energy. Joints, ligaments and bursae Joints are structures that connect individual bones and may allow bones to move against each other to cause movement. There are three divisions of joints: diarthroses, which allow extensive mobility between two or more articular heads; amphiarthroses, which allow some movement; and synarthroses, or false joints, which are predominantly fibrous and allow little or no movement. Synovial joints, joints that are not directly joined, are lubricated by a solution called synovial fluid that is produced by the synovial membranes. This fluid lowers the friction between the articular surfaces and is kept within an articular capsule, binding the joint with its taut tissue. Ligaments A ligament is a small band of dense, white, fibrous elastic tissue. Ligaments connect the ends of bones together in order to form a joint. Most ligaments limit dislocation, or prevent certain movements that may cause breaks. Because they are elastic, they gradually lengthen when under tension; if overstretched, a ligament may be susceptible to tearing, resulting in an unstable joint. Ligaments also restrict some actions: movements such as hyperextension and hyperflexion are restricted by ligaments to an extent, and ligaments prevent movement in certain directions. Bursae A bursa is a small fluid-filled sac made of white fibrous tissue and lined with synovial membrane. 
A bursa may also be formed by a synovial membrane that extends outside of the joint capsule. Bursae provide a cushion between bones and the tendons or muscles around a joint; they are filled with synovial fluid and are found around almost every major joint of the body. Clinical significance Because many other body systems, including the vascular, nervous, and integumentary systems, are interrelated, disorders of one of these systems may also affect the musculoskeletal system and complicate the diagnosis of the disorder's origin. Diseases of the musculoskeletal system mostly encompass functional disorders or motion discrepancies; the level of impairment depends on the specific problem and its severity. In a study of hospitalizations in the United States, the most common inpatient OR procedures in 2012 involved the musculoskeletal system: knee arthroplasty, laminectomy, hip replacement, and spinal fusion. Articular (of or pertaining to the joints) disorders are the most common. However, also among the diagnoses are: primary muscular diseases, neurologic (related to the medical science that deals with the nervous system and disorders affecting it) deficits, toxins, endocrine abnormalities, metabolic disorders, infectious diseases, blood and vascular disorders, and nutritional imbalances. Disorders of muscles from another body system can bring about irregularities such as: impairment of ocular motion and control, respiratory dysfunction, and bladder malfunction. Complete paralysis, paresis, or ataxia may be caused by primary muscular dysfunctions of infectious or toxic origin; however, the primary disorder is usually related to the nervous system, with the muscular system acting as the effector organ, an organ capable of responding to a stimulus, especially a nerve impulse. One understated disorder that begins during pregnancy is pelvic girdle pain. 
It is complex and multifactorial, and likely comprises a series of sub-groups driven by different pain mechanisms, ranging from peripheral or central nervous system involvement and altered laxity or stiffness of muscles to injury of tendinous and ligamentous structures and maladaptive body mechanics.
https://en.wikipedia.org/wiki/Cockchafer
Cockchafer
The common cockchafer (Melolontha melolontha), also colloquially known as the Maybug, Maybeetle, or doodlebug, is a species of scarab beetle belonging to the genus Melolontha. It is native to Europe, and it is one of several closely related and morphologically similar species of Melolontha called cockchafers, alongside Melolontha hippocastani (the forest cockchafer). The adults and larvae feed on plants, and are regarded as serious agricultural pests of crops such as grasses and fruit trees. Adults are harmful to crops when they aggregate in large groups, while the larvae can cause severe damage and kill a plant by gnawing its roots. The cockchafer develops via metamorphosis, passing through egg, larval, pupal, and adult stages. Mating behaviour is controlled by pheromones: males swarm during the mating season while the females stay put and feed on leaves. The leaves release green leaf volatiles when they are fed on by females, which males can sense and use to locate females for mating. The larvae use both plant volatiles and CO2 to locate plant roots for food. The number of cockchafers has increased in recent years due to decreased pesticide usage. Soil tilling can be used to destroy newly hatched larvae, and entomopathogenic fungi and nematodes can effectively remove beetles at the larval stage. Distribution Cockchafers are prevalent across Europe, including in Germany, France, and the United Kingdom, and are particularly prevalent in temperate regions with suitable soil conditions for larval development. They have also been reported in parts of Asia, including Turkey and the Caucasus region. Geographical barriers, climatic conditions, and ecological factors may limit their dispersal to other continents. Description Adults Adults of M. melolontha reach sizes of in length. Behind their heads they have a black pronotum covered with short hairs. 
This black coloration distinguishes them from their close relative M. hippocastani, whose pronotum is brown. The upper body has hard, brown elytra and a black thorax, while the underside is black and partly white on the sides. They have a dark head with two antennae of ten segments each. Male cockchafers have seven "leaves" on their antennae, whereas the females have only six. Larvae Larvae have 3 stages of development over the course of 3-4 years. In the first stage, they are 10-20 mm long, then grow to 30-35 mm in the second year of development, and finally reach their full size of 40-46 mm in their final year before emerging. In some areas of Eastern Europe the larvae develop for a fourth year. They have white bodies that curve into an arc, with a black coloration at the abdomen and long, hairy, well-developed legs. They have large orange heads with strong, grasping mandibles. On their heads they have 2 small antennae which they use to smell and taste their surroundings while underground. Food resources The cockchafer feeds on the leaves of deciduous plants and fruit trees, including oak, maple, sweet chestnut, beech, plum, and walnut. The feeding behaviour of the larvae can cause severe damage to plants: they feed both on the small roots of field plants such as grains, grasses, trees, and beets, and on the larger crop rootlets. Larvae can gnaw through up to 30 cm of root each day, which quickly kills the plant. Life cycle Adults appear at the end of April or in May and live for about five to seven weeks. After about two weeks, the female begins laying eggs, which she buries about 10 to 20 cm deep in the earth. She may do this several times until she has laid between 60 and 80 eggs. Most typically, the female beetle lays her eggs in fields. The preferred food for adults is oak leaves, but they will also feed on conifer needles. The larvae, known as "chafer grubs" or "white grubs", hatch four to six weeks after the eggs are laid. 
They feed on plant roots, for instance potato roots. The grubs develop in the earth for three to four years, in colder climates even five years, and grow continually to a size of about 4–5 cm, before they pupate in early autumn and develop into adult cockchafers within six weeks. The cockchafer overwinters in the earth at depths between 20 and 100 cm, working its way to the surface only in spring. Because of their long development time as larvae, cockchafers appear in a cycle of every three or four years; the years vary from region to region. There is a larger cycle of around 30 years superimposed, in which they occur (or rather, used to occur) in unusually high numbers (tens of thousands). Enemies Predators The European mole is a natural predator of cockchafers, feeding on their larvae, which it detects using its keen sense of smell and specialised digging behaviour. This predation can help regulate cockchafer populations in mole-inhabited areas. M. melolontha adults are predated by ground beetles and ants, while larvae are predated by click beetles underground. Starlings, crows, and gulls also predate M. melolontha larvae, often after a field has been plowed. Parasites Dexia rustica is a parasitic fly that uses M. melolontha larvae as its hosts. D. rustica eggs hatch underground and the fly larvae seek out cockchafer larvae in which to hibernate over the winter; their presence ultimately kills the beetle larvae in the spring. One to six fly larvae can parasitise a single host. Behaviour Mating behaviour Males leave the soil when the temperature is favourable in April or May. Sexual dimorphism is evident in mating behaviour: at dusk, males swarm around groups of trees at forest edges, while females stay in place and feed on leaves until they reach sexual maturity. Males primarily fly around the branches looking for females to mate with. This behaviour continues for several hours each evening until darkness, over a period of about 10-20 days. 
These swarms typically cause minimal damage to the trees, but they are occasionally harmful in cherry or plum orchards because of their consumption of blossoms. Once the females have matured and mated, they return to the fields to lay their eggs in the soil. Only a third of females survive this trip, but any survivors will make a second, and occasionally third, swarming trip and return to the field to lay eggs again. Green leaf volatiles (GLVs) are a series of saturated and monounsaturated six-carbon aldehydes, alcohols, and esters released by vascular plants in response to stresses. GLVs have been found to act as a kairomone, a compound released by an organism that benefits only the receiver. They enhance the attractiveness of toluquinone, a sex pheromone in scarab beetles. Only male M. melolontha are attracted to GLVs, using their release to identify leaves that female beetles are feeding on. Females have the ability to detect GLVs, but whether this causes any change in their behaviour is unclear. M. melolontha males are more sensitive to lower GLV concentrations, possibly due to the anatomical differences between male and female antennae. Due to this phenomenon, sexual dimorphism can be observed in flight behaviour: during swarming, males hover around the foliage while females remain on twigs and branches to feed. Males then use GLVs to identify which leaves have females that they can mate with. GLVs are being investigated as a possible pest control technique to attract males and prevent mating. Pest behaviour Though adults can damage some fruit trees, M. melolontha larvae are the primary agricultural pests. Larvae hatch from their eggs 4-6 weeks after being laid and develop into adults over the course of 3-4 years. Immediately after hatching, larvae will gnaw on small roots. 
They continue feeding on roots, particularly those of grasses, cereals, and other crops, during their three larval stages, pausing only to burrow deep into the soil for winter hibernation. In their first stage, M. melolontha larvae identify roots by CO2 release and do damage only at extreme densities. In their second stage, larvae cause the most damage to crops. In their third stage, larvae do less, but still severe, damage to crops. They most prominently smell using structures on their antennae called pore plates. This structure is a thin layer of cells that covers a number of sensory units consisting of dendrite bundles. These and other olfactory organs on the head of the larva can identify CO2 and plant volatiles. Larvae have also been found to push their heads into the walls of their burrows and probe with their antennae, likely to taste the soil with bristle-like sensilla. Pest control and history Middle Ages In the Middle Ages, pest control was rare, and people had no effective means to protect their harvest. This gave rise to events that seem bizarre from a modern perspective. In 1320, for instance, cockchafers were brought to court in Avignon and sentenced to withdraw within three days onto a specially designated area, otherwise they would be outlawed. Subsequently, since they failed to comply, they were collected and killed. (Similar animal trials also occurred for many other animals in the Middle Ages.) 19th century Both the grubs and adults have a voracious appetite and thus have been and sometimes continue to be a major problem in agriculture and forestry. In the pre-industrialised era, the main mechanism to control their numbers was to collect and kill the adult beetles, thereby interrupting the cycle. They were once very abundant: in 1911, more than 20 million individuals were collected in 18 km2 of forest. Collecting adults was only a moderately successful method. In some areas and times, cockchafers were served as food. 
A 19th-century recipe from France for cockchafer soup reads: "roast one pound of cockchafers without wings and legs in sizzling butter, then cook them in a chicken soup, add some veal liver and serve with chives on a toast". A German newspaper from Fulda from the 1920s tells of students eating sugar-coated cockchafers. Cockchafer larvae can also be fried or cooked over open flames, although they require some preparation by soaking in vinegar in order to purge them of soil in their digestive tracts. A cockchafer stew is referred to in W. G. Sebald's novel The Emigrants. In Sweden the peasants looked upon the grub of the cockchafer as furnishing an unfailing prognostic whether the ensuing winter will be mild or severe; if the animal has a bluish hue (a circumstance which arises from its being replete with food), they affirm it will be mild, but if it is white, the weather will be severe: and they carry this so far as to foretell, that if the anterior be white and the posterior blue, the cold will be most severe at the beginning of the winter. Hence they call this grub Bemärkelse-mask—prognostic worm. Modern times Only with the modernisation of agriculture in the 20th century and the invention of chemical pesticides did it become possible to effectively combat the cockchafer. Combined with the transformation of many pastures into agricultural land, this has resulted in a decrease of the cockchafer to near-extinction in some areas in Europe in the 1970s. Since the 1970s, agriculture has generally reduced its use of pesticides. Because of environmental and public health concerns (pesticides may enter the food chain and thus also the human body) many chemical pesticides have been phased out in the European Union and worldwide. In recent years, the cockchafer's numbers have been increasing again, causing damage to agricultural use of over of land all over Europe (0.001% of land). 
Due to legal provisions from the European Union for the sustainable use of pesticides, aerial treatment, which had been used to successfully control M. melolontha populations, is now banned. Light traps have been successful in attracting M. melolontha adults, particularly males, when placed at height (4 m). If a peak swarming time can be identified, shaking isolated trees and collecting feeding adults can reduce the population, though this is time-consuming. Azadirachtin is a chemical that inhibits maturation feeding and egg development, but its low persistence and the difficulty of spraying it high enough in trees prevent widespread use. Soil tilling has been a historically successful method, particularly in early June when larvae are first hatching. Pre-cropping is also a promising possibility, with buckwheat being of particular interest because it can reduce grub weight and population density before the crop of interest is planted. Sex pheromones have been used for mass trapping, mating disruption, and “Attract and Kill” methods. Because the sex pheromones are produced by the beetles themselves, resistance is unlikely to develop, making this a promising method of pest control. Entomopathogens Entomopathogenic organisms—organisms that produce disease in insects—are an active area of research for the control of M. melolontha grub populations. Entomopathogenic fungi are currently being studied as one such control: Beauveria brongniartii has been found to work on Melolontha species, and B. bassiana has been successful with other agricultural pests, though there have been difficulties in determining the best strategy for applying the fungi to fields. Entomopathogenic nematodes have been found to be particularly successful at reducing populations, especially when larvae are in the first and second stages. 
Entomopathogenic bacteria carried by nematodes of the genera Steinernema and Heterorhabditis are also being investigated, but they have been difficult to apply to fields as opposed to laboratory settings. The focus on entomopathogenic bacteria has been on their symbiosis with entomopathogenic nematodes and their ability to act together as a larval control strategy. Poor results with the application of these methods have prompted intensive research into the gut enzymes and microbiome of M. melolontha to determine whether they act as a defense against entomopathogenic organisms. Intestinal components and microbiome The gut enzymes and microbiota of M. melolontha larvae allow them to exploit a variety of ecological niches unique to their phylogenetic family, namely low-energy foods such as grass roots and rotting organic matter in the soil. There are two major compartments in the scarabaeid larval intestinal tract: a tubular midgut that secretes hydrolytic enzymes for macromolecule breakdown, and a bulbous hindgut used for fermentation. High bacterial diversity between individuals of M. melolontha in the intestinal tract reflects the diversity of food sources. In the midgut, glucose is broken down and absorbed by the epithelium. Proteolytic breakdown of toxins has been shown to be a common resistance mechanism in agricultural pests, and proteolytic activity of enzymes in the midgut is hypothesised to increase resistance to entomopathogenic bacteria in the beetle larvae. Trypsin-like enzymes from the midgut of M. melolontha have been found to break down certain bacterial toxins and inactivate them. The hindgut has a high density of bacteria that ferment recalcitrant residues such as cellulose, with the byproducts being absorbed by the beetle. Acetate is a major product of this fermentation, suggesting that much of the hindgut bacterial community is homoacetogenic. 
A high abundance of species of the bacterial genus Desulfovibrio in the hindgut suggests that sulphate reduction is an important process, but the source of this sulphate in the diet is unknown. Some research on the M. melolontha microbiome has focused on increasing the entomopathogenic properties of nematodes used as pest control, due to their symbiosis with bacteria. Bacteria such as Xenorhabdus nematophila are transported by nematodes and released into the insect's midgut, where they release lytic enzymes and other antimicrobial substances to decrease competition from the beetle's native microbiome, creating an optimal environment for nematode development. Bacterial species in the midgut of M. melolontha such as Pseudomonas chlororaphis have been found to fight back, acting as antagonists to entomopathogenic bacteria. These bacteria have been identified differentially across larval stages, with P. chlororaphis usually being found in the third and final larval stage. Ecological impact Environmental factors such as temperature, humidity, and plant type have a considerable impact on the existence and behaviour of cockchafers in wooded environments. Research indicates that cockchafer populations are strongly influenced by climatic conditions, with warmer temperatures and higher humidity levels favouring their occurrence. Additionally, specific vegetation types, including deciduous trees and shrubs, provide suitable habitats for cockchafers, facilitating their survival and reproduction within forest stands. Etymology The name "cockchafer" derives from the late-17th-century usage of "cock" (in the sense of expressing size or vigour) + "chafer", which simply means an insect of this type, referring to its propensity for gnawing and damaging plants. The term "chafer" has its root in Old English ceafor or cefer, of Germanic origin and related to the Dutch kever, all of which mean "gnawer" as it relates to the jaw. 
As such, the name "cockchafer" can be understood to mean "large plant-gnawing beetle", fitting its history as a pest. In culture Children since antiquity have played with cockchafers. In ancient Greece, boys caught the insect, tied a linen thread to its feet and set it free, amusing themselves by watching it fly in spirals. English boys in Victorian times played a very similar game by sticking a pin through one of its wings. Nikola Tesla recalled that as a child he made one of his first "inventions", an "engine" made by harnessing four cockchafers in this fashion. Cockchafers appear in the fairy tales "Thumbelina" by Hans Christian Andersen and "Princess Rosette" by Madame d'Aulnoy. The cockchafer is featured in a German children's song similar to the English Ladybird, Ladybird: the verse dates back to the Thirty Years' War in the first half of the 17th century, in which Pomerania was pillaged and suffered heavily. Since World War II, it has also been associated in Germany with the closing months of that war, when Soviet troops advanced into eastern Germany. According to one source, the dumbledore in Thomas Hardy's 1899 poem An August Midnight is a cockchafer; however, in his novel The Mayor of Casterbridge, Hardy uses the dialect word dumbledore to mean a bumble bee. There have been four Royal Navy ships named HMS Cockchafer.
https://en.wikipedia.org/wiki/Wrinkle
Wrinkle
A wrinkle, also known as a rhytid, is a fold, ridge or crease in an otherwise smooth surface, such as on skin or fabric. Skin wrinkles typically appear as a result of ageing processes such as glycation, habitual sleeping positions, loss of body mass, sun damage, or temporarily, as the result of prolonged immersion in water. Age wrinkling in the skin is promoted by habitual facial expressions, aging, sun damage, smoking, poor hydration, and various other factors. In humans, it can also be prevented to some degree by avoiding excessive solar exposure and through diet (in particular through consumption of carotenoids, tocopherols and flavonoids, vitamins (A, C, D and E), essential omega-3 fatty acids, certain proteins and lactobacilli). Skin Causes for aging wrinkles Development of facial wrinkles is a kind of fibrosis of the skin. The misrepair-accumulation aging theory suggests that wrinkles develop from incorrect repairs of injured elastic fibers and collagen fibers. Repeated extension and compression of the skin causes repeated injuries to the extracellular fibers in the dermis. During the repair process, some of the broken elastic fibers and collagen fibers are not regenerated and restored but are replaced by altered fibers. When an elastic fiber is broken in an extended state, it may be replaced by a "long" collagen fiber. Accumulation of "long" collagen fibers makes part of the skin looser and stiffer, and as a consequence, a large fold of skin appears. When a "long" collagen fiber is broken in a compressed state, it may be replaced by a "short" collagen fiber. The "shorter" collagen fibers restrict the extension of the "longer" fibers, keeping them permanently folded; a small fold, namely a permanent wrinkle, then appears. Sleep wrinkles Sleep wrinkles are created and reinforced when the face is compressed against a pillow or bed surface in side or stomach sleeping positions during sleep. 
They appear in predictable locations due to the underlying superficial musculoaponeurotic system (SMAS), and are usually distinct from wrinkles of facial expression. As with wrinkles of facial expression, sleep wrinkles can deepen and become permanent over time, unless the habitual sleeping positions that cause them are altered. Water-immersion wrinkling The wrinkles that occur in skin after prolonged exposure to water are sometimes referred to as pruney fingers or water aging. This is a temporary condition in which the skin on the palms of the hands or soles of the feet becomes wrinkly. This wrinkling response may have imparted an evolutionary benefit by providing improved traction in wet conditions and a better grasp of wet objects. This hypothesis was called into question by a 2014 study that failed to reproduce any improvement in handling wet objects with wrinkled fingertips. However, a 2020 study of gripping efficiency found that wrinkles decreased the force required to grip wet objects by 20%, supporting the traction hypothesis. Prior to a 1935 study, the common explanation was based on water absorption in the keratin-laden epithelial skin when immersed in water, causing the skin to expand, resulting in a larger surface area and forcing it to wrinkle. Usually the tips of the fingers and toes are the first to wrinkle because of their thicker layer of keratin and an absence of the hairs and associated glands that secrete the protective oil called sebum. In the 1935 study, however, Lewis and Pickering were studying patients with palsy of the median nerve when they discovered that skin wrinkling did not occur in the areas of the patients' skin normally innervated by the damaged nerve. This suggested that the nervous system plays an essential role in wrinkling, so the phenomenon could not be entirely explained simply by water absorption. Recent research shows that wrinkling is related to vasoconstriction. 
Water probably initiates the wrinkling process by altering the balance of electrolytes in the skin as it diffuses into the hands and soles via their many sweat ducts. This could alter the stability of the membranes of the many neurons that synapse on the many blood vessels underneath the skin, causing them to fire more rapidly. Increased neuronal firing causes the blood vessels to constrict, decreasing the amount of fluid underneath the skin. This decrease in fluid would cause a decrease in tension, causing the skin to become wrinkly. This insight has resulted in bedside tests for nerve damage and vasoconstriction. Wrinkling is often scored by immersing the hands for 30 minutes in water or EMLA cream, with measurements at 5-minute intervals, counting the number of visible wrinkles over time. Not all healthy persons show finger wrinkling after immersion. It is therefore safe to say that sympathetic function is preserved if finger wrinkling after immersion in water is observed, but if the fingers emerge smooth it cannot be assumed that there is a lesion of the autonomic supply or of the peripheral nerves of the hand. Other animals with wrinkles Examples of wrinkles can be found in various animal species that grow loose, excess skin, particularly when they are young. Several breeds of dog, such as the Pug and the Shar Pei, have been bred to exaggerate this trait. In dogs bred for fighting, this is the result of selection for loose skin, which confers a protective advantage. Techniques for reducing the appearance of aging wrinkles Current evidence suggests that tretinoin decreases the cohesiveness of follicular epithelial cells, although the exact mode of action is unknown. Additionally, tretinoin stimulates mitotic activity and increased turnover of follicular epithelial cells. Tretinoin is better known by the brand name Retin-A. 
Topical glycosaminoglycan supplements can help to provide temporary restoration of enzyme balance to slow or prevent matrix breakdown and the consequent onset of wrinkle formation. Glycosaminoglycans (GAGs) are produced by the body to maintain structural integrity in tissues and to maintain fluid balance. Hyaluronic acid is a type of GAG that promotes collagen synthesis, repair, and hydration. GAGs serve as a natural moisturizer and lubricant between epidermal cells and inhibit the production of matrix metalloproteinases (MMPs). Dermal fillers are injectable products frequently used to correct wrinkles and other depressions in the skin. They are soft-tissue fillers designed to be injected into the skin to improve its appearance. The most common products are based on hyaluronic acid and calcium hydroxylapatite. Botulinum toxin is a neurotoxic protein produced by the bacterium Clostridium botulinum. Botox is a specific form of botulinum toxin manufactured by Allergan for both therapeutic and cosmetic use. Besides its cosmetic application, Botox is used in the treatment of other conditions including migraine headache and cervical dystonia (spasmodic torticollis), a neuromuscular disorder involving the head and neck. Dysport, manufactured by Ipsen, received FDA approval and is now used to treat cervical dystonia as well as glabellar lines in adults. In 2010, another form of botulinum toxin, one free of complexing proteins, became available to Americans: Xeomin received FDA approval for medical indications in 2010 and cosmetic indications in 2011. Botulinum toxin treats wrinkles by immobilizing the muscles that cause them. It is not appropriate for the treatment of all wrinkles; it is indicated for the treatment of glabellar lines (between the eyebrows) in adults. Any other usage is not approved by the FDA and is considered off-label use. 
Laser resurfacing is an FDA-cleared skin resurfacing procedure in which lasers are used to improve the condition of the skin. Two types of laser are used to reduce the appearance of fine lines and wrinkles on the face: ablative lasers, which remove thin layers of skin, and nonablative lasers, which stimulate collagen production. Nonablative lasers are less effective than ablative ones, but they are less invasive and recovery time is shorter. After the procedure, people may experience temporary redness, itching and swelling.
Biology and health sciences
Human anatomy
Health
654092
https://en.wikipedia.org/wiki/Diving%20bell
Diving bell
A diving bell is a rigid chamber used to transport divers from the surface to depth and back in open water, usually for the purpose of performing underwater work. The most common types are the open-bottomed wet bell and the closed bell, which can maintain an internal pressure greater than the external ambient. Diving bells are usually suspended by a cable, and lifted and lowered by a winch from a surface support platform. Unlike a submersible, the diving bell is not designed to move under the control of its occupants, or to operate independently of its launch and recovery system. The wet bell is a structure with an airtight chamber which is open to the water at the bottom, that is lowered underwater to operate as a base or a means of transport for a small number of divers. Air is trapped inside the bell by pressure of the water at the interface. These were the first type of diving chamber, and are still in use in modified form. The closed bell is a pressure vessel for human occupation, which may be used for bounce diving or saturation diving, with access to the water through a hatch at the bottom. The hatch is sealed before ascent to retain internal pressure. At the surface, this type of bell can lock on to a hyperbaric chamber where the divers live under saturation or are decompressed. The bell is mated with the chamber system via the bottom hatchway or a side hatchway, and the trunking in between is pressurized to enable the divers to transfer through to the chamber under pressure. In saturation diving the bell is merely the ride to and from the job, and the chamber system is the living quarters. If the dive is relatively short (a bounce dive), decompression can be done in the bell in exactly the same way it would be done in the chamber. A third type is the rescue bell, used for the rescue of personnel from sunk submarines which have maintained structural integrity. 
These bells may operate at atmospheric internal pressure and must withstand the ambient water pressure.

History

The diving bell is one of the earliest types of equipment for underwater work and exploration. Its use was first described by Aristotle in the 4th century BC: "they enable the divers to respire equally well by letting down a cauldron, for this does not fill with water, but retains the air, for it is forced straight down into the water." Recurring legends about Alexander the Great (including some versions of the Alexander Romance) tell how he explored the sea in some closed vessel, lowered from his ships. Their origin is hard to determine, but some of the earliest dated works are from the early Middle Ages. In 1535, Guglielmo de Lorena created and tested his own diving bell to explore a sunken vessel in a lake near Rome. De Lorena's diving bell only had space for enough oxygen for a few minutes; however, the air in his diving bell was reported to last for one to two hours, with the limiting factor being a diver's ability to withstand cold and fatigue, not lack of oxygen. The mechanism he used needed to keep the pressure inside the bell constant, supply fresh air, and remove air exhaled by the diver. To accomplish this, it is believed that de Lorena used a method similar to what would later be Edmond Halley's 1691 design. In 1616, Franz Kessler designed an improved diving bell, making the bell reach the diver's ankles, and adding windows and a ballast to the bottom. This design no longer needed to be tethered to the surface, but it is unclear whether or not it was actually built. In 1642, John Winthrop reported one Edward Bendall building two large wooden barrels, weighted with lead and open at their bottoms, to salvage the ship Mary Rose, which had exploded and sunk, blocking the harbor of Charlestown, Boston.
Bendall undertook the work on condition that he be awarded all the value of the salvage should he succeed in unblocking the harbor, or half the value he could salvage if he could not. In 1658, Albrecht von Treileben was permitted to salvage the warship Vasa, which sank in Stockholm harbor on its maiden voyage in 1628. Between 1663 and 1665 von Treileben's divers were successful in raising most of the cannon, working from a diving bell. A diving bell is mentioned in the 1663 Ballad of Gresham College (stanza 16). In late 1686, Sir William Phipps convinced investors to fund an expedition to what is now Haiti and the Dominican Republic to find sunken treasure, despite the location of the shipwreck being based entirely on rumor and speculation. In January 1687, Phipps found the wreck of the Spanish galleon Nuestra Señora de la Concepción off the coast of Santo Domingo. Some sources say they used an inverted container for the salvage operation while others say the crew was assisted by Indian divers in the shallow waters. The operation lasted from February to April 1687 during which time they salvaged jewels, some gold and 30 tons of silver which, at the time, was worth over £200,000. In 1689, Denis Papin suggested that the pressure and fresh air inside a diving bell could be maintained by a force pump or bellows. Engineer John Smeaton utilized this concept in 1789. In 1691, Dr. Edmond Halley completed plans for a diving bell capable of remaining submerged for extended periods of time, and fitted with a window for the purpose of undersea exploration. In Halley's design, atmosphere is replenished by sending weighted barrels of air down from the surface. In 1775, Charles Spalding, an Edinburgh confectioner, improved on Halley's design by adding a system of balance-weights to ease the raising and lowering of the bell, along with a series of ropes for signaling the surface crew.
Spalding and his nephew, Ebenezer Watson, later suffocated off the coast of Dublin in 1783 doing salvage work in a diving bell of Spalding's design.

Mechanics

The bell is lowered into the water by cables from a crane, gantry or A-frame attached to a floating platform or shore structure. The bell is ballasted so as to remain upright in the water and to be negatively buoyant, so that it will sink even when full of air. Hoses, supplied by gas compressors or banks of high pressure storage cylinders at the surface, provide breathing gas to the bell, serving two functions:
Fresh gas is available for breathing by the occupants.
Volume reduction of the air in an open bell due to increasing hydrostatic pressure as the bell is lowered is compensated.
Adding pressurized gas ensures that the gas space within the bell remains at constant volume as the bell descends in the water. Otherwise the bell would partially fill with water as the gas was compressed. The physics of the diving bell applies also to an underwater habitat equipped with a moon pool, which is like a diving bell enlarged to the size of a room or two, and with the water–air interface at the bottom confined to a section rather than forming the entire bottom of the structure.

Wet bell

A wet bell, or open bell, is a platform for lowering and lifting divers to and from the underwater workplace, which has an air filled space, open at the bottom, where the divers can stand or sit with their heads out of the water. The air space is at ambient pressure at all times, so there are no great pressure differences, and the greatest structural loads are usually self weight and the buoyancy of the air space. A fairly heavy ballast is often required to counteract the buoyancy of the airspace, and this is usually set low at the bottom of the bell, which helps with stability.
The base of the bell is usually a grating or deck which the divers can stand on, and folding seats may be fitted for the divers' comfort during ascent, as in-water decompression may be long. Other equipment that is carried on the bell includes cylinders with the emergency gas supply, and racks or boxes for tools and equipment to be used on the job. There may be a tackle for hoisting and supporting a disabled diver so that their head projects into the air space.

Type 1 wet bell

The type 1 wet bell does not have an umbilical supplying the bell, because the divers' umbilicals supply the divers directly from the surface, similar to a diving stage. Divers deploying from a type 1 bell will exit on the opposite side to where the umbilicals enter the bell so that the umbilicals pass through the bell and the divers can find their way back to the bell at all times by following the umbilical. Bailout from a type 1 bell is done by exiting the bell on the side that the umbilicals enter the bell so they no longer pass through the bell, leaving the divers free to surface.

Type 2 wet bell

A gas panel inside the bell is supplied by the bell umbilical and the emergency gas cylinders, and supplies the divers' umbilicals and sometimes BIBS sets. There will be racks to hang the divers' excursion umbilicals, which for this application must not be buoyant. Abandonment of a type 2 wet bell requires the divers to manage their own umbilicals as they ascend along a remaining connection to the surface.

Operation of a wet bell

The bell with divers on board is deployed from the working platform (usually a vessel) by a crane, davit, or other mechanism with a man-rated winch. The bell is lowered into the water and to the working depth at a rate recommended by the decompression schedule, and which allows the divers to equalize comfortably. Wet bells with an air space will have the air space topped up as the bell descends and the air is compressed by increasing hydrostatic pressure.
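The top-up volume described here follows directly from Boyle's law. A minimal sketch, assuming a seawater density of 1025 kg/m³, isothermal compression, and illustrative function names of my own (not from any diving standard):

```python
RHO = 1025.0    # seawater density, kg/m^3 (assumed)
G = 9.81        # gravitational acceleration, m/s^2
P0 = 101_325.0  # atmospheric pressure at the surface, Pa

def ambient_pressure(depth_m: float) -> float:
    """Absolute pressure at depth: atmospheric plus hydrostatic."""
    return P0 + RHO * G * depth_m

def topup_volume(bell_airspace_m3: float, depth_m: float) -> float:
    """Free-air volume (measured at surface pressure) that must be added
    so the bell airspace keeps its full volume at the given depth.
    Boyle's law: the trapped air alone would shrink to V * P0 / p, so
    the total surface-equivalent gas needed at depth is V * p / P0."""
    p = ambient_pressure(depth_m)
    return bell_airspace_m3 * (p / P0 - 1.0)
```

At 30 m, for example, a 1 m³ airspace needs roughly three additional cubic metres of surface air to stay dry, which is why bells are supplied from compressors or high-pressure banks rather than relying on the initial fill.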
The air will also be refreshed as required to keep the carbon dioxide level acceptable to the occupants. The oxygen content is also replenished, but this is not the limiting factor, as the oxygen partial pressure will be higher than in surface air due to the depth. When the bell is raised, the pressure will drop and excess air due to expansion will automatically spill under the edges. If the divers are breathing from the bell airspace at the time, it may need to be vented with additional air to maintain a low carbon dioxide level. The decrease in pressure is proportional to the depth as the airspace is at ambient pressure, and the ascent must be conducted according to the planned decompression schedule appropriate to the depth and duration of the diving operation.

Closed bell

A closed, or dry, bell, also known as a personnel transfer capsule or submersible decompression chamber, is a pressure vessel for human occupation which is lowered into the sea to the workplace, equalised in pressure to the environment, and opened to allow the divers in and out. These functional requirements dictate the structure and arrangement. The internal pressure requires a strong structure, and a sphere or spherically ended cylinder is most efficient for this purpose. When the bell is underwater, it must be possible for the occupants to get in or out without flooding the interior. This requires a pressure hatch at the bottom. The requirement that the bell reliably retain its internal pressure when the external pressure is lowered dictates that the hatch open inward, so that internal pressure will hold it closed. The bell is lowered through the water to working depth, so must be negatively buoyant. This may require additional ballast, which may be attached by a system that can be released from inside the bell in an emergency, without losing pressure, to allow the bell to float back to the surface.
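The remark above that oxygen partial pressure in the bell airspace exceeds its surface value, making carbon dioxide the limiting factor, can be checked with Dalton's law. A sketch assuming air (21% oxygen) and the usual rough approximation of one additional bar per 10 m of seawater:

```python
FO2_AIR = 0.21  # oxygen fraction of air

def ppo2(depth_m: float, fo2: float = FO2_AIR) -> float:
    """Oxygen partial pressure (bar) of a mix breathed at depth, by
    Dalton's law: ppO2 = gas fraction x absolute ambient pressure."""
    ambient_bar = 1.0 + depth_m / 10.0  # rough: +1 bar per 10 m seawater
    return fo2 * ambient_bar
```

At 30 m the airspace ppO₂ is about 0.84 bar, four times the surface value, so oxygen depletion is not what forces the bell to be flushed.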
Locking onto a deck decompression chamber or saturation system at the surface is possible either from the bottom or the side. Using the bell bottom hatch for this purpose has the advantage of only needing one hatch, and the disadvantage of having to lift the bell up and place it over a vertical entry to the chamber. A bell used in this way may be called a personnel transfer capsule. If decompression is done inside the bell, it may be referred to as a submersible decompression chamber. The bell bottom hatch must be wide enough for a large diver fully kitted with appropriate bailout cylinders, to get in and out without undue difficulty, and it can not be closed while the diver is outside as the umbilical is tended through the hatch by the bellman. It must also be possible for the bellman to lift the working diver in through the hatch if he is unconscious, and close the hatch after him, so that the bell can be sealed and pressurised for the ascent. A lifting tackle is usually fitted inside the bell for this purpose, and the bell may be partially flooded to assist the procedure. The internal space must be large enough for a fully kitted diver and bellman (the stand-by diver responsible for manning the bell while the working diver is locked out) to sit, and for their umbilicals to be stowed neatly on racks, and the hatch to be opened inwards while they are inside. Anything bigger will make the bell heavier than it really needs to be, so all equipment that does not need to be inside is mounted outside. This includes a framework to support the ancillary equipment and protect the bell from impact and snagging on obstacles, and the emergency gas and power supplies, which are usually racked around the framework. The emergency gas supply (EGS) is connected via manifolds to the internal gas panel. The part of the framework that keeps the lower hatch off the bottom is called the bell stage. It may be removable, which can facilitate connection to a vertical access chamber lock. 
The bell umbilical is connected to the bell via through hull fittings (hull penetrations), which must withstand all operating pressures without leaking. The internal gas panel connects to the hull penetrations and the diver's umbilicals. The umbilicals will carry main breathing gas supply, a communications cable, a pneumofathometer hose, hot water supply for suit heating, power for helmet mounted lights, and possibly gas reclaim hose and video cable. The bell umbilical will usually also carry a power cable for internal and external bell lighting. Hydraulic power lines for tools do not have to pass into the interior of the bell as they will never be used there, and tools can also be stored outside. There may be an emergency through-water communications system with a battery power supply, and a location transponder working on the international standard 37.5 kHz. The bell may also have viewports and a medical lock. A closed bell may be fitted with an umbilical cutter, a mechanism which allows the occupants to sever the bell umbilical from inside the sealed and pressurised bell in the event of an umbilical snag that prevents bell recovery. The device is typically hydraulically operated using a hand pump inside the bell, and can shear the umbilical at or just above the point where it is fastened to the top of the bell. Once cut, the bell can be raised and if the umbilical can then be recovered, it can be reconnected with only a short length lost. An external connection known as a hot stab unit which allows an emergency umbilical to be connected to maintain life support in the bell during a rescue operation may be fitted. The divers in the bell may also be monitored from the diving control point by closed circuit video, and the bell atmosphere can be monitored for volatile hydrocarbon contamination by a hyperbaric hydrocarbon analyser which can be linked to a topside repeater and set to give an alarm if the hydrocarbon levels exceed 10% of the anaesthetic level. 
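The alarm setting described above (trigger when contamination exceeds 10% of the anaesthetic level) is simple threshold logic; a sketch, with the unit (ppm) and function name my own illustrative assumptions rather than the behaviour of any particular analyser:

```python
def hydrocarbon_alarm(level_ppm: float, anaesthetic_level_ppm: float,
                      alarm_fraction: float = 0.10) -> bool:
    """Return True when hydrocarbon contamination in the bell atmosphere
    exceeds the alarm fraction (default 10%) of the anaesthetic level."""
    return level_ppm > alarm_fraction * anaesthetic_level_ppm
```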
The bell may be fitted with an external emergency battery power pack, carbon dioxide scrubber for the internal atmosphere, and air conditioner for temperature control. Power supply is typically 12 or 24V DC. A bell will be provided with equipment to rescue and treat an injured diver. This will normally include a small tackle to lift the disabled diver into the bell through the bottom hatch and secure them in an upright position if needed. A bell flooding valve, also known as a flood-up valve, may be available to partially flood the interior to aid in lifting a disabled diver into the bell. Once inside and secure, the bell is cleared of water using the blow-down valve to fill the interior with breathing gas at ambient pressure and displace the water out through the hatch. A first aid kit will be carried.

British mini-bell system

A variant of this system used in the North Sea oilfields between early 1986 and the early 90s was the Oceantech Minibell system, which was used for bell-bounce dives, and was operated as an open bell for the descent, and as a closed bell for the ascent. The divers would climb into the bell after stowing their umbilicals on outside racks, remove their helmets for outside storage, seal the bell, and return to the surface, venting to the depth of the first decompression stop. The bell would then be locked onto a deck decompression chamber, the divers transferred under pressure to complete decompression in the chamber, and the bell would be available for use for another dive.

Breathing gas distribution

Breathing gas supplies for the bell comprise a primary gas supply, a reserve gas supply and an emergency gas supply carried on the bell. The divers will also carry bailout gas in scuba cylinders, or as a semi-closed circuit rebreather, sufficient to get them back to the bell in the event of an umbilical supply failure.
Primary gas, or main gas supply, may be compressed air, which is usually supplied by a low pressure breathing air compressor, or mixed gas, which is usually provided in manifolded clusters of high-pressure storage cylinders, commonly referred to as "quads". Primary gas is connected to the main gas panel throughout the diving operation except when it fails or a problem is being corrected, during which time the divers are switched over to reserve gas. Reserve gas, or secondary gas, which is connected to the main gas panel and available for immediate use by opening the supply valve, may also be supplied by low pressure compressor, or from high pressure storage. It has the same composition as the main gas supply. Decompression gas, when used, is also supplied via the main gas panel. It may be the same gas as the primary gas, or an oxygen enriched mixture, or pure oxygen. Gas switching for in-water decompression in a wet bell is not the preferred procedure for commercial diving, as the entire breathing gas delivery system must be oxygen clean, and as a decompression chamber is required on site when a specified limit of obligatory decompression is planned, it is more convenient to do surface decompression on oxygen (SurDO2) in the chamber. The relative safety of surface decompression and in-water decompression is uncertain. Both procedures are accepted by health and safety regulatory bodies. Emergency gas is carried on the bell, usually in a small number of 50 litre high-pressure cylinders connected to the bell gas panel. This should be the same gas as the primary gas. On closed bells there is an additional supply of pure oxygen if the bell has a carbon dioxide scrubber for the bell atmosphere. On a type 2 wet bell or a closed bell this emergency gas can be distributed to the divers from the bell gas panel operated by the bellman, through the excursion umbilicals.
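The fallback order this section implies (primary, then reserve, then the bell's on-board emergency gas, with the diver's bailout as a last resort) can be sketched as a simple priority selection. The names and structure here are my own illustration, not from any diving standard:

```python
from typing import Optional

# Fallback order as described: primary, reserve, bell emergency, bailout.
FALLBACK_ORDER = ("primary", "reserve", "bell_emergency", "bailout")

def select_supply(usable: dict) -> Optional[str]:
    """Return the highest-priority breathing gas supply that is still
    usable, or None if every supply has failed (an abandon/rescue case).
    `usable` maps supply name -> whether that supply currently works."""
    for name in FALLBACK_ORDER:
        if usable.get(name, False):
            return name
    return None
```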
Each diver carries an emergency gas supply (bailout gas) sufficient to get back to the bell under any reasonably foreseeable circumstances of umbilical supply failure of primary, reserve, and bell emergency gas supplies. The main gas distribution panel is located at the control point for the diving operation, and operated by the gas man, who may also be a diver, or if the gas is air, it may be directly operated by the diving supervisor.

Bell gas panel

The bell gas panel is a manifold of valves, pipes, hoses and gauges mounted inside a closed bell, and under the canopy of a type 2 wet bell, and is operated by the bellman. When a helium reclaim system is in use, the return hose for the reclaimed gas passes through the bell gas panel and a back-pressure regulator on its way to the surface. The bell gas panel is supplied with primary and secondary gas supplies from the main gas panel through the bell umbilical, and with on-board emergency gas from the cylinders carried on the bell.

Deployment of a modern diving bell

Diving bells are deployed over the side of the vessel or platform, or through a moonpool, using a gantry or A-frame from which the clump weight and the bell are suspended. On dive support vessels with in-built saturation systems the bell may be deployed through a moon pool. The bell handling system is also known as the launch and recovery system (LARS). The bell umbilical supplies gas to the bell gas panel, and is separate from the divers' excursion umbilicals, which are connected to the gas panel on the inside of the bell. The bell umbilical is deployed from a large drum or umbilical basket and care is taken to keep the tension in the umbilical low but sufficient to remain near vertical in use and to roll up neatly during recovery, as this reduces the risk of the umbilical snagging on underwater obstructions.
Wet bell handling differs from closed bell handling in that there is no requirement to transfer the bell to and from the chamber system to make a pressure-tight connection, and that a wet bell will be required to maintain a finely controlled speed of descent and ascent and remain at a fixed depth within fairly close tolerances for the occupants to decompress at a specific ambient pressure, whereas a closed bell can be removed from the water without delay and the speed of ascent and descent is not critical. A bell diving team will usually include two divers in the bell, designated as the working diver and bellman, though they may alternate these roles during the dive. The bellman is a stand-by diver and umbilical tender from the bell to the working diver, the operator of the on-board gas distribution panel, and has an umbilical about 2 m longer than the working diver to ensure that the working diver can be reached in an emergency. This can be adjusted by tying off the umbilicals inside the bell to limit deployment length, which must often be done in any case, to prevent the divers from approaching known hazards in the water. Depending on circumstances, there may also be a surface stand-by diver, with attendant, in case there is an emergency where a surface oriented diver could assist. The team will be under the direct control of the diving supervisor, will include a winch operator, and may include a dedicated surface gas panel operator.

Clump weight

Deployment of a diving bell usually starts by lowering the clump weight, which is a large ballast weight suspended in the bight of a cable which runs from a winch, over a sheave on one side of the gantry, down to the weight, round a pair of sheaves on the sides of the weight, and back up to the other side of the gantry, where it is fastened. The weight hangs freely between the two parts of the cable, and due to its weight, hangs horizontally and keeps the cable under tension.
The bell hangs between the parts of the clump weight cable, and has a fairlead on each side which slides along the cable as it is lowered or lifted. Deployment of the bell is by a separate cable attached to the top, which runs over a sheave in the middle of the gantry. As the bell is lowered, the fairleads prevent it from rotating on the deployment cable, which would put twist into the umbilical and risk loops or snagging. The clump weight cables therefore act as guidelines or rails along which the bell is lowered to the workplace, and raised back to the platform. If the lifting winch or cable fails, and the bell ballast is released, a positively buoyant bell can float up and the cables will guide it to the surface to a position where it can be recovered relatively easily. The clump weight cable can also be used as an emergency recovery system, in which case both bell and weight are lifted together. An alternative system for preventing rotation on the lifting cable is the use of a cross-haul system, which may also be used as a means of adjusting the lateral position of the bell at working depth, and as an emergency recovery system.

Bell stage

A bell stage is an open framework below the bell which prevents the bell lower lock from getting too close to the clump weight or seabed, ensuring that there is space for the divers to safely exit and enter the bell. This can be deployed either as part of the bell, or as part of the clump weight. The bell stage may be fitted with baskets for carrying tools and equipment.

Bell handling system

A closed bell handling system is used to move the bell from the position where it is locked on to the chamber system into the water, lower it to the working depth and hold it in position without excessive movement, and recover it to the chamber system. The system used to transfer the bell on deck may be a deck trolley system, an overhead gantry or a swinging A-frame.
The system must constrain movement of the supported bell sufficiently to allow accurate location on the chamber trunking even in bad weather. A bell cursor may be used to control movement through and above the splash zone, and heave compensation gear may be used to limit vertical movement when in the water and clear of the cursor, particularly at working depth when the diver may be locked out and the bell is open to ambient pressure.

Bell cursor

A bell cursor is a device used to guide and control the motion of the bell through the air and the splash zone near the surface, where waves can move the bell significantly. It can either be a passive system which relies on additional ballast weight or an active system which uses a controlled drive system to provide vertical motion. The cursor has a cradle which locks onto the bell and which moves vertically on rails to constrain lateral movement. The bell is released and locked onto the cursor in the relatively still water below the splash zone.

Heave compensation

Heave compensation equipment is used to stabilise the depth of the bell by counteracting vertical movement of the handling system caused by movements of the platform, and usually also maintains correct tension on the guide wires. It is not usually essential, depending on the stability of the platform.

Cross-hauling

Cross-hauling systems are cables from an independent lifting device which are intended to be used to move the bell laterally from a point directly below the LARS, and may also be used to limit rotation and as an emergency bell recovery system.

Use with hyperbaric chambers

Commercial diving contractors generally use a closed bell in conjunction with a surface hyperbaric chamber. These have safety and ergonomic advantages and allow decompression to be carried out after the bell has been raised to the surface and back on board the diving support vessel. Closed bells are often used in saturation diving and undersea rescue operations.
The diving bell would be connected via the mating flange of an airlock to the deck decompression chamber or saturation system for transfer under pressure of the occupants. Use of hyperbaric chambers underwater can be dangerous, as they are prone to catching fire from the inside.

Air-lock diving bells

The air lock diving-bell plant was a purpose-built barge for the laying, examination and repair of moorings for battleships at Gibraltar harbour. It was designed by Siebe Gorman of Lambeth and Forrestt & Co. Ltd of Wivenhoe in Essex, who built and supplied it in 1902 to the British Admiralty. The vessel came about from the specific conditions at Gibraltar. The heavy harbour moorings have three chains extending out radially along the seabed from a central ring, each terminating in a large anchor. Most harbours have a soft seabed, and it is usual to lay down moorings by settling anchors in the mud, clay or sand, but this could not be done in Gibraltar harbour, where the seabed is hard rock. In operation the barge would be towed over the work site, moored in place with anchors, and the bell would be lowered vertically to the bottom, and the water displaced by pumping. The work teams entered the bell through an airlock in the central access shaft. Working in ordinary clothes they could dig out anchorings for the moorings. The German service barge Carl Straat is similar in concept, but the bell is lowered by swinging the access tube. Carl Straat was built in 1963 for the Waterways and Shipping Directorate West in Münster. The 6 m × 4 m × 2.5 m bell is accessible through a 2 m diameter tube and an airlock. A pantograph system keeps the bell and internal stairs level at all depths. Maximum working depth is 10 m. The vessel is used on those inland waterways which have locks large enough to accommodate its 52 m length overall, 11.8 m beam and 1.6 m draft.

Rescue bell

Diving bells have been used for submarine rescue.
The closed, dry bell is designed to seal against the deck of the submarine above an escape hatch. Water in the space between the bell and the submarine is pumped out, and the pressure difference holds the bell against the submarine, so the hatches can be opened to allow occupants to leave the submarine and enter the bell. The hatches are then closed, the bell skirt flooded to release it from the submarine, and the bell with its load of survivors is hoisted back to the surface, where the survivors exit and the bell may return for the next group. The internal pressure in the bell is usually kept at atmospheric pressure to minimise run time by eliminating the need for decompression, so the seal between the bell skirt and the submarine deck is critical to the safety of the operation. This seal is provided by using a flexible sealing material, usually a type of rubber, which is pressed firmly against the smooth hatch surround by the pressure differential when the skirt is pumped out.

Observation bell

An observation bell is a closed bell, generally operated with internal pressure at atmospheric pressure, which provides an observation platform that can be lowered to depth with one or more occupants who can observe the environment through viewports, but are generally not provided with a means of interacting physically with the outside environment. The first observation bell was one of the first modern bells constructed in the late 19th century. The bathysphere and observation bell are similar structures. A steel bathysphere created in 1930 by William Beebe and Otis Barton had three crystal glass windows made for observation. Observation bells for shallower depths generally use different designs to bathyspheres.

Bell diving skills and procedures

Routine procedures for bell diving include preparation of the bell for the dive, descent and ascent, and monitoring of the working diver by the bellman.
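Returning to the rescue bell: the seal holds because of the pressure difference across the dewatered skirt, and the resulting force is large even at modest depth. A rough sketch, assuming seawater density of 1025 kg/m³ and that the pumped-out skirt space sits at the bell's atmospheric internal pressure (both assumptions mine):

```python
RHO_SW = 1025.0    # seawater density, kg/m^3 (assumed)
G = 9.81           # gravitational acceleration, m/s^2
P_ATM = 101_325.0  # atmospheric pressure, Pa

def seal_force(depth_m: float, sealed_area_m2: float,
               internal_pa: float = P_ATM) -> float:
    """Net force (N) pressing the rescue bell skirt onto the submarine:
    (ambient water pressure - pressure in the dewatered skirt space)
    multiplied by the sealed area."""
    ambient = P_ATM + RHO_SW * G * depth_m
    return (ambient - internal_pa) * sealed_area_m2
```

At 100 m, for instance, each square metre of sealed area is pressed down by roughly a meganewton, which is why the mating surfaces and sealing material, rather than any clamping mechanism, carry the safety of the operation.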
The bellman is responsible for ensuring that the bell and its occupants are ready for descent or ascent, for communications with the surface, for tending the working diver's umbilical, and for operation of the bell gas panel. A wet bell ascent usually includes decompression stops in the water, and sometimes surface decompression. Closed bell procedures also include locking in and locking out at depth, and transfer under pressure between bell and the saturation system or a deck decompression chamber. Emergency bell procedures include dynamic positioning alarm and runout response, emergency bell gas panel operations, such as surface gas supply failure or contaminated surface gas supply, both of which require bailout to onboard gas, hot water supply failure, and rescue of the working diver by the bellman. Voice communications failure requires appropriate use of emergency light and gas signals. Bell abandonment may be necessary if a wet bell cannot be raised, but saturation divers in a closed bell must be rescued in the bell or to another bell as they cannot be surfaced in-water.

Hazards

A closed bell that has been depressurised for maintenance access will probably retain residual diving breathing gas mixture, which will usually be hypoxic at normal atmospheric pressure, and could cause anyone who enters to lose consciousness quite rapidly. Helium based mixtures are buoyant and require active flushing with a strong flow of air, followed by testing for oxygen partial pressure before entry. The bell atmosphere can be contaminated by materials brought in by a diver who was exposed to the contaminants during the lock-out. These will depend on the working environment, and may include petrochemicals. This is a greater problem with closed bells.

Diver training

Divers qualified to work from bells are trained in the skills and procedures relevant to the type of bell they will be expected to work from.
Open bells are generally used for surface-oriented surface-supplied deep air diving, and closed bells are used for saturation diving and surface-oriented mixed gas diving. These skills include the standard procedures for the deployment of the working diver from the bell, the tending of the working diver from the bell by the bellman, and the emergency and rescue procedures for both working diver and bellman. There are considerable similarities and significant differences in these procedures between open and closed bell diving. Underwater habitats As noted above, a further extension of the wet bell concept is the moon-pool-equipped underwater habitat, where divers may spend long periods in dry comfort while acclimated to the increased pressure experienced underwater. Because they do not need to return to the surface between excursions into the water, divers can reduce the need for decompression (gradual reduction of pressure) after each excursion, which is required to avoid problems with nitrogen bubbles being released from the bloodstream (the bends, also known as caisson disease). Such problems can occur at pressures greater than , corresponding to a depth of of water. Divers in an ambient pressure habitat will require decompression when they return to the surface. This is a form of saturation diving. In nature The diving bell spider, Argyroneta aquatica, is a spider which lives entirely under water, even though it could survive on land. Since the spider must breathe air, it constructs from silk a habitat like an open diving bell, which it attaches to an underwater plant. The spider collects air in a thin layer around its body, trapped by dense hairs on its abdomen and legs. It transports this air to its diving bell to replenish the air supply in the bell. This allows the spider to remain in the bell for long periods, where it waits for its prey.
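The dependence of ambient pressure on depth, which underlies the need for decompression discussed above, can be sketched with the standard hydrostatic relation. This is an illustrative sketch only; the seawater density and the example depth are assumed values, not figures taken from the article.

```python
# Hydrostatic pressure at depth -- an illustrative sketch, not from the article.
# Assumes a typical seawater density and standard surface atmospheric pressure.
ATM_PA = 101_325          # atmospheric pressure at the surface, in pascals
RHO_SEAWATER = 1025.0     # kg/m^3, an assumed typical seawater density
G = 9.81                  # gravitational acceleration, m/s^2

def ambient_pressure(depth_m: float) -> float:
    """Absolute ambient pressure (Pa) at the given depth of seawater."""
    return ATM_PA + RHO_SEAWATER * G * depth_m

# Ambient pressure roughly doubles at about 10 m of seawater.
print(round(ambient_pressure(10) / ATM_PA, 2))
```

At about 10 m of seawater the ambient pressure is roughly twice the surface pressure, which is why even modest depths can require decompression after long exposures.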
https://en.wikipedia.org/wiki/Dental%20plaque
Dental plaque
Dental plaque is a biofilm of microorganisms (mostly bacteria, but also fungi) that grows on surfaces within the mouth. It is a sticky colorless deposit at first, but when it forms tartar, it is often brown or pale yellow. It is commonly found between the teeth, on the front of teeth, behind teeth, on chewing surfaces, along the gumline (supragingival), or below the gumline at the cervical margins (subgingival). Dental plaque is also known as microbial plaque, oral biofilm, dental biofilm, dental plaque biofilm or bacterial plaque biofilm. Bacterial plaque is one of the major causes of dental decay and gum disease. Differences in the composition of the dental plaque microbiota have been observed between men and women, particularly in the presence of periodontitis. Progression and build-up of dental plaque can give rise to tooth decay – the localised destruction of the tissues of the tooth by acid produced from the bacterial degradation of fermentable sugar – and periodontal problems such as gingivitis and periodontitis; hence it is important to disrupt the mass of bacteria and remove it. Plaque control and removal can be achieved with correct daily or twice-daily tooth brushing and use of interdental aids such as dental floss and interdental brushes. Oral hygiene is important because dental biofilms may become acidic, causing demineralization of the teeth (dental caries), or harden into dental calculus (tartar). Calculus cannot be removed through tooth brushing or with interdental aids, but only through professional cleaning. Plaque formation Dental plaque is a biofilm that attaches to tooth surfaces, restorations and prosthetic appliances (including dentures and bridges) if left undisturbed. Understanding the formation, composition and characteristics of plaque helps in its control.
An acquired pellicle is a layer of saliva, composed mainly of glycoproteins, that forms shortly after cleaning of the teeth or exposure of new teeth. Bacteria then attach to the pellicle layer, form micro-colonies, and mature on the tooth, which can result in oral diseases. The following table provides a more detailed (six-step) explanation of biofilm formation: Components of plaque Different types of bacteria are normally present in the mouth. These bacteria, as well as leukocytes, neutrophils, macrophages, and lymphocytes, are part of the normal oral cavity and contribute to the individual's health. Approximately 80–90% of the weight of plaque is water. While 70% of the dry weight is bacteria, the remaining 30% consists of polysaccharides and glycoproteins. Bacteria The bulk of the microorganisms that form the biofilm are Streptococcus mutans and other anaerobes, though the precise composition varies by location in the mouth. Examples of such anaerobes include Fusobacterium and Actinobacteria. S. mutans and other anaerobes are the initial colonisers of the tooth surface, and play a major role in the establishment of the early biofilm community. Streptococcus mutans uses the enzyme glucansucrase to convert sucrose into a sticky, extracellular, dextran-based polysaccharide that allows the bacteria to cohere, forming plaque. (Sucrose is the only sugar that bacteria can use to form this sticky polysaccharide.) These microorganisms all occur naturally in the oral cavity and are normally harmless. However, failure to remove plaque by regular tooth-brushing allows them to proliferate unchecked and build up in a thick layer, which, through their ordinary metabolism, can cause various dental diseases in the host. Those microorganisms nearest the tooth surface typically obtain energy by fermenting dietary sucrose; during fermentation they begin to produce acids. The bacterial equilibrium position varies at different stages of formation.
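The composition figures above (80–90% water, with the dry weight split roughly 70% bacteria and 30% polysaccharides and glycoproteins) can be turned into a small worked example. The sample mass is made up, and the 85% water fraction is an assumed value within the quoted range.

```python
# Illustrative arithmetic for the plaque composition figures quoted above.
# The 100 mg sample mass and the 85% water fraction are assumptions.
def dry_weight_breakdown(wet_mass_mg: float, water_fraction: float = 0.85):
    """Split a plaque sample's mass (mg) into water, bacteria and matrix."""
    dry = wet_mass_mg * (1.0 - water_fraction)
    return {
        "water": wet_mass_mg * water_fraction,
        "bacteria": dry * 0.70,                       # ~70% of dry weight
        "polysaccharides_glycoproteins": dry * 0.30,  # ~30% of dry weight
    }

print(dry_weight_breakdown(100.0))
```

For a hypothetical 100 mg sample at 85% water, this gives 85 mg water, 10.5 mg bacteria and 4.5 mg polysaccharides and glycoproteins.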
Below is a summary of the bacteria that may be present during the phases of plaque maturation:
Early biofilm: primarily Gram-positive cocci
Older biofilm (3–4 days): increased numbers of filaments and fusiforms
4–9 days undisturbed: more complex flora with rods, filamentous forms
7–14 days: Vibrio species, spirochetes, more Gram-negative organisms
Dental plaque as a biofilm Dental plaque is considered a biofilm adhered to the tooth surface. It is a carefully organised microbial community with a particular structure and function. Plaque is rich in species: about 1,000 different bacterial species have been recognised using modern techniques. A clean tooth surface is immediately coated by the salivary pellicle, which acts as an adhesive. This allows the first bacteria (early colonisers) to attach to the tooth, then colonise and grow. After some growth of the early colonisers, the biofilm becomes more hospitable to other species of bacteria, known as late colonisers.
Early colonisers: mainly Streptococcus species (60–90%), Eikenella spp., Haemophilus spp., Prevotella spp., Propionibacterium spp., Capnocytophaga spp., Veillonella spp.
Late colonisers: Aggregatibacter actinomycetemcomitans, Prevotella intermedia, Eubacterium spp., Treponema spp., Porphyromonas gingivalis
Fusobacterium nucleatum is found between the early and late colonisers, linking them together. Some salivary components are crucial for the plaque ecosystem, such as salivary alpha-amylase, which plays a role in binding and adhesion. Proline-rich proteins (PRPs) and statherins are also involved in the formation of plaque. Supragingival biofilm Supragingival biofilm is dental plaque that forms above the gums, and is the first kind of plaque to form after the brushing of the teeth. It commonly forms in between the teeth, in the pits and grooves of the teeth and along the gums. It is made up of mostly aerobic bacteria, meaning these bacteria need oxygen to survive.
If plaque remains on the tooth for a longer period of time, anaerobic bacteria begin to grow in this plaque. Subgingival biofilm Subgingival biofilm is plaque that is located under the gums. It occurs after the formation of the supragingival biofilm, by a downward growth of the bacteria from above the gums to below. This plaque is mostly made up of anaerobic bacteria, meaning that these bacteria will only survive if there is no oxygen. As this plaque attaches in a pocket under the gums, the bacteria are not exposed to oxygen in the mouth and will therefore thrive if not removed. The extracellular matrix contains proteins, long-chain polysaccharides and lipids. The most common reasons for ecosystem disruption are the ecological factors discussed in the environment section. The bacteria that adapt best to the changed environment come to dominate it. Often, this leads to opportunistic pathogens which may cause dental caries and periodontal disease. Pathogenic bacteria that have the potential to cause dental caries flourish in acidic environments; those that have the potential to cause periodontal disease flourish in a slightly alkaline environment. Antibodies to the oral pathogens Campylobacter rectus, Veillonella parvula and Prevotella melaninogenica have been associated with hypertension. Environment Unlike other parts of the body, tooth surfaces are uniquely hard and non-shedding. The warm, moist environment of the mouth, together with the presence of teeth, therefore makes a good environment for the growth and development of dental plaque. The main ecological factors that contribute to plaque formation are pH, saliva, temperature and redox reactions. The normal pH range of saliva is between 6 and 7, and plaque biofilm is known to flourish in a pH between 6.7 and 8.3. This indicates that the natural environment of the mouth provided by saliva is ideal for the growth of bacteria in the dental plaque.
Saliva acts as a buffer, which helps to maintain the pH in the mouth between 6 and 7. In addition to acting as a buffer, saliva and gingival crevicular fluid contain primary nutrients including amino acids, proteins and glycoproteins. These feed the bacteria involved in plaque formation. The host diet plays only a minor role in providing nutrients for the resident microflora. The normal temperature of the mouth ranges between 35 and 36 °C, and a two-degree (°C) change has been shown to drastically shift the dominant species in the plaque. Redox reactions are carried out by aerobic bacteria. This keeps the oxygen levels in the mouth at a semi-stable homeostatic condition, which allows the bacteria to survive. Consequences of plaque build-up Gingivitis Gingivitis is an inflammatory lesion, mediated by host-parasite interactions, that remains localised to the gingival tissue; it is a common result of plaque build-up around the gingival tissues. The bacteria found in the biofilm elicit a host response resulting in localized inflammation of the tissue. This is characterized by the cardinal signs of inflammation, including a red, puffy appearance of the gums and bleeding during brushing or flossing. Gingivitis due to plaque is reversible by removal of the plaque. However, if left for an extended period of time, the inflammation may begin to affect the supporting tissues, in a progression referred to as periodontitis. The gingivitis response is a protective mechanism, averting periodontitis in many cases. Periodontitis Periodontitis is an infection of the gums which leads to bone destruction around the teeth in the jaw. Periodontitis occurs after gingivitis has been established, but not all individuals who have gingivitis will get periodontitis.
Plaque accumulation is vital in the progression of periodontitis, as the bacteria in plaque release enzymes which attack the bone and cause it to break down, while osteoclasts in the bone break down the bone as a way to prevent further infection. This can be treated with strict oral hygiene, such as tooth brushing and cleaning in between the teeth, as well as surgical debridement completed by a dental professional. Diseases linked to periodontitis Accumulated bacteria, due to the onset of periodontitis from dental plaque, may gain access to distant sites in the body through the circulatory and respiratory systems, potentially contributing to various systemic diseases and conditions. Because of the infectious nature of the bacteria hosted within the oral cavity, bacteria produced there can spread through the body and cause adverse health conditions. Bacteria gain access through the ulcerated epithelium of the periodontal pocket that results from accumulation of infection within the gingiva. Conditions and diseases can include:
Atheromas
Cardiovascular disease
Respiratory disease
Diabetes mellitus
Caries Dental caries is an infectious disease caused primarily by Streptococcus mutans, characterized by acid demineralization of the enamel, which can progress to further breakdown of the more organic, inner dental tissue (dentin). The bacterial community mainly consists of acidogenic and acid-tolerating species (e.g. mutans streptococci and lactobacilli), while other species with relevant characteristics may also be involved. Everybody is susceptible to caries, but the probability of development depends on the patient's individual disease indicators, risk factors, and preventive factors.
Factors that are considered high-risk for developing carious lesions on the teeth include:
Low fluoride exposure
Time, length, and frequency of sugar consumption
Quality of tooth cleaning
Fluctuations in salivary flow rates and composition
Behavior of the individual
Quality and composition of biofilms
Organic acids released from dental plaque lead to demineralization of the adjacent tooth surface, and consequently to dental caries. Saliva is also unable to penetrate the build-up of plaque and thus cannot act to neutralize the acid produced by the bacteria and remineralize the tooth surface. Detection of plaque build-up There are two main methods of detecting dental plaque in the oral cavity: through the application of a disclosing gel or tablet, and/or visually through observation. Plaque is usually detected clinically with plaque disclosing agents, which contain a dye that turns bright red to indicate plaque build-up. It is important for an individual to be aware of what to look for when doing a self-assessment for dental plaque. Everyone has dental plaque; however, the severity of the build-up and the consequences of not removing the plaque can vary. Plaque disclosing gel Plaque disclosing products, also known as disclosants, make plaque clinically visible. Clean surfaces of the teeth do not absorb the disclosant; only rough surfaces do. Plaque disclosing gels can be used either at home or in the dental clinic. Before using these products, check with a general practitioner for any allergies to iodine, food colouring or any other ingredients that may be present. These gels provide a visual aid in assessing plaque biofilm presence and can also show the maturity of the dental plaque. Disclosing tablets Disclosing tablets are similar to disclosing gels, except that they are placed in the mouth and chewed for approximately one minute.
The remaining tablet or saliva is then spat out. Disclosing tablets will show the presence of plaque, but will often not show the level of maturity of the plaque. Disclosing tablets are often prescribed or given to patients with orthodontic appliances for use before and after tooth brushing to ensure optimal cleaning. They are also helpful educational tools for young children or patients who are struggling to remove dental plaque in certain areas. Disclosing gels and tablets are useful for individuals of all ages in ensuring efficient dental plaque removal. Visual or tactile detection Dental biofilm begins to form on the tooth only minutes after brushing. It can be difficult to see dental plaque on the hard tissue surfaces; however, it can be felt as a rough surface. It is often felt as a thick, fur-like deposit that may present as a yellow, tan or brown stain. These deposits are commonly found on teeth or dental appliances such as orthodontic brackets. Dental plaque is most commonly assessed in the dental clinic, where dental instruments are able to scrape up some plaque. The most common areas where patients find plaque are between the teeth and along the cervical margins. Treatments Mouthwash has been a commonly used method for controlling dental plaque accumulation. Several studies suggest that mouthwash containing alcohol may not be the best option: alcohol-containing mouthwashes are not substantially more effective than alcohol-free mouthwashes, and there is some evidence to suggest that alcohol-containing mouthwashes increase the likelihood of oral cancers. The move away from alcohol has prompted many mouthwash brands to develop new mouthwashes with essential oils. In 2018, a study was done on the effectiveness of a commercially available essential oil mouth rinse. A placebo and a negative control were used, with the negative control being mouthwash without essential oils.
Three groups of healthy volunteers were induced with experimental gingivitis, used their respective mouthwash, and were monitored for three weeks. The results showed that the commercial mouthwash with essential oils did significantly better on plaque scores. However, the plaque scores for the essential oil mouthwash were not low enough to prevent gingivitis, and the researchers concluded that the benefit of essential oil mouthwash is questionable and requires further research. Research done by the US National Institutes of Health in 2022 studied the antimicrobial properties and effects of a lemongrass essential oil mouthwash. They found that lemongrass was a natural, herbal material that was a good substitute for alcohol in mouthwash. The stability of the lemongrass allows it to have antimicrobial properties against the organisms that cause plaque, and a decrease in plaque formation lowers the chances of gingivitis occurring. A study conducted in 2022, with a sample of 209 participants, examined the effect of using a mouthwash that contained a mixture of four essential oils versus just brushing and flossing. It showed that after 12 weeks, those who rinsed with the essential oil mouthwash had significantly reduced plaque and improved gingivitis compared to the groups that only brushed and flossed. A meta-analysis conducted in 2021 reviewed the effectiveness of various mouthwashes and their active ingredients on plaque. The American Dental Association database was used to collect studies, and a total of 22 papers were selected for the overview. Four of the papers selected, all meta-analyses, showed that essential oils had substantial antiplaque activity. The researchers concluded that essential oils and chlorhexidine are the two ingredients most effective for maintaining good oral health.
A study involving 20 participants found that mouthwash containing Magnolia grandiflora bark extract performed significantly better than placebo at reducing the prevalence of Streptococcus mutans. Dental plaque in dogs and cats Dental plaque is extremely common in domestic animals such as dogs and cats. However, the bacteria associated with canine and feline plaque appear to be different from those in humans. Plaque causes periodontal inflammation and triggers the animal's immune system. Two common forms of periodontal disease are gingivitis, the inflammation of the gums, and periodontitis, in which the inflammation of gingivitis progresses to severe gum disease and can ultimately cause the loss of the tooth. Animals affected by periodontitis experience irritation, and the disease, although centred on the gums, has been shown to affect nearby organs. Periodontal disease is strongly associated with age and weight: the older and heavier the animal, the more likely it is to develop the disease. A study of 14 dogs (9 female and 5 male) of varying breeds, aged 1 to 14, found the disease in 50% of them, emphasising its relation to age and showing no relation to breed or sex. Treatment and prevention The antibiotics commonly used for animals are antimicrobials such as clindamycin, amoxicillin-clavulanate, and amoxicillin. These antimicrobials are commonly used in dental procedures for animals in the United States. A study covering a total of 818,150 dogs and cats showed that antimicrobials are widely promoted for treating periodontitis. Other studies have established that food debris is not a significant factor in the formation of dental plaque. In a 4-year study, a Beagle dog fed a strict diet without any oral hygiene developed gingivitis in just a few weeks.
However, some dogs were put on a similar diet but with the hygiene precaution of daily tooth brushing, and showed no signs of gingivitis. Owners' perspectives Animal owners have varying perspectives on what causes dental problems in their pets. Some believe that dry or moistened animal foods negatively impact dental health, while others believe that marrowbones can lead to fractures and discomfort, or can strengthen oral health. Differences between breeds are another point on which owners disagree: many perceive that certain breeds are more prone to dental health issues than others, while periodontitis is considered a smaller factor. Some owners express sentiments like "my dog has good teeth for their age", reflecting the idea that dental health worsens as animals grow older and heavier. Importance of oral care It is important to care for pets not only by keeping them clean and providing them with healthy foods, but also by maintaining oral cleanliness to avoid discomfort and disease. Hence, veterinarians often recommend oral healthcare products for affected pets.
https://en.wikipedia.org/wiki/Sperm
Sperm
Sperm (pl.: sperm or sperms) is the male reproductive cell, or gamete, in anisogamous forms of sexual reproduction (forms in which there is a larger, female reproductive cell and a smaller, male one). Animals produce motile sperm with a tail known as a flagellum, which are known as spermatozoa, while some red algae and fungi produce non-motile sperm cells, known as spermatia. Flowering plants contain non-motile sperm inside pollen, while some more basal plants like ferns and some gymnosperms have motile sperm. Sperm cells form during the process known as spermatogenesis, which in amniotes (reptiles and mammals) takes place in the seminiferous tubules of the testicles. This process involves the production of several successive sperm cell precursors, starting with spermatogonia, which differentiate into spermatocytes. The spermatocytes then undergo meiosis, reducing their chromosome number by half, which produces spermatids. The spermatids then mature and, in animals, construct a tail, or flagellum, which gives rise to the mature, motile sperm cell. This whole process occurs constantly and takes around 3 months from start to finish. Sperm cells cannot divide and have a limited lifespan, but after fusion with egg cells during fertilization, a new organism begins developing, starting as a totipotent zygote. The human sperm cell is haploid, so that its 23 chromosomes can join the 23 chromosomes of the female egg to form a diploid cell with 46 paired chromosomes. In mammals, sperm is stored in the epididymis and released through the penis in semen during ejaculation. The word sperm is derived from the Greek word σπέρμα, sperma, meaning "seed". Evolution It is generally accepted that isogamy is the ancestral state from which sperm and eggs evolved. Because there are no fossil records of the evolution of sperm and eggs from isogamy, there is a strong emphasis on mathematical models to understand the evolution of sperm.
A widespread hypothesis states that sperm evolved rapidly, but there is no direct evidence that sperm evolved at a fast rate or before other male characteristics. Sperm in animals Function The main sperm function is to reach the ovum and fuse with it to deliver two sub-cellular structures: (i) the male pronucleus, which contains the genetic material, and (ii) the centrioles, structures that help organize the microtubule cytoskeleton. The nuclear DNA in sperm cells is haploid, that is, sperm contribute only one copy of each paternal chromosome pair. Mitochondria in human sperm contain no or very little DNA, because mtDNA is degraded while sperm cells are maturing; hence they typically do not contribute any genetic material to their offspring. Anatomy The mammalian sperm cell can be divided into two parts connected by a neck:
Head: contains the nucleus with densely coiled chromatin fibers, surrounded anteriorly by a thin, flattened sac called the acrosome, which contains enzymes used for penetrating the female egg. It also contains vacuoles.
Tail: also called the flagellum, is the longest part and capable of wave-like motion that propels sperm for swimming and aids in the penetration of the egg. The tail was formerly thought to move symmetrically in a helical shape.
Neck: also called the connecting piece, contains one typical centriole and one atypical centriole, such as the proximal centriole-like.
The midpiece has a central filamentous core with many mitochondria spiralled around it, used for ATP production for the journey through the female cervix, uterus, and oviducts. During fertilization, the sperm provides three essential parts to the oocyte: (1) a signalling or activating factor, which causes the metabolically dormant oocyte to activate; (2) the haploid paternal genome; (3) the centriole, which is responsible for forming the centrosome and microtubule system.
Origin The spermatozoa of animals are produced through spermatogenesis inside the male gonads (testicles) via meiotic division. The initial stage of spermatozoon production takes around 70 days to complete. The process starts with the production of spermatogonia from germ cell precursors. These divide and differentiate into spermatocytes, which undergo meiosis to form spermatids. In the spermatid stage, the sperm develops the familiar tail. Full maturation takes around a further 60 days, at which point the cell is called a spermatozoon. Sperm cells are carried out of the male body in a fluid known as semen. Human sperm cells can survive within the female reproductive tract for more than 5 days post coitus. Semen is produced in the seminal vesicles, prostate gland and urethral glands. In 2016, scientists at Nanjing Medical University claimed they had produced cells resembling mouse spermatids from mouse embryonic stem cells artificially. They injected these spermatids into mouse eggs and produced pups. Sperm quality Sperm quantity and quality are the main parameters in semen quality, which is a measure of the ability of semen to accomplish fertilization. Thus, in humans, it is a measure of fertility in a man. The genetic quality of sperm, as well as its volume and motility, all typically decrease with age. DNA double-strand breaks in sperm increase with age. Apoptosis also decreases with age, suggesting that the increase in damaged DNA of sperm as men age occurs partly as a result of less efficient cell selection (apoptosis) operating during or after spermatogenesis. DNA damages present in sperm cells in the period after meiosis but before fertilization may be repaired in the fertilized egg, but if not repaired, can have serious deleterious effects on fertility and the developing embryo. Human sperm cells are particularly vulnerable to free radical attack and the generation of oxidative DNA damage, such as that from 8-Oxo-2'-deoxyguanosine.
The postmeiotic phase of mouse spermatogenesis is very sensitive to environmental genotoxic agents, because as male germ cells form mature sperm they progressively lose the ability to repair DNA damage. Irradiation of male mice during late spermatogenesis can induce damage that persists for at least 7 days in the fertilizing sperm cells, and disruption of maternal DNA double-strand break repair pathways increases sperm cell-derived chromosomal aberrations. Treatment of male mice with melphalan, a bifunctional alkylating agent frequently employed in chemotherapy, induces DNA lesions during meiosis that may persist in an unrepaired state as germ cells progress through DNA repair-competent phases of spermatogenic development. Such unrepaired DNA damage in sperm cells can, after fertilization, lead to offspring with various abnormalities. Sperm size Related to sperm quality is sperm size, at least in some animals. For instance, the sperm of some species of fruit fly (Drosophila) are up to 5.8 cm long—about 20 times as long as the fly itself. Longer sperm cells are better than their shorter counterparts at displacing competitors from the female's seminal receptacle. The benefit to females is that only healthy males carry "good" genes that can produce long sperm in sufficient quantities to outcompete their competitors. Market for human sperm Some sperm banks hold up to of sperm. In addition to ejaculation, it is possible to extract sperm through testicular sperm extraction. On the global market, Denmark has a well-developed system of human sperm export. This success comes mainly from the reputation of Danish sperm donors for being of high quality and from the fact that Danish law, in contrast with the law in the other Nordic countries, gives donors the choice of being either anonymous or non-anonymous to the receiving couple.
Furthermore, Nordic sperm donors tend to be tall and highly educated and have altruistic motives for their donations, partly due to the relatively low monetary compensation in Nordic countries. More than 50 countries worldwide are importers of Danish sperm, including Paraguay, Canada, Kenya, and Hong Kong. However, the Food and Drug Administration (FDA) of the US has banned import of any sperm, motivated by a risk of transmission of Creutzfeldt–Jakob disease, although such a risk is insignificant, since artificial insemination is very different from the route of transmission of Creutzfeldt–Jakob disease. The prevalence of Creutzfeldt–Jakob disease for donors is at most one in a million, and if the donor was a carrier, the infectious proteins would still have to cross the blood-testis barrier to make transmission possible. History Sperm were first observed in 1677 by Antonie van Leeuwenhoek using a microscope. He described them as being animalcules (little animals), probably due to his belief in preformationism, which held that each sperm contained a fully formed but small human. Forensic analysis Ejaculated fluids are detected by ultraviolet light, irrespective of the structure or colour of the surface. Sperm heads, e.g. from vaginal swabs, are still detected by microscopy using the "Christmas Tree Stain" method, i.e., Kernechtrot-Picroindigocarmine (KPIC) staining. Sperm in plants Sperm cells in algal and many plant gametophytes are produced in male gametangia (antheridia) via mitotic division. In flowering plants, sperm nuclei are produced inside pollen. Motile sperm cells Motile sperm cells typically move via flagella and require a water medium in order to swim toward the egg for fertilization. In animals most of the energy for sperm motility is derived from the metabolism of fructose carried in the seminal fluid. This takes place in the mitochondria located in the sperm's midpiece (at the base of the sperm head).
These cells cannot swim backwards due to the nature of their propulsion. The uniflagellated sperm cells (with one flagellum) of animals are referred to as spermatozoa, and are known to vary in size. Motile sperm are also produced by many protists and the gametophytes of bryophytes, ferns and some gymnosperms such as cycads and ginkgo. The sperm cells are the only flagellated cells in the life cycle of these plants. In many ferns and lycophytes, cycads and ginkgo they are multi-flagellated (carrying more than one flagellum). In nematodes, the sperm cells are amoeboid and crawl, rather than swim, towards the egg cell. Non-motile sperm cells Non-motile sperm cells called spermatia lack flagella and therefore cannot swim. Spermatia are produced in a spermatangium. Because spermatia cannot swim, they depend on their environment to carry them to the egg cell. Some red algae, such as Polysiphonia, produce non-motile spermatia that are spread by water currents after their release. The spermatia of rust fungi are covered with a sticky substance. They are produced in flask-shaped structures containing nectar, which attract flies that transfer the spermatia to nearby hyphae for fertilization in a mechanism similar to insect pollination in flowering plants. Fungal spermatia (also called pycniospores, especially in the Uredinales) may be confused with conidia. Conidia are spores that germinate independently of fertilization, whereas spermatia are gametes that are required for fertilization. In some fungi, such as Neurospora crassa, spermatia are identical to microconidia as they can perform both functions of fertilization as well as giving rise to new organisms without fertilization. Sperm nuclei In almost all embryophytes, including most gymnosperms and all angiosperms, the male gametophytes (pollen grains) are the primary mode of dispersal, for example via wind or insect pollination, eliminating the need for water to bridge the gap between male and female. 
Each pollen grain contains a spermatogenous (generative) cell. Once the pollen lands on the stigma of a receptive flower, it germinates and starts growing a pollen tube through the carpel. Before the tube reaches the ovule, the nucleus of the generative cell in the pollen grain divides and gives rise to two sperm nuclei, which are then discharged through the tube into the ovule for fertilization. In some protists, fertilization also involves sperm nuclei, rather than cells, migrating toward the egg cell through a fertilization tube. Oomycetes form sperm nuclei in a syncytial antheridium surrounding the egg cells. The sperm nuclei reach the eggs through fertilization tubes, similar to the pollen tube mechanism in plants.

Sperm centrioles

Most sperm cells have centrioles in the sperm neck. The sperm of many animals have two typical centrioles, known as the proximal centriole and the distal centriole. Some animals (including humans and bovines) have a single typical centriole, the proximal centriole, as well as a second centriole with an atypical structure. Mice and rats have no recognizable sperm centrioles. The fruit fly Drosophila melanogaster has a single centriole and an atypical centriole named the proximal centriole-like.

Sperm tail formation

The sperm tail is a specialized type of cilium (also known as a flagellum). In many animals the sperm tail is formed through the unique process of cytosolic ciliogenesis, in which all or part of the sperm tail's axoneme is formed in the cytoplasm or gets exposed to the cytoplasm.
Biology and health sciences
Animal reproduction
null
2325044
https://en.wikipedia.org/wiki/Impact%20%28mechanics%29
Impact (mechanics)
In mechanics, an impact is when two bodies collide. During this collision, both bodies decelerate. The deceleration causes a high force or shock, applied over a short time period. A high force, over a short duration, usually causes more damage to both bodies than a lower force applied over a proportionally longer duration. At normal speeds, during a perfectly inelastic collision, an object struck by a projectile will deform, and this deformation will absorb most or all of the force of the collision. Viewed from a conservation of energy perspective, the kinetic energy of the projectile is changed into heat and sound energy, as a result of the deformations and vibrations induced in the struck object. However, these deformations and vibrations cannot occur instantaneously. A high-velocity collision (an impact) does not provide sufficient time for these deformations and vibrations to occur. Thus, the struck material behaves as if it were more brittle than it would otherwise be, and the majority of the applied force goes into fracturing the material. Or, another way to look at it is that materials actually are more brittle on short time scales than on long time scales: this is related to time-temperature superposition. Impact resistance decreases with an increase in the modulus of elasticity, which means that stiffer materials will have less impact resistance. Resilient materials will have better impact resistance. Different materials can behave in quite different ways in impact when compared with static loading conditions. Ductile materials like steel tend to become more brittle at high loading rates, and spalling may occur on the reverse side to the impact if penetration doesn't occur. The way in which the kinetic energy is distributed through the section is also important in determining its response. 
Projectiles apply a Hertzian contact stress at the point of impact to a solid body, with compression stresses under the point, but with bending loads a short distance away. Since most materials are weaker in tension than compression, this is the zone where cracks tend to form and grow.

Applications

A nail is pounded with a series of impacts, each by a single hammer blow. These high-velocity impacts overcome the static friction between the nail and the substrate. A pile driver achieves the same end, although on a much larger scale, the method being commonly used during civil construction projects to make building and bridge foundations. An impact wrench is a device designed to impart torque impacts to bolts to tighten or loosen them. At normal speeds, the forces applied to the bolt would be dispersed, via friction, to the mating threads. However, at impact speeds, the forces act on the bolt to move it before they can be dispersed. In ballistics, bullets utilize impact forces to puncture surfaces that could otherwise resist substantial forces. A rubber sheet, for example, behaves more like glass at typical bullet speeds. That is, it fractures, and does not stretch or vibrate. The field of applications of impact theory ranges from the optimization of material processing, impact testing and the dynamics of granular media to medical applications related to the biomechanics of the human body, especially the hip and knee joints. It also has vast applications in the automotive and military industries.

Impacts causing damage

Road traffic accidents usually involve impact loading, such as when a car hits a traffic bollard, water hydrant or tree, the damage being localized to the impact zone. When vehicles collide, the damage increases with the relative velocity of the vehicles, the damage increasing as the square of the velocity, since it is the impact kinetic energy (½mv²) which is the variable of importance.
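The quadratic dependence of damage on velocity noted above can be made concrete with a short calculation. This is an illustrative sketch; the mass and speeds are arbitrary example values, not figures from this article:

```python
def kinetic_energy(mass_kg, velocity_ms):
    """Impact kinetic energy E = 1/2 * m * v**2, in joules."""
    return 0.5 * mass_kg * velocity_ms ** 2

# For a hypothetical 1500 kg car, doubling the closing speed
# quadruples the energy that must be dissipated in the impact.
e_low = kinetic_energy(1500, 15)    # 15 m/s (~54 km/h)
e_high = kinetic_energy(1500, 30)   # 30 m/s (~108 km/h)
print(e_low, e_high, e_high / e_low)   # 168750.0 675000.0 4.0
```

Because the energy grows with the square of the relative velocity, a modest increase in speed produces a disproportionately larger amount of energy that the colliding bodies must absorb.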
Much design effort is made to improve the impact resistance of cars so as to minimize user injury. This can be achieved in several ways, for example by enclosing the driver and passengers in a safety cell. The cell is reinforced so it will survive in high-speed crashes, and so protect the users. Parts of the body shell outside the cell are designed to crumple progressively, absorbing most of the kinetic energy which must be dissipated by the impact. Various impact tests are used to assess the effects of high loading, both on products and standard slabs of material. The Charpy test and Izod test are two examples of standardized methods which are used widely for testing materials. Ball or projectile drop tests are used for assessing product impacts. The Columbia disaster was caused by impact damage when a chunk of polyurethane foam struck the carbon fibre composite wing of the Space Shuttle. Although tests had been conducted before the disaster, the test chunks were much smaller than the chunk that fell away from the external tank and hit the exposed wing. When fragile items are shipped, impacts and drops can cause product damage. Protective packaging and cushioning help reduce the peak acceleration by extending the duration of the shock or impact.
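The cushioning principle in the last paragraph follows from the impulse–momentum relation F·Δt = m·Δv: for a fixed change in momentum, stretching the stop over a longer duration lowers the average force. A minimal sketch with invented illustrative numbers:

```python
def average_stopping_force(mass_kg, delta_v_ms, stop_duration_s):
    """Average force from the impulse-momentum relation: F = m * dv / dt."""
    return mass_kg * delta_v_ms / stop_duration_s

# A hypothetical 2 kg packaged item arrested from 5 m/s:
rigid = average_stopping_force(2, 5, 0.005)      # hard landing, 5 ms stop
cushioned = average_stopping_force(2, 5, 0.050)  # foam extends the stop to 50 ms
print(rigid, cushioned)   # 2000.0 200.0 -- a 10x longer stop, a 10x lower force
```

This is why crumple zones and packaging foam work: they do not reduce the momentum change, only the rate at which it occurs.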
Physical sciences
Classical mechanics
Physics
2326042
https://en.wikipedia.org/wiki/Bifurcation%20theory
Bifurcation theory
Bifurcation theory is the mathematical study of changes in the qualitative or topological structure of a given family of curves, such as the integral curves of a family of vector fields, and the solutions of a family of differential equations. Most commonly applied to the mathematical study of dynamical systems, a bifurcation occurs when a small smooth change made to the parameter values (the bifurcation parameters) of a system causes a sudden 'qualitative' or topological change in its behavior. Bifurcations occur in both continuous systems (described by ordinary, delay or partial differential equations) and discrete systems (described by maps). The name "bifurcation" was first introduced by Henri Poincaré in 1885 in the first paper in mathematics showing such behavior.

Bifurcation types

It is useful to divide bifurcations into two principal classes:

Local bifurcations, which can be analysed entirely through changes in the local stability properties of equilibria, periodic orbits or other invariant sets as parameters cross through critical thresholds; and

Global bifurcations, which often occur when larger invariant sets of the system 'collide' with each other, or with equilibria of the system. They cannot be detected purely by a stability analysis of the equilibria (fixed points).

Local bifurcations

A local bifurcation occurs when a parameter change causes the stability of an equilibrium (or fixed point) to change. In continuous systems, this corresponds to the real part of an eigenvalue of an equilibrium passing through zero. In discrete systems (described by maps), this corresponds to a fixed point having a Floquet multiplier with modulus equal to one. In both cases, the equilibrium is non-hyperbolic at the bifurcation point. The topological changes in the phase portrait of the system can be confined to arbitrarily small neighbourhoods of the bifurcating fixed points by moving the bifurcation parameter close to the bifurcation point (hence 'local').
More technically, consider the continuous dynamical system described by the ordinary differential equation (ODE)

dx/dt = f(x, λ),   f : ℝⁿ × ℝ → ℝⁿ.

A local bifurcation occurs at (x₀, λ₀) if the Jacobian matrix df(x₀, λ₀) has an eigenvalue with zero real part. If the eigenvalue is equal to zero, the bifurcation is a steady state bifurcation, but if the eigenvalue is non-zero but purely imaginary, this is a Hopf bifurcation.

For discrete dynamical systems, consider the system

x_{n+1} = f(x_n, λ).

Then a local bifurcation occurs at (x₀, λ₀) if the matrix df(x₀, λ₀) has an eigenvalue with modulus equal to one. If the eigenvalue is equal to one, the bifurcation is either a saddle-node (often called fold bifurcation in maps), transcritical or pitchfork bifurcation. If the eigenvalue is equal to −1, it is a period-doubling (or flip) bifurcation, and otherwise, it is a Hopf bifurcation.

Examples of local bifurcations include:

Saddle-node (fold) bifurcation
Transcritical bifurcation
Pitchfork bifurcation
Period-doubling (flip) bifurcation
Hopf bifurcation
Neimark–Sacker (secondary Hopf) bifurcation

Global bifurcations

Global bifurcations occur when 'larger' invariant sets, such as periodic orbits, collide with equilibria. This causes changes in the topology of the trajectories in the phase space which cannot be confined to a small neighbourhood, as is the case with local bifurcations. In fact, the changes in topology extend out to an arbitrarily large distance (hence 'global').

Examples of global bifurcations include:

Homoclinic bifurcation, in which a limit cycle collides with a saddle point. Homoclinic bifurcations can occur supercritically or subcritically. The variant above is the "small" or "type I" homoclinic bifurcation. In 2D there is also the "big" or "type II" homoclinic bifurcation in which the homoclinic orbit "traps" the other ends of the unstable and stable manifolds of the saddle. In three or more dimensions, higher codimension bifurcations can occur, producing complicated, possibly chaotic dynamics.
Heteroclinic bifurcation, in which a limit cycle collides with two or more saddle points; they involve a heteroclinic cycle. Heteroclinic bifurcations are of two types: resonance bifurcations and transverse bifurcations. Both types of bifurcation will result in the change of stability of the heteroclinic cycle. At a resonance bifurcation, the stability of the cycle changes when an algebraic condition on the eigenvalues of the equilibria in the cycle is satisfied. This is usually accompanied by the birth or death of a periodic orbit. A transverse bifurcation of a heteroclinic cycle is caused when the real part of a transverse eigenvalue of one of the equilibria in the cycle passes through zero. This will also cause a change in stability of the heteroclinic cycle.

Infinite-period bifurcation, in which a stable node and saddle point simultaneously occur on a limit cycle. As the limit of a parameter approaches a certain critical value, the speed of the oscillation slows down and the period approaches infinity. The infinite-period bifurcation occurs at this critical value. Beyond the critical value, the two fixed points emerge continuously from each other on the limit cycle to disrupt the oscillation and form two saddle points.

Blue sky catastrophe, in which a limit cycle collides with a nonhyperbolic cycle.

Global bifurcations can also involve more complicated sets such as chaotic attractors (e.g. crises).

Codimension of a bifurcation

The codimension of a bifurcation is the number of parameters which must be varied for the bifurcation to occur. This corresponds to the codimension of the parameter set for which the bifurcation occurs within the full space of parameters. Saddle-node bifurcations and Hopf bifurcations are the only generic local bifurcations which are really codimension-one (the others all having higher codimension).
However, transcritical and pitchfork bifurcations are also often thought of as codimension-one, because the normal forms can be written with only one parameter. An example of a well-studied codimension-two bifurcation is the Bogdanov–Takens bifurcation.

Applications in semiclassical and quantum physics

Bifurcation theory has been applied to connect quantum systems to the dynamics of their classical analogues in atomic systems, molecular systems, and resonant tunneling diodes. Bifurcation theory has also been applied to the study of laser dynamics and a number of theoretical examples which are difficult to access experimentally, such as the kicked top and coupled quantum wells. The dominant reason for the link between quantum systems and bifurcations in the classical equations of motion is that at bifurcations, the signature of classical orbits becomes large, as Martin Gutzwiller points out in his classic work on quantum chaos. Many kinds of bifurcations have been studied with regard to links between classical and quantum dynamics, including saddle node bifurcations, Hopf bifurcations, umbilic bifurcations, period doubling bifurcations, reconnection bifurcations, tangent bifurcations, and cusp bifurcations.
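The eigenvalue criteria described under "Local bifurcations" can be checked numerically on the simplest normal forms. The sketch below (the function names are mine, chosen for illustration) uses the saddle-node normal form dx/dt = r + x² for the continuous case, and the logistic map for the discrete, period-doubling case:

```python
import math

# Continuous case: saddle-node normal form dx/dt = r + x**2.
# The 1x1 Jacobian is d/dx (r + x**2) = 2x, so the eigenvalue is simply 2x.
def saddle_node_eigenvalue(x):
    return 2 * x

r = -1.0
x_stable, x_unstable = -math.sqrt(-r), math.sqrt(-r)  # the two equilibria for r < 0
print(saddle_node_eigenvalue(x_stable))    # -2.0: negative real part, stable branch
print(saddle_node_eigenvalue(x_unstable))  # 2.0: positive real part, unstable branch
print(saddle_node_eigenvalue(0.0))         # 0.0 at r = 0: the non-hyperbolic bifurcation point

# Discrete case: logistic map x_{n+1} = r * x_n * (1 - x_n).
# At the nontrivial fixed point x* = 1 - 1/r the multiplier is
# f'(x*) = r * (1 - 2 * x*) = 2 - r, which crosses -1 at r = 3: a flip
# (period-doubling) bifurcation, where the modulus passes through one.
def logistic_multiplier(r):
    x_star = 1 - 1 / r
    return r * (1 - 2 * x_star)

for r in (2.5, 3.0, 3.2):
    print(r, round(logistic_multiplier(r), 6))  # -0.5, -1.0, -1.2
```

In the continuous example the two equilibria collide and annihilate as r passes through zero; in the discrete example the fixed point loses stability at r = 3 and a stable 2-cycle is born, the first step of the period-doubling cascade.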
Mathematics
Dynamical systems
null
2326501
https://en.wikipedia.org/wiki/Galeaspida
Galeaspida
Galeaspida (from Latin, 'helmet shields') is an extinct taxon of jawless marine and freshwater fish. The name is derived from galea, the Latin word for helmet, and refers to the massive bony shield on the head. Galeaspida lived in shallow, fresh water and marine environments during the Silurian and Devonian periods (430 to 370 million years ago) in what is now Southern China, Tibet and Vietnam. Superficially, their morphology appears more similar to that of Heterostraci than Osteostraci, there being currently no evidence that the galeaspids had paired fins. However, the galeaspid Tujiaaspis vividus from the Silurian period of China was described in 2022 as having a precursor condition to the form of paired fins seen in Osteostraci and gnathostomes. Even before this, Galeaspida were regarded as more closely related to Osteostraci, based on the closer similarity of the morphology of the braincase.

Morphology

The defining characteristic of all galeaspids was a large opening on the dorsal surface of the head shield, which was connected to the pharynx and gill chamber, and a scalloped pattern of the sensory lines. The opening appears to have served both olfaction and the intake of respiratory water, similar to the nasopharyngeal duct of hagfishes. Galeaspids are also the vertebrates with the largest number of gills, as some species of the order Polybranchiaspidida (literally "many gills shields") had up to 45 gill openings. The body is covered with minute scales arranged in oblique rows, and there is no fin other than the caudal fin. The mouth and gill openings are situated on the ventral side of the head, which is flat or flattened and suggests that they were bottom-dwellers.

Taxonomy

There are around 76+ described species of galeaspids in at least 53 genera.
If the families Hanyangaspidae and Xiushuiaspidae are set aside as basal galeaspids, the rest of Galeaspida can be sorted into two main groups. The first is the order Eugaleaspidiformes, which comprises the genera Sinogaleaspis, Meishanaspis, and Anjianspis, and the family Eugaleaspididae. The second is the superorder Polybranchiaspidida, which comprises the order Polybranchiaspidiformes, the sister taxon of the family Zhaotongaspididae and the order Huananaspidiformes, together with the family Geraspididae, which is the sister taxon of Polybranchiaspidiformes + Zhaotongaspididae + Huananaspidiformes. Some experts demote Galeaspida to the rank of subclass, and unite it with Pituriaspida and Osteostraci to form the class Monorhina.

Fossil record

The oldest known galeaspids, such as those of the genera Hanyangaspis and Dayongaspis, first appear near the start of the Telychian age, in the latter half of the Llandovery epoch of the Silurian, about 436 million years ago. During the transition from the Llandovery to the Wenlock, the eugaleaspids underwent a diversification event. By the time the Wenlock epoch transitioned into the Ludlow epoch, all of the eugaleaspids, save for the Eugaleaspididae, were extinct. The Eugaleaspididae persisted from the Wenlock onward and were fairly long-lived, especially the genus Eugaleaspis. The last of the Eugaleaspididae disappeared by the end of the Pragian age of the Lower Devonian. The first genus of Geraspididae, the eponymous Geraspis, appears during the middle of the Telychian. The other genera of Polybranchiaspidida appear in the fossil record a little after the beginning of the Lochkovian age, at the start of the Devonian. The vast majority of the superorder's genera either date from the Pragian, or have their ranges end there. By the time the Emsian age starts, only a few genera, such as Duyunolepis and Wumengshanaspis, survive, with most others already extinct.
The last galeaspid is an as yet undescribed species and genus from the Famennian age of the Late Devonian, found in association with the tetrapod Sinostega and the antiarch placoderm Remigolepis, in strata from the northern Chinese province of Ningxia.

Taxa

Jiaoyu
Family Hanyangaspididae Pan & Liu 1975
  Pan & Liu 1975
  Nanjiangaspis Wang et al. 2002
  Kalpinolepis Wang, Wang & Zhu 1996
  Konoceras Pan 1992
  Hongshanaspis
  Latirostraspis Wang, Xia & Chen 1980
Family Xiushuiaspididae Pan & Liu 1975
  Changxingaspis Wan 1991
  Microphymaspis Wang et al. 2002
  Xiushuiaspis Pan & Wang 1983
  Xiyuaspis
Family Dayongaspididae Pan & Zen 1985
  Dayongaspis Pan & Zen 1985
  Wang et al. 2002
Order Eugaleaspidiformes Liu 1965
  Yongdongaspis Chen et al., 2022
  Family Shuyuidae Shan et al. 2020
    Meishanaspis Wang 1991
    Shuyu Gai et al. 2011
    Qingshuiaspis
  Family Tridenaspididae Liu 1986
    Zhu 1992
    Tridenaspis Liu 1986
  Family Sinogaleaspidae Pan & Wang 1980
    Anjiaspis Gai & Zhu 2005
    Sinogaleaspis Pan & Wang 1980
    Shan et al. 2020
  Family Eugaleaspididae Liu 1980
    ?Liuaspis Whitley 1976 non Borchsenius 1960
    Dunyu Zhu et al. 2012
    Eugaleaspis Liu 1965 [Galeaspis Liu 1965 non Ivshin ex Borukaev 1955]
    Nochelaspis Zhu 1992
    Pseudoduyunaspis Wang, Wang & Zhu 1996
    Yunnanogaleaspis Pan & Wang 1980
    Xitunaspis
Superorder Polybranchiaspidida Liu 1965
  Order Polybranchiaspidiformes Janvier 1996
    Family Gumuaspididae Gai et al. 2018
      Gumuaspis Wang & Wang 1992
      Platylomaspis Gai et al. 2018
      Gai et al. 2018
      Laxaspis Liu 1975
      Pseudolaxaspis
    Family Geraspididae Pan & Chen 1993
      Geraspis Pan & Chen 1993
      Kwangnanaspis Cao 1979
    Family Pentathyraspididae Pan 1992
      Pentathyraspis Pan 1992
      Microhoplonaspis Pan 1992
    Family Duyunolepididae Pan & Wang 1978a
      Pan & Wang 1982
      Pseudoduyunolepis
      Paraduyunolepis Pan & Wang 1978
      Neoduyunolepis Pan & Wang 1978
      Lopadaspis Wang et al. 2002
      Foxaspis
    Family Hyperaspididae Pan 1992
      Hyperaspis Pan 1992
    Family Polybranchiaspididae Liu 1965
      Altigibbaspis Liu, Gai & Zhu 2017
      Polybranchiaspis Liu 1965
      Bannhuanaspis Janvier, Than & Phuon 1993
      Clororbis Pan & Ji 1993
      Dongfangaspis Liu 1975
      Siyingia Wang & Wang 1982
      Diandongaspis Liu 1975
      Damaspis Wang & Wang 1982
      Cyclodiscaspis Liu 1975
  Order Huananaspidiformes Janvier 1975
    Family Sanqiaspididae Liu 1975
      Sanqiaspis Liu 1975
    Family Zhaotongaspididae Wang & Zhu 1994
      Zhaotongaspis Wang & Zhu 1994
      Wenshanaspis Zhao, Zhu & Jia 2002
    Family Sanchaspididae Pan & Wang 1981
      Pan & Wanao 1981 (not to be confused with Sanqiaspis)
      Antiquisagittaspis Liu 1985
    Family Gantarostrataspididae Wang & Wang 1992
      Gantarostrataspis Wang & Wang 1992
      Wumengshanaspis Wang & Lan 1984
      Gai et al. 2015
    Family Huananaspidae Liu 1973
      Huanaspis Liu 1973 (sister-taxon of Macrothyraspinae)
      Asiaspis Pan ex Pan, Wang & Liu 1975
      Liu 1965
      Stephaspis Gai & Zhu 2007
      Subfamily Macrothyraspinae
        Macrothyraspis Pan 1992
        Lungmenshanaspis Pan & Wang 1975
        Qingmenaspis Pan & Wang 1981
        Sinoszechuanaspis Pan & Wang 1975
Biology and health sciences
Prehistoric agnathae and early chordates
Animals
7555884
https://en.wikipedia.org/wiki/Plant%20anatomy
Plant anatomy
Plant anatomy or phytotomy is the general term for the study of the internal structure of plants. Originally, it included plant morphology, the description of the physical form and external structure of plants, but since the mid-20th century, plant anatomy has been considered a separate field referring only to internal plant structure. Plant anatomy is now frequently investigated at the cellular level, and often involves the sectioning of tissues and microscopy.

Structural divisions

Some studies of plant anatomy use a systems approach, organized on the basis of the plant's activities, such as nutrient transport, flowering, pollination, embryogenesis or seed development. Others are more classically divided into the following structural categories:

Flower anatomy, including study of the calyx, corolla, androecium, and gynoecium
Leaf anatomy, including study of the epidermis, stomata and palisade cells
Stem anatomy, including stem structure and vascular tissues, buds and shoot apex
Fruit/seed anatomy, including structure of the ovule, seed, pericarp and accessory fruit
Wood anatomy, including structure of the bark, cork, xylem, phloem, vascular cambium, heartwood and sapwood, and branch collar
Root anatomy, including structure of the root, root tip, endodermis

History

About 300 BC, Theophrastus wrote a number of plant treatises, only two of which survive: Enquiry into Plants and On the Causes of Plants. He developed concepts of plant morphology and classification, which did not withstand the scientific scrutiny of the Renaissance. A Swiss physician and botanist, Gaspard Bauhin, introduced binomial nomenclature into plant taxonomy. He published Pinax theatri botanici in 1596, which was the first to use this convention for the naming of species. His criteria for classification included natural relationships, or 'affinities', which in many cases were structural. It was in the late 1600s that plant anatomy became refined into a modern science.
The Italian doctor and microscopist Marcello Malpighi was one of the two founders of plant anatomy. In 1671, he published his Anatomia Plantarum, the first major advance in plant physiogamy since Aristotle. The other founder was the British doctor Nehemiah Grew. He published An Idea of a Philosophical History of Plants in 1672 and The Anatomy of Plants in 1682. Grew is credited with the recognition of plant cells, although he called them 'vesicles' and 'bladders'. He correctly identified and described the sexual organs of plants (flowers) and their parts.

In the eighteenth century, Carl Linnaeus established taxonomy based on structure, and his early work was with plant anatomy. While the exact structural level which is to be considered scientifically valid for comparison and differentiation has changed with the growth of knowledge, the basic principles were established by Linnaeus. He published his master work, Species Plantarum, in 1753.

In 1802, the French botanist Charles-François Brisseau de Mirbel published his Treatise on Plant Anatomy and Physiology, establishing the beginnings of the science of plant cytology. In 1812, Johann Jacob Paul Moldenhawer published a treatise describing microscopic studies of plant tissues. In 1813, a Swiss botanist, Augustin Pyrame de Candolle, published Théorie élémentaire de la botanique, in which he argued that plant anatomy, not physiology, ought to be the sole basis for plant classification. Using a scientific basis, he established structural criteria for defining and separating plant genera.

In 1830, Franz Meyen published Phytotomie, the first comprehensive review of plant anatomy. In 1838, the German botanist Matthias Jakob Schleiden published Contributions to Phytogenesis, stating, "the lower plants all consist of one cell, while the higher plants are composed of (many) individual cells", thus confirming and continuing Mirbel's work.
A German-Polish botanist, Eduard Strasburger, described the mitotic process in plant cells and further demonstrated that new cell nuclei can only arise from the division of other pre-existing nuclei. His Studien über Protoplasma was published in 1876. Gottlieb Haberlandt, a German botanist, studied plant physiology and classified plant tissue based upon function. On this basis, in 1884, he published his Physiological Plant Anatomy, in which he described twelve types of tissue systems (absorptive, mechanical, photosynthetic, etc.). British paleobotanists Dunkinfield Henry Scott and William Crawford Williamson described the structures of fossilized plants at the end of the nineteenth century. Scott's Studies in Fossil Botany was published in 1900.

Following Charles Darwin's On the Origin of Species, a Canadian botanist, Edward Charles Jeffrey, who was studying the comparative anatomy and phylogeny of different vascular plant groups, applied the theory to plants, using the form and structure of plants to establish a number of evolutionary lines. He published his The Anatomy of Woody Plants in 1917.

The growth of comparative plant anatomy was spearheaded by the British botanist Agnes Arber. She published Water Plants: A Study of Aquatic Angiosperms in 1920, Monocotyledons: A Morphological Study in 1925, and The Gramineae: A Study of Cereal, Bamboo and Grass in 1934. Following World War II, Katherine Esau published Plant Anatomy (1953), which became the definitive textbook on plant structure in North American universities and elsewhere; it was still in print as of 2006. She followed up with her Anatomy of Seed Plants in 1960.
Biology and health sciences
Plant: General
null
7556348
https://en.wikipedia.org/wiki/Plant%20morphology
Plant morphology
Phytomorphology is the study of the physical form and external structure of plants. This is usually considered distinct from plant anatomy, which is the study of the internal structure of plants, especially at the microscopic level. Plant morphology is useful in the visual identification of plants. Recent studies in molecular biology have started to investigate the molecular processes involved in determining the conservation and diversification of plant morphologies. In these studies, transcriptome conservation patterns were found to mark crucial ontogenetic transitions during the plant life cycle, which may result in evolutionary constraints limiting diversification.

Scope

Plant morphology "represents a study of the development, form, and structure of plants, and, by implication, an attempt to interpret these on the basis of similarity of plan and origin". There are four major areas of investigation in plant morphology, and each overlaps with another field of the biological sciences.

First of all, morphology is comparative, meaning that the morphologist examines structures in many different plants of the same or different species, then draws comparisons and formulates ideas about similarities. When structures in different species are believed to exist and develop as a result of common, inherited genetic pathways, those structures are termed homologous. For example, the leaves of pine, oak, and cabbage all look very different, but share certain basic structures and arrangement of parts. The homology of leaves is an easy conclusion to make. The plant morphologist goes further, and discovers that the spines of cactus also share the same basic structure and development as leaves in other plants, and therefore cactus spines are homologous to leaves as well. This aspect of plant morphology overlaps with the study of plant evolution and paleobotany.

Secondly, plant morphology observes both the vegetative (somatic) structures of plants, as well as the reproductive structures.
The study of the vegetative structures of vascular plants includes the shoot system, composed of stems and leaves, as well as the root system. The reproductive structures are more varied, and are usually specific to a particular group of plants, such as flowers and seeds, fern sori, and moss capsules. The detailed study of reproductive structures in plants led to the discovery of the alternation of generations found in all plants and most algae. This area of plant morphology overlaps with the study of biodiversity and plant systematics.

Thirdly, plant morphology studies plant structure at a range of scales. At the smallest scales are ultrastructure, the general structural features of cells visible only with the aid of an electron microscope, and cytology, the study of cells using optical microscopy. At this scale, plant morphology overlaps with plant anatomy as a field of study. At the largest scale is the study of plant growth habit, the overall architecture of a plant. The pattern of branching in a tree will vary from species to species, as will the appearance of a plant as a tree, herb, or grass.

Fourthly, plant morphology examines the pattern of development, the process by which structures originate and mature as a plant grows. While animals produce all the body parts they will ever have from early in their life, plants constantly produce new tissues and structures throughout their life. A living plant always has embryonic tissues. The way in which new structures mature as they are produced may be affected by the point in the plant's life when they begin to develop, as well as by the environment to which the structures are exposed. A morphologist studies this process, its causes, and its result. This area of plant morphology overlaps with plant physiology and ecology.

A comparative science

A plant morphologist makes comparisons between structures in many different plants of the same or different species.
Making such comparisons between similar structures in different plants tackles the question of why the structures are similar. It is quite likely that similar underlying causes of genetics, physiology, or response to the environment have led to this similarity in appearance. The result of scientific investigation into these causes can lead to one of two insights into the underlying biology:

Homology - the structure is similar between the two species because of shared ancestry and common genetics.
Convergence - the structure is similar between the two species because of independent adaptation to common environmental pressures.

Understanding which characteristics and structures belong to each type is an important part of understanding plant evolution. The evolutionary biologist relies on the plant morphologist to interpret structures, and in turn provides phylogenies of plant relationships that may lead to new morphological insights.

Homology

When structures in different species are believed to exist and develop as a result of common, inherited genetic pathways, those structures are termed homologous. For example, the leaves of pine, oak, and cabbage all look very different, but share certain basic structures and arrangement of parts. The homology of leaves is an easy conclusion to make. The plant morphologist goes further, and discovers that the spines of cactus also share the same basic structure and development as leaves in other plants, and therefore cactus spines are homologous to leaves as well.

Convergence

When structures in different species are believed to exist and develop as a result of common adaptive responses to environmental pressure, those structures are termed convergent. For example, the fronds of Bryopsis plumosa and stems of Asparagus setaceus both have the same feathery branching appearance, even though one is an alga and one is a flowering plant. The similarity in overall structure occurs independently as a result of convergence.
The growth form of many cacti and species of Euphorbia is very similar, even though they belong to distantly related families. The similarity results from common solutions to the problem of surviving in a hot, dry environment. Vegetative and reproductive characteristics Plant morphology treats both the vegetative structures of plants and the reproductive structures. The vegetative (somatic) structures of vascular plants include two major organ systems: (1) a shoot system, composed of stems and leaves, and (2) a root system. These two systems are common to nearly all vascular plants, and provide a unifying theme for the study of plant morphology. By contrast, the reproductive structures are varied, and are usually specific to a particular group of plants. Structures such as flowers and fruits are only found in the angiosperms; sori are only found in ferns; and seed cones are only found in conifers and other gymnosperms. Reproductive characters are therefore regarded as more useful for the classification of plants than vegetative characters. Use in identification Plant biologists use morphological characters of plants which can be compared, measured, counted and described to assess the differences or similarities in plant taxa, and use these characters for plant identification, classification and description. When characters are used in descriptions or for identification they are called diagnostic or key characters, which can be either qualitative or quantitative. Quantitative characters are morphological features that can be counted or measured; for example, a plant species may have flower petals 10–12 mm wide. Qualitative characters are morphological features such as leaf shape, flower color or pubescence. Both kinds of characters can be very useful for the identification of plants. 
Alternation of generations The detailed study of reproductive structures in plants led to the discovery of the alternation of generations, found in all plants and most algae, by the German botanist Wilhelm Hofmeister. This discovery is one of the most important made in all of plant morphology, since it provides a common basis for understanding the life cycle of all plants. Pigmentation in plants The primary function of pigments in plants is photosynthesis, which uses the green pigment chlorophyll along with several red and yellow pigments that help to capture as much light energy as possible; these other pigments include the carotenoids. Pigments are also an important factor in attracting insects to flowers to encourage pollination. Plant pigments include a variety of different kinds of molecule, including porphyrins, carotenoids, anthocyanins and betalains. All biological pigments selectively absorb certain wavelengths of light while reflecting others. The light that is absorbed may be used by the plant to power chemical reactions, while the reflected wavelengths of light determine the color the pigment will appear to the eye. Morphology in development Plant development is the process by which structures originate and mature as a plant grows. It is a subject studied in plant anatomy and plant physiology as well as plant morphology. The process of development in plants is fundamentally different from that seen in vertebrate animals. When an animal embryo begins to develop, it will very early produce all of the body parts that it will ever have in its life. When the animal is born (or hatches from its egg), it has all its body parts and from that point will only grow larger and more mature. By contrast, plants constantly produce new tissues and structures throughout their life from meristems located at the tips of organs, or between mature tissues. Thus, a living plant always has embryonic tissues. 
The properties of organisation seen in a plant are emergent properties which are more than the sum of the individual parts. "The assembly of these tissues and functions into an integrated multicellular organism yields not only the characteristics of the separate parts and processes but also quite a new set of characteristics which would not have been predictable on the basis of examination of the separate parts." In other words, knowing everything about the molecules in a plant is not enough to predict the characteristics of the cells; and knowing all the properties of the cells will not predict all the properties of a plant's structure. Growth A vascular plant begins from a single-celled zygote, formed by fertilisation of an egg cell by a sperm cell. From that point, it begins to divide to form a plant embryo through the process of embryogenesis. As this happens, the resulting cells will organise so that one end becomes the first root, while the other end forms the tip of the shoot. In seed plants, the embryo will develop one or more "seed leaves" (cotyledons). By the end of embryogenesis, the young plant will have all the parts necessary to begin its life. Once the embryo germinates from its seed or parent plant, it begins to produce additional organs (leaves, stems, and roots) through the process of organogenesis. New roots grow from root meristems located at the tip of the root, and new stems and leaves grow from shoot meristems located at the tip of the shoot. Branching occurs when small clumps of cells left behind by the meristem, which have not yet undergone cellular differentiation to form a specialised tissue, begin to grow as the tip of a new root or shoot. Growth from any such meristem at the tip of a root or shoot is termed primary growth and results in the lengthening of that root or shoot. Secondary growth results in the widening of a root or shoot from divisions of cells in a cambium. 
In addition to growth by cell division, a plant may grow through cell elongation. This occurs when individual cells or groups of cells grow longer. Not all plant cells will grow to the same length. When cells on one side of a stem grow longer and faster than cells on the other side, the stem will bend to the side of the slower-growing cells as a result. This directional growth can occur via a plant's response to a particular stimulus, such as light (phototropism), gravity (gravitropism), water (hydrotropism), and physical contact (thigmotropism). Plant growth and development are mediated by specific plant hormones and plant growth regulators (PGRs) (Ross et al. 1983). Endogenous hormone levels are influenced by plant age, cold hardiness, dormancy, and other metabolic conditions; photoperiod, drought, temperature, and other external environmental conditions; and exogenous sources of PGRs, e.g., externally applied and of rhizospheric origin. Morphological variation Plants exhibit natural variation in their form and structure. While all organisms vary from individual to individual, plants exhibit an additional type of variation. Within a single individual, parts are repeated which may differ in form and structure from other similar parts. This variation is most easily seen in the leaves of a plant, though other organs such as stems and flowers may show similar variation. There are three primary causes of this variation: positional effects, environmental effects, and juvenility. Evolution of plant morphology Transcription factors and transcriptional regulatory networks play key roles in plant morphogenesis and its evolution. During the colonisation of land by plants, many novel transcription factor families emerged and were preferentially wired into the networks of multicellular development, reproduction, and organ development, contributing to the more complex morphogenesis of land plants. 
Positional effects Although plants produce numerous copies of the same organ during their lives, not all copies of a particular organ will be identical. There is variation among the parts of a mature plant resulting from the relative position where the organ is produced. For example, along a new branch the leaves may vary in a consistent pattern along the branch. The form of leaves produced near the base of the branch will differ from that of leaves produced at the tip of the branch, and this difference is consistent from branch to branch on a given plant and in a given species. This difference persists after the leaves at both ends of the branch have matured, and is not the result of some leaves being younger than others. Environmental effects The way in which new structures mature as they are produced may be affected by the point in the plant's life when they begin to develop, as well as by the environment to which the structures are exposed. This can be seen in aquatic plants. Temperature Temperature has a multiplicity of effects on plants, depending on a variety of factors including the size and condition of the plant and the temperature and duration of exposure. The smaller and more succulent the plant, the greater the susceptibility to damage or death from temperatures that are too high or too low. Temperature affects the rate of biochemical and physiological processes, rates generally (within limits) increasing with temperature. However, the Van't Hoff relationship for monomolecular reactions (which states that the velocity of a reaction is doubled or trebled by a temperature increase of 10 °C) does not strictly hold for biological processes, especially at low and high temperatures. When water freezes in plants, the consequences for the plant depend very much on whether the freezing occurs intracellularly (within cells) or outside cells in intercellular (extracellular) spaces. 
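The Van't Hoff relationship mentioned above is commonly expressed through the temperature coefficient Q10. As a sketch (the symbols k1, k2, T1, T2 are notation introduced here for illustration, not from the text):

```latex
% Temperature coefficient Q10 (illustrative notation, not from the source text):
% k_1 and k_2 are reaction rates measured at temperatures T_1 and T_2 (°C).
Q_{10} = \left( \frac{k_2}{k_1} \right)^{\frac{10}{T_2 - T_1}}
% The Van't Hoff rule corresponds to Q_{10} \approx 2\text{--}3: when
% T_2 - T_1 = 10\,^{\circ}\mathrm{C}, the rate ratio k_2 / k_1 equals Q_{10},
% i.e. the reaction velocity doubles or trebles per 10 °C rise.
```

As the text notes, this holds only approximately for biological processes, and breaks down at temperature extremes.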
Intracellular freezing usually kills the cell regardless of the hardiness of the plant and its tissues. Intracellular freezing seldom occurs in nature, but moderate rates of decrease in temperature, e.g., 1 °C to 6 °C/hour, cause intercellular ice to form, and this "extraorgan ice" may or may not be lethal, depending on the hardiness of the tissue. At freezing temperatures, water in the intercellular spaces of plant tissues freezes first, though the water may remain unfrozen until temperatures fall below -7 °C. After the initial formation of ice intercellularly, the cells shrink as water is lost to the segregated ice. The cells undergo freeze-drying, the dehydration being the basic cause of freezing injury. The rate of cooling has been shown to influence the frost resistance of tissues, but the actual rate of freezing will depend not only on the cooling rate, but also on the degree of supercooling and the properties of the tissue. Sakai (1979a) demonstrated ice segregation in shoot primordia of Alaskan white and black spruces when cooled slowly to -30 °C to -40 °C. These freeze-dehydrated buds survived immersion in liquid nitrogen when slowly rewarmed. Floral primordia responded similarly. Extraorgan freezing in the primordia accounts for the ability of the hardiest of the boreal conifers to survive winters in regions where air temperatures often fall to -50 °C or lower. The hardiness of the winter buds of such conifers is enhanced by the smallness of the buds, by the evolution of faster translocation of water, and an ability to tolerate intensive freeze dehydration. In boreal species of Picea and Pinus, the frost resistance of 1-year-old seedlings is on a par with that of mature plants, given similar states of dormancy. Juvenility The organs and tissues produced by a young plant, such as a seedling, are often different from those that are produced by the same plant when it is older. This phenomenon is known as juvenility or heteroblasty. 
For example, young trees will produce longer, leaner branches that grow upwards more than the branches they will produce as a fully grown tree. In addition, leaves produced during early growth tend to be larger, thinner, and more irregular than leaves on the adult plant. Specimens of juvenile plants may look so completely different from adult plants of the same species that egg-laying insects do not recognise the plant as food for their young. Differences are seen in rootability and flowering, and can be seen in the same mature tree. Juvenile cuttings taken from the base of a tree will form roots much more readily than cuttings originating from the mid to upper crown. Flowering close to the base of a tree is absent or less profuse than flowering in the higher branches, especially when a young tree first reaches flowering age. The transition from early to late growth forms is referred to as 'vegetative phase change', but there is some disagreement about terminology. Modern Innovations Rolf Sattler has revised fundamental concepts of comparative morphology such as the concept of homology. He emphasised that homology should also include partial homology and quantitative homology. This leads to a continuum morphology that demonstrates a continuum between the morphological categories of root, shoot, stem (caulome), leaf (phyllome), and hair (trichome). How intermediates between the categories are best described has been discussed by Bruce K. Kirchoff et al. A recent study conducted by the Salk Institute extracted coordinates corresponding to each plant's base and leaves in 3D space. When plants on the graph were placed according to their actual nutrient travel distances and total branch lengths, the plants fell almost perfectly on the Pareto curve. "This means the way plants grow their architectures also optimises a very common network design tradeoff. 
Based on the environment and the species, the plant is selecting different ways to make tradeoffs for those particular environmental conditions." Honoring Agnes Arber, author of the partial-shoot theory of the leaf, Rutishauser and Isler called the continuum approach Fuzzy Arberian Morphology (FAM). "Fuzzy" refers to fuzzy logic, "Arberian" to Agnes Arber. Rutishauser and Isler emphasised that this approach is not only supported by many morphological data but also by evidence from molecular genetics. More recent evidence from molecular genetics provides further support for continuum morphology. James (2009) concluded that "it is now widely accepted that... radiality [characteristic of most stems] and dorsiventrality [characteristic of leaves] are but extremes of a continuous spectrum. In fact, it is simply the timing of the KNOX gene expression!" Eckardt and Baum (2010) concluded that "it is now generally accepted that compound leaves express both leaf and shoot properties." Process morphology describes and analyses the dynamic continuum of plant form. According to this approach, structures do not have process(es); they are process(es). Thus, the structure/process dichotomy is overcome by "an enlargement of our concept of 'structure' so as to include and recognise that in the living organism it is not merely a question of spatial structure with an 'activity' as something over or against it, but that the concrete organism is a spatio-temporal structure and that this spatio-temporal structure is the activity itself". For Jeune, Barabé and Lacroix, classical morphology (that is, mainstream morphology, based on a qualitative homology concept implying mutually exclusive categories) and continuum morphology are sub-classes of the more encompassing process morphology (dynamic morphology). 
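The Pareto tradeoff reported in the Salk Institute study quoted above can be sketched in a few lines of code. The function and the scores below are hypothetical, invented purely to illustrate what "falling on the Pareto curve" means: an architecture lies on the front when no other architecture is at least as good on both costs and strictly better on one.

```python
# Illustrative sketch (not from the study): the Pareto front of hypothetical
# plant architectures, each scored by two costs that trade off against each
# other - total branch length (material cost) and mean nutrient travel distance.
def pareto_front(points):
    """Return the points not dominated by any other point.

    Point q dominates point p when q is <= p in both costs and
    strictly < in at least one; both costs are minimised.
    """
    front = []
    for p in points:
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])
            for q in points
        )
        if not dominated:
            front.append(p)
    return sorted(front)

# Hypothetical (total_branch_length, mean_travel_distance) scores.
architectures = [(10.0, 9.0), (12.0, 6.0), (15.0, 4.0), (14.0, 8.0), (11.0, 10.0)]
print(pareto_front(architectures))  # → [(10.0, 9.0), (12.0, 6.0), (15.0, 4.0)]
```

The study's observation, in these terms, is that measured plant architectures cluster on this front rather than in the dominated interior: shortening one cost can only be bought by lengthening the other.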
Classical morphology, continuum morphology, and process morphology are highly relevant to plant evolution, especially the field of plant evolutionary developmental biology (plant evo-devo), which tries to integrate plant morphology and plant molecular genetics. In a detailed case study on unusual morphologies, Rutishauser (2016) illustrated and discussed various topics of plant evo-devo such as the fuzziness (continuity) of morphological concepts, the lack of a one-to-one correspondence between structural categories and gene expression, the notion of morphospace, the adaptive value of bauplan features versus patio ludens, physiological adaptations, hopeful monsters and saltational evolution, the significance and limits of developmental robustness, etc. Rutishauser (2020) discussed the past and future of plant evo-devo. Our conception of the gynoecium and the search for a fossil ancestor of angiosperms change fundamentally from the perspective of evo-devo. Whether we like it or not, morphological research is influenced by philosophical assumptions such as either/or logic, fuzzy logic, structure/process dualism or its transcendence. And empirical findings may influence the philosophical assumptions. Thus there are interactions between philosophy and empirical findings. These interactions are the subject of what has been referred to as the philosophy of plant morphology. One important and unique event in plant morphology of the 21st century was the publication of Kaplan's Principles of Plant Morphology by Donald R. Kaplan, edited by Chelsea D. Specht (2020). It is a well-illustrated volume of 1305 pages in a very large format that presents a wealth of morphological data. Unfortunately, all of these data are interpreted only in terms of classical morphology and the qualitative homology concept, disregarding modern conceptual innovations. Including continuum and process morphology as well as molecular genetics would provide an enlarged scope.
https://en.wikipedia.org/wiki/Baryonyx
Baryonyx
Baryonyx () is a genus of theropod dinosaur which lived in the Barremian stage of the Early Cretaceous period, about 130–125 million years ago. The first skeleton was discovered in 1983 in the Smokejack Clay Pit in Surrey, England, in sediments of the Weald Clay Formation, and became the holotype specimen of Baryonyx walkeri, named by palaeontologists Alan J. Charig and Angela C. Milner in 1986. The generic name, Baryonyx, means "heavy claw" and alludes to the animal's very large claw on the first finger; the specific name, walkeri, refers to its discoverer, amateur fossil collector William J. Walker. The holotype specimen is one of the most complete theropod skeletons from the UK (and remains the most complete spinosaurid), and its discovery attracted media attention. Specimens later discovered in other parts of the United Kingdom and Iberia have also been assigned to the genus, though many have since been moved to new genera. The holotype specimen, which may not have been fully grown, was estimated to have been between long and to have weighed between . Baryonyx had a long, low, and narrow snout, which has been compared to that of a gharial. The tip of the snout expanded to the sides in the shape of a rosette. Behind this, the upper jaw had a notch which fitted into the lower jaw (which curved upwards in the same area). It had a triangular crest on the top of its nasal bones. Baryonyx had a large number of finely serrated, conical teeth, with the largest teeth in front. The neck formed an S-shape, and the neural spines of its dorsal vertebrae increased in height from front to back. One elongated neural spine indicates it may have had a hump or ridge along the centre of its back. It had robust forelimbs, with the eponymous first-finger claw measuring about long. Now recognised as a member of the family Spinosauridae, Baryonyx's affinities were obscure when it was discovered. 
Some researchers have suggested that Suchosaurus cultridens is a senior synonym (being an older name), and that Suchomimus tenerensis belongs in the same genus; subsequent authors have kept them separate. Baryonyx was the first theropod dinosaur demonstrated to have been piscivorous (fish-eating), as evidenced by fish scales in the stomach region of the holotype specimen. It may also have been an active predator of larger prey and a scavenger, since it also contained bones of a juvenile iguanodontid. The creature would have caught and processed its prey primarily with its forelimbs and large claws. Baryonyx may have had semi-aquatic habits, and coexisted with other theropod, ornithopod, and sauropod dinosaurs, as well as pterosaurs, crocodiles, turtles and fishes, in a fluvial environment. History of discovery In January 1983, the plumber and amateur fossil collector William J. Walker explored the Smokejack Clay Pit, a clay pit in the Weald Clay Formation near Ockley in Surrey, England. He found a rock wherein he discovered a large claw, but after piecing it together at home, he realised the tip of the claw was missing. Walker returned to the same spot in the pit some weeks later, and found the missing part after searching for an hour. He also found a and part of a . Walker's son-in-law later brought the claw to the Natural History Museum of London, where it was examined by the palaeontologists Alan J. Charig and Angela C. Milner, who identified it as belonging to a theropod dinosaur. The palaeontologists found more bone fragments at the site in February, but the entire skeleton could not be collected until May and June due to weather conditions at the pit. A team of eight museum staff members and several volunteers excavated of rock matrix in 54 blocks over a three-week period. Walker donated the claw to the museum, and the Ockley Brick Company (owners of the pit) donated the rest of the skeleton and provided equipment. 
The area had been explored for 200 years, but no similar remains had been found before. Most of the bones collected were encased in siltstone nodules surrounded by fine sand and silt, with the rest lying in clay. The bones were and scattered over a area, but most were not far from their natural positions. The position of some bones was disturbed by a bulldozer, and some were broken by mechanical equipment before they were collected. Preparing the specimen was difficult, due to the hardness of the siltstone matrix and the presence of siderite; acid preparation was attempted, but most of the matrix was removed mechanically. It took six years of almost constant preparation to get all the bones out of the rock, and by the end, dental tools and air mallets had to be used under a microscope. The specimen represents about 65 per cent of the skeleton, and consists of partial skull bones, including premaxillae (first bones of the upper jaw); the left maxillae (second bone of the upper jaw); both nasal bones; the left lacrimal; the left prefrontal; the left postorbital; the braincase including the occiput; both dentaries (the front bones of the lower jaw); various bones from the back of the lower jaw; teeth; cervical (neck), dorsal (back), and caudal (tail) ; ribs; a ; both (shoulder blades); both ; both (upper arm bones); the left and (lower arm bones); finger bones and (claw bones); hip bones; the upper end of the left (thigh bone) and lower end of the right; right (of the lower leg); and foot bones including an ungual. The original specimen number was BMNH R9951, but it was later re-catalogued as NHMUK PV R9951. In 1986, Charig and Milner named a new genus and species with the skeleton as holotype specimen: Baryonyx walkeri. The generic name derives from ancient Greek; βαρύς (barys) means "heavy" or "strong", and ὄνυξ (onyx) means "claw" or "talon". The specific name honours Walker, for discovering the specimen. 
At that time, the authors did not know if the large claw belonged to the hand or the foot (as in dromaeosaurs, which it was then assumed to be). The dinosaur had been presented earlier the same year during a lecture at a conference about dinosaur systematics in Drumheller, Canada. Due to ongoing work on the bones (70 per cent had been prepared at the time), they called their article preliminary and promised a more detailed description at a later date. Baryonyx was the first large Early Cretaceous theropod found anywhere in the world up to that point. Before the discovery of Baryonyx, the last significant theropod find in the United Kingdom was Eustreptospondylus in 1871, and in a 1986 interview Charig called Baryonyx "the best find of the century" in Europe. Baryonyx was widely featured in international media, and was nicknamed "Claws" by journalists punning on the title of the film Jaws. Its discovery was the subject of a 1987 BBC documentary, and a cast of the skeleton is mounted at the Natural History Museum in London. In 1997, Charig and Milner published a monograph describing the holotype skeleton in detail. The holotype specimen remains the most completely known spinosaurid skeleton. Assigned specimens Fossils from other parts of the UK and Iberia, mostly isolated teeth, have subsequently been attributed to Baryonyx or similar animals. Isolated teeth and bones from the Isle of Wight, including hand bones reported in 1998 and a vertebra reported by the palaeontologists Steve Hutt and Penny Newbery in 2004, have been attributed to this genus. A maxilla fragment from La Rioja, Spain, was attributed to Baryonyx by the palaeontologists Luis I. Viera and José Angel Torres in 1995 (although the palaeontologist Thomas R. Holtz and colleagues raised the possibility that it could have belonged to Suchomimus in 2004). 
In 1999, a postorbital, , tooth, vertebral remains, (hand bones), and a phalanx from the Salas de los Infantes deposit in Burgos Province, Spain, were attributed to an immature Baryonyx (though some of these elements are unknown in the holotype) by the palaeontologist Carolina Fuentes Vidarte and colleagues. Dinosaur tracks near Burgos have also been suggested to belong to Baryonyx or a similar theropod. In 2011, a specimen (catalogued as ML1190 in the Museu da Lourinhã) from the Papo Seco Formation in Boca do Chapim, Portugal, with a fragmentary dentary, teeth, vertebrae, ribs, hip bones, a scapula, and a phalanx bone, was attributed to Baryonyx by the palaeontologist Octávio Mateus and colleagues; it constituted the most complete Iberian remains of the animal. The skeletal elements of this specimen are also represented in the more complete holotype (which was of similar size), except for the mid-neck vertebrae. In 2018, the palaeontologist Thomas M. S. Arden and colleagues found that the Portuguese skeleton did not belong to Baryonyx, since the front of its dentary bone was not strongly upturned. This specimen was made the basis of the new genus Iberospinus by Mateus and Darío Estraviz-López in 2022. Multiple studies found that additional spinosaurid remains from Iberia may belong to taxa other than Baryonyx, such as Vallibonavenatrix and Protathlitis, or may be indeterminate. A 2024 article by the palaeontologist Erik Isasmendi and colleagues reviewed the spinosaurid fossil record of Iberia and concluded that no specimens from there could be assigned to Baryonyx. They moved a specimen formerly assigned to Baryonyx from La Rioja to the new genus Riojavenatrix. In 2021, the palaeontologist Chris T. Barker and colleagues described two new spinosaurid genera from the Wessex Formation of the Isle of Wight, Ceratosuchops and Riparovenator (the latter named R. 
milnerae honouring Milner for her contributions to spinosaurid research), and stated that spinosaurid material from there that had previously been attributed to the contemporary Baryonyx could have belonged to other taxa instead. These specimens had previously been assigned to Baryonyx in a 2017 conference abstract. Barker and colleagues stated that the recognition of the Wessex Formation specimens as new genera renders the presence of Baryonyx there ambiguous, and most of the previously assigned isolated material from the Wealden Supergroup is therefore indeterminate. A 2023 study of an isolated tooth by Barker and colleagues found that it and other teeth from the Wealden Supergroup that have previously been assigned to Baryonyx probably do not belong to the genus, based on their morphology and age. Possible synonyms In 2003, Milner noted that some teeth at the Natural History Museum previously identified as belonging to the genera Suchosaurus (the first named spinosaurid) and Megalosaurus probably belonged to Baryonyx. The type species of Suchosaurus, S. cultridens, was named by the biologist Richard Owen in 1841, based on teeth discovered by the geologist Gideon A. Mantell in Tilgate Forest, Sussex. Owen originally thought the teeth to have belonged to a crocodile; he was yet to name the group Dinosauria, which happened the following year. A second species, S. girardi, was named by the palaeontologist Henri Émile Sauvage in 1897, based on jaw fragments and a tooth from Boca do Chapim, Portugal. In 2007, the palaeontologist Éric Buffetaut considered the teeth of S. girardi very similar to those of Baryonyx (and S. cultridens) except for the stronger development of the flutes (or "ribs"; lengthwise ridges), suggesting that the remains belonged to the same genus. Buffetaut agreed with Milner that the teeth of S. cultridens were almost identical to those of B. walkeri, but with a ribbier surface. 
The former taxon might be a senior synonym of the latter (since it was published first), depending on whether the differences were within a taxon or between different ones. According to Buffetaut, since the holotype specimen of S. cultridens is a single tooth and that of B. walkeri is a skeleton, it would be more practical to retain the newer name. In 2011, Mateus and colleagues agreed that Suchosaurus was closely related to Baryonyx, but considered both species in the former genus nomina dubia (dubious names) since their holotype specimens were not considered diagnostic (lacking distinguishing features) and could not be definitely equated with other taxa. Barker and colleagues agreed with this in 2023. In 1997, Charig and Milner noted that two fragmentary spinosaurid snouts from the Elrhaz Formation of Niger (reported by the palaeontologist Philippe Taquet in 1984) were similar enough to Baryonyx that they considered them to belong to an indeterminate species of the genus (despite their much younger Aptian geological age). In 1998, these fossils became the basis of the genus and species Cristatusaurus lapparenti, named by Taquet and the palaeontologist Dale Russell. The palaeontologist Paul Sereno and colleagues named the new genus and species Suchomimus tenerensis later in 1998, based on more complete fossils from the Elrhaz Formation. In 2002, the German palaeontologist Hans-Dieter Sues and colleagues proposed that Suchomimus tenerensis was similar enough to Baryonyx walkeri to be considered a species within the same genus (as B. tenerensis), and that Suchomimus was identical to Cristatusaurus. Milner concurred that the material from Niger was indistinguishable from Baryonyx in 2003. In a 2004 conference abstract, Hutt and Newbery supported the synonymy based on a large theropod vertebra from the Isle of Wight which they attributed to an animal closely related to Baryonyx and Suchomimus. 
Later studies have kept Baryonyx and Suchomimus separate, whereas Cristatusaurus has been proposed to be either a nomen dubium or possibly distinct from both. A 2017 review paper by the palaeontologist Carlos Roberto A. Candeiro and colleagues stated that this debate was more in the realm of semantics than science, as it is generally agreed that B. walkeri and S. tenerensis are distinct, related species. Barker and colleagues found Suchomimus to be more closely related to the British genera Riparovenator and Ceratosuchops than to Baryonyx in 2021. Description Baryonyx is estimated to have been between long, in hip height, and to have weighed between . The fact that elements of the skull and vertebral column of the B. walkeri holotype specimen (NHM R9951) do not appear to have co-ossified (fused) suggests that the individual was not fully grown, and the mature animal may have been much larger (as is the case for some other spinosaurids). On the other hand, the specimen's fused sternum indicates that it may have been mature. Skull The skull of Baryonyx is incompletely known, and much of the middle and hind portions are not preserved. The full length of the skull is estimated to have been long, based on comparison with that of the related genus Suchomimus (which was 20% larger). It was elongated, and the front of the premaxillae formed a long, narrow, and low snout (rostrum) with a smoothly rounded upper surface. The (bony nostrils) were long, low, and placed far back from the snout tip. The front of the snout expanded into a spatulate (spoon-like) "terminal rosette", a shape similar to the rostrum of the modern gharial. The front of the lower margin of the premaxillae was downturned (or hooked), whereas that of the front portion of the maxillae was upturned. This morphology resulted in a sigmoid or S-shaped lower margin of the upper tooth row, in which the teeth at the front of the maxilla projected forward. 
The snout was particularly narrow directly behind the rosette; this area received the large teeth of the mandible. The maxilla and premaxilla of Baryonyx fit together in a complex articulation, and the resulting gap between the upper and lower jaw is known as the . A downturned premaxilla and a sigmoid lower margin of the upper tooth row were also present in distantly related theropods such as Dilophosaurus. The snout had extensive (openings), which would have been exits for blood vessels and nerves, and the maxilla appears to have housed sinuses. Baryonyx had a rudimentary , similar to crocodiles but unlike most theropod dinosaurs. A rugose (roughly wrinkled) surface suggests the presence of a horny pad in the roof of the mouth. The nasal bones were fused, which distinguished Baryonyx from other spinosaurids, and a sagittal crest was present above the eyes, on the upper mid-line of the nasals. This crest was triangular, narrow, and sharp in its front part, and was distinct from those of other spinosaurids in ending hindwards in a cross-shaped process. The lacrimal bone in front of the eye appears to have formed a horn core similar to those seen, for example, in Allosaurus, and was distinct from those of other spinosaurids in being solid and almost triangular. The occiput was narrow, with the paroccipital processes pointing outwards horizontally, and the were lengthened, descending far below the (the lowermost bone of the occiput). Sereno and colleagues suggested that some of Baryonyx's cranial bones had been misidentified by Charig and Milner, resulting in the occiput being reconstructed as too deep, and that the skull was instead probably as low, long, and narrow as that of Suchomimus. The front of the dentary in the mandible sloped upwards towards the curve of the snout. The dentary was very long and shallow, with a prominent Meckelian groove on the inner side. The , where the two halves of the lower jaw connected at the front, was particularly short.
The rest of the lower jaw was fragile; the hind third was much thinner than the front, with a blade-like appearance. The front part of the dentary curved outwards to accommodate the large front teeth, and this area formed the mandibular part of the rosette. The dentary, like the snout, had many foramina. Most of the teeth found with the holotype specimen were not in articulation with the skull; a few remained in the upper jaw, and only small replacement teeth were still borne by the lower jaw. The teeth had the shape of recurved cones, were slightly flattened sideways, and their curvature was almost uniform. The roots were very long and tapered towards their extremity. The carinae (sharp front and back edges) of the teeth were finely serrated with denticles on the front and back, and extended all along the crown. There were around six to eight denticles per mm (0.039 in), a much larger number than in large-bodied theropods like Torvosaurus and Tyrannosaurus. Some of the teeth were fluted, with six to eight ridges along the length of their inner sides and fine-grained (outermost layer of teeth), while others bore no flutes; their presence is probably related to position or ontogeny (development during growth). The inner side of each tooth row had a bony wall. The number of teeth was large compared to most other theropods, with six to seven teeth in each premaxilla and thirty-two in each dentary. Based on the closer packing and smaller size of the dentary teeth compared to those in the corresponding length of the premaxilla, the difference between the number of teeth in the upper and lower jaws appears to have been more pronounced than in other theropods. The terminal rosette in the upper jaw of the holotype had thirteen (tooth sockets), six on the left and seven on the right side, showing tooth count asymmetry. The first four upper teeth were large (with the second and third the largest), while the fourth and fifth progressively decreased in size.
The diameter of the largest was twice that of the smallest. The first four alveoli of the dentary (corresponding to the tip of the upper jaw) were the largest, with the rest more regular in size. Small subtriangular were present between the alveoli. Postcranial skeleton Initially thought to have lacked the sigmoid curve typical of theropods, the neck of Baryonyx does appear to have formed an S shape, though straighter than in other theropods. The cervical vertebrae of the neck tapered towards the head and became progressively longer from front to back. Their (the processes that connected the vertebrae) were flat, and their (processes to which neck muscles attached) were well developed. The (the second neck vertebra) was small relative to the size of the skull and had a well-developed . The of the cervical vertebrae were not always sutured to the (the bodies of the vertebrae), and the there were low and thin. The were short, similar to those of crocodiles, and possibly overlapped each other somewhat. The centra of the dorsal vertebrae of the back were similar in size. As in other theropods, the skeleton of Baryonyx showed skeletal pneumaticity, reducing its weight through (openings) in the neural arches and (hollow depressions) in the centra (primarily near the ). From front to back, the neural spines of the dorsal vertebrae changed from short and stout to tall and broad. One isolated dorsal neural spine was moderately elongated and slender, indicating that Baryonyx may have had a hump or ridge along the centre of its back (though incipiently developed compared to those of other spinosaurids). Baryonyx was unique among spinosaurids in having a marked constriction from side to side in a vertebra that either belonged to the or front of the tail. The coracoid tapered hindwards when viewed in profile, and, uniquely among spinosaurids, connected with the scapula in a peg-and-notch articulation.
The scapulae were robust and the bones of the forelimb were short in relation to the animal's size, but broad and sturdy. The humerus was short and stout, with its ends broadly expanded and flattened—the upper side for the and muscle attachment and the lower for articulation with the radius and ulna. The radius was short, stout and straight, and less than half the length of the humerus, while the ulna was a little longer. The ulna had a powerful and an expanded lower end. The hands had three fingers; the first finger bore a large claw measuring about along its curve in the holotype specimen. The claw would have been lengthened by a keratin (horny) sheath in life. Apart from its size, the claw's proportions were fairly typical of a theropod, i.e. it was bilaterally symmetric, slightly compressed, smoothly rounded, and sharply pointed. A groove for the sheath ran along the length of the claw. The other claws of the hand were much smaller. The (main hip bone) of the pelvis had a prominent , an anterior process that was slender and vertically expanded, and a posterior process that was long and straight. The ilium also had a prominent and a deep groove that faced downwards. The (the socket for the femur) was long from front to back. The (lower and rearmost hip bone) had a well developed at the upper part. The margin of the blade at the lower end was turned outward, and the pubic foot was not expanded. The femur lacked a groove on the fibular condyle, and, uniquely among spinosaurids, the fibula had a very shallow fibular (depression). Classification In their original description, Charig and Milner found Baryonyx unique enough to warrant a new family of theropod dinosaurs: Baryonychidae.
They found Baryonyx to be unlike any other theropod group, and considered the possibility that it was a thecodont (a grouping of early archosaurs now considered unnatural), due to having apparently primitive features, but noted that the articulation of the maxilla and premaxilla was similar to that in Dilophosaurus. They also noted that the two snouts from Niger (which later became the basis of Cristatusaurus), assigned to the family Spinosauridae by Taquet in 1984, appeared almost identical to that of Baryonyx and they referred them to Baryonychidae instead. In 1988, the palaeontologist Gregory S. Paul agreed with Taquet that Spinosaurus, described in 1915 based on fragmentary remains from Egypt that were destroyed in World War II, and Baryonyx were similar and (due to their kinked snouts) possibly late-surviving dilophosaurs. Buffetaut also supported this relationship in 1989. In 1990, Charig and Milner dismissed the spinosaurid affinities of Baryonyx, since they did not find their remains similar enough. In 1997, they agreed that Baryonychidae and Spinosauridae were related, but disagreed that the former name should become a synonym of the latter, because the completeness of Baryonyx compared to Spinosaurus made it a better type genus for a family, and because they did not find the similarities between the two significant enough. Holtz and colleagues listed Baryonychidae as a synonym of Spinosauridae in 2004. Discoveries in the 1990s shed more light on the relationships of Baryonyx and its relatives. In 1996, a snout from Morocco was referred to Spinosaurus, and Irritator and Angaturama from Brazil (the two are possible synonyms) were named. Cristatusaurus and Suchomimus were named based on fossils from Niger in 1998. In their description of Suchomimus, Sereno and colleagues placed it and Baryonyx in the new subfamily Baryonychinae within Spinosauridae; Spinosaurus and Irritator were placed in the subfamily Spinosaurinae. 
Baryonychinae was distinguished by the small size and larger number of teeth in the dentary behind the terminal rosette, the deeply keeled front dorsal vertebrae, and by having serrated teeth. Spinosaurinae was distinguished by their straight tooth crowns without serrations, small first tooth in the premaxilla, increased spacing of teeth in the jaws, and possibly by having their nostrils placed further back and the presence of a deep neural spine sail. They also united the spinosaurids and their closest relatives in the superfamily Spinosauroidea, but in 2010, the palaeontologist Roger Benson considered this a junior synonym of Megalosauroidea (an older name). In a 2007 conference abstract, the palaeontologist Denver W. Fowler suggested that since Suchosaurus is the first named genus in its group, the clade names Spinosauroidea, Spinosauridae, and Baryonychinae should be replaced by Suchosauroidea, Suchosauridae, and Suchosaurinae, regardless of whether or not the name Baryonyx is retained. A 2017 study by the palaeontologists Marcos A. F. Sales and Cesar L. Schultz found that the clade Baryonychinae was not well supported, since serrated teeth may be an ancestral trait among spinosaurids. Barker and colleagues found support for a Baryonychinae-Spinosaurinae split in 2021, and the following cladogram shows the position of Baryonyx within Spinosauridae according to their study: Evolution Spinosaurids appear to have been widespread from the Barremian to the Cenomanian stages of the Cretaceous period, about 130 to 95 million years ago, while the oldest known spinosaurid remains date to the Middle Jurassic. They shared features such as long, narrow, crocodile-like skulls; sub-circular teeth, with fine to no serrations; the terminal rosette of the snout; and a secondary palate that made them more resistant to torsion. In contrast, the primitive and typical condition for theropods was a tall, narrow snout with blade-like (ziphodont) teeth with serrated carinae. 
The skull adaptations of spinosaurids converged with those of crocodilians; early members of the latter group had skulls similar to typical theropods, later developing elongated snouts, conical teeth, and secondary palates. These adaptations may have been the result of a dietary change from terrestrial prey to fish. Unlike those of crocodiles, the post-cranial skeletons of baryonychine spinosaurids do not appear to have had aquatic adaptations. Sereno and colleagues proposed in 1998 that the large thumb-claw and robust forelimbs of spinosaurids evolved in the Middle Jurassic, before the elongation of the skull and other adaptations related to fish-eating, since the former features are shared with their megalosaurid relatives. They also suggested that the spinosaurines and baryonychines diverged before the Barremian age of the Early Cretaceous. Several theories have been proposed about the biogeography of the spinosaurids. Since Suchomimus was more closely related to Baryonyx (from Europe) than to Spinosaurus—although that genus also lived in Africa—the distribution of spinosaurids cannot be explained as vicariance resulting from continental rifting. Sereno and colleagues proposed that spinosaurids were initially distributed across the supercontinent Pangea, but split with the opening of the Tethys Sea. Spinosaurines would then have evolved in the south (Africa and South America: in Gondwana) and baryonychines in the north (Europe: in Laurasia), with Suchomimus the result of a single north-to-south dispersal event. Buffetaut and the Tunisian palaeontologist Mohamed Ouaja also suggested in 2002 that baryonychines could be the ancestors of spinosaurines, which appear to have replaced the former in Africa. Milner suggested in 2003 that spinosaurids originated in Laurasia during the Jurassic, and dispersed via the Iberian land bridge into Gondwana, where they radiated.
In 2007, Buffetaut pointed out that palaeogeographical studies had demonstrated that Iberia was near northern Africa during the Early Cretaceous, which he found to confirm Milner's idea that the Iberian region was a stepping stone between Europe and Africa; this is supported by the presence of baryonychines in Iberia. The direction of the dispersal between Europe and Africa is still unknown, and subsequent discoveries of spinosaurid remains in Asia and possibly Australia indicate that it may have been complex. Candeiro and colleagues suggested in 2017 that spinosaurids of northern Gondwana were replaced by other predators, such as abelisauroids, since no definite spinosaurid fossils are known from after the Cenomanian anywhere in the world. They attributed the disappearance of spinosaurids and other shifts in the fauna of Gondwana to changes in the environment, perhaps caused by transgressions in sea level. Malafaia and colleagues stated in 2020 that Baryonyx remains the oldest unquestionable spinosaurid, while acknowledging that older remains had also been tentatively assigned to the group. Barker and colleagues found support for a European origin for spinosaurids in 2021, with an expansion to Asia and Gondwana during the first half of the Early Cretaceous. In contrast to Sereno, these authors suggested there had been at least two dispersal events from Europe to Africa, leading to Suchomimus and the African part of Spinosaurinae. Palaeobiology Diet and feeding In 1986, Charig and Milner suggested that the elongated snout of Baryonyx, with its many finely serrated teeth, indicated that it was piscivorous (fish-eating), speculating that it crouched on a riverbank and used its claw to gaff fish out of the water (similar to the modern grizzly bear). Two years earlier, Taquet had pointed out that the spinosaurid snouts from Niger were similar to those of the modern gharial and suggested a behaviour similar to that of herons or storks.
In 1987, the biologist Andrew Kitchener disputed the piscivorous behaviour of Baryonyx and suggested that it would have been a scavenger, using its long neck to feed on the ground, its claws to break into a carcass, and its long snout (with nostrils far back for breathing) for investigating the body cavity. Kitchener argued that the jaws and teeth of Baryonyx were too weak to kill other dinosaurs and too heavy to catch fish, and that it had too few adaptations for piscivory. According to the palaeontologist Robin E. H. Reid, a scavenged carcass would have been broken up by its predator, and large animals capable of doing so—such as grizzly bears—are also capable of catching fish (at least in shallow water). In 1997, Charig and Milner reported direct dietary evidence from the stomach region of the B. walkeri holotype. It contained the first evidence of piscivory in a theropod dinosaur: acid-etched scales and teeth of the common fish Scheenstia mantelli (then classified in the genus Lepidotes), and abraded or etched bones of a young iguanodontid. They also presented circumstantial evidence for piscivory, such as crocodile-like adaptations for catching and swallowing prey: long, narrow jaws with their "terminal rosette", similar to those of a gharial, and the downturned tip and notch of the snout. In their view, these adaptations suggested that Baryonyx would have caught small to medium-sized fish in the manner of a crocodilian: gripping them with the notch of the snout (giving the teeth a "stabbing function"), tilting the head backwards, and swallowing them headfirst. Larger fish would be broken up with the claws. That the teeth in the lower jaw were smaller, more crowded, and more numerous than those in the upper jaw may have helped the animal grip food. Charig and Milner maintained that Baryonyx would primarily have eaten fish (although it would also have been an active predator and opportunistic scavenger), but that it was not equipped to be a macro-predator like Allosaurus.
They suggested that Baryonyx mainly used its forelimbs and large claws to catch, kill, and tear apart larger prey. An apparent gastrolith (gizzard stone) was also found with the specimen. The German palaeontologist Oliver Wings suggested in 2007 that the few stones found in theropods like Baryonyx and Allosaurus could have been ingested by accident. In 2004, Buffetaut and colleagues reported a pterosaur neck vertebra from Brazil with a spinosaurid tooth embedded in it, which confirmed that spinosaurids were not exclusively piscivorous. A 2005 beam-theory study by the palaeontologist François Therrien and colleagues was unable to reconstruct force profiles of Baryonyx, but found that the related Suchomimus would have used the front part of its jaws to capture prey, and suggested that the jaws of spinosaurids were adapted for hunting smaller terrestrial prey in addition to fish. They envisaged that spinosaurids could have captured smaller prey with the rosette of teeth at the front of the jaws, and finished it off by shaking it. Larger prey would instead have been captured and killed with the forelimbs rather than the bite, since the skulls would not have been able to resist the bending stress. They also agreed that the conical teeth of spinosaurids were well suited for impaling and holding prey, with their shape enabling them to withstand bending loads from all directions. A 2007 finite element analysis of CT-scanned snouts by the palaeontologist Emily J. Rayfield and colleagues indicated that the biomechanics of Baryonyx were most similar to those of the gharial and unlike those of the American alligator and more-conventional theropods, supporting a piscivorous diet for spinosaurids. Their secondary palate helped them resist bending and torsion of their tubular snouts. A 2013 beam-theory study by the palaeontologists Andrew R.
Cuff and Rayfield compared the biomechanics of CT-scanned spinosaurid snouts with those of extant crocodilians, and found the snouts of Baryonyx and Spinosaurus similar in their resistance to bending and torsion. Baryonyx was found to have relatively high resistance in the snout to dorsoventral bending compared with Spinosaurus and the gharial. The authors concluded (in contrast to the 2007 study) that Baryonyx performed differently from the gharial; spinosaurids were not exclusive piscivores, and their diet was determined by their individual size. In a 2014 conference abstract, the palaeontologist Danny Anduza and Fowler pointed out that grizzly bears do not gaff fish out of the water as was suggested for Baryonyx, and also ruled out that the dinosaur would have darted its head like herons, since the necks of spinosaurids were not strongly S-curved and their eyes were not well positioned for binocular vision. Instead, they suggested the jaws would have made sideways sweeps to catch fish, like the gharial, with the hand claws probably used to stamp down and impale large fish, which would then have been manipulated with the jaws, in a manner similar to grizzly bears and fishing cats. They did not find the teeth of spinosaurids suitable for dismembering prey, due to their lack of serrations, and suggested they would have swallowed prey whole (while noting they could also have used their claws for dismemberment). A 2016 study by the palaeontologist Christophe Hendrickx and colleagues found that adult spinosaurids could displace their mandibular rami (halves of the lower jaw) sideways when the jaw was depressed, which allowed the pharynx (opening that connects the mouth to the oesophagus) to be widened. This jaw articulation is similar to that seen in pterosaurs and living pelicans, and would likewise have allowed spinosaurids to swallow large prey such as fish and other animals.
They also reported that some possible Portuguese Baryonyx fossils were found associated with isolated Iguanodon teeth, and listed this, along with other such associations, as support for opportunistic feeding behaviour in spinosaurids. Another 2016 study by the palaeontologist Romain Vullo and colleagues found that the jaws of spinosaurids were convergent with those of pike conger eels; these fish also have jaws that are compressed from side to side (whereas the jaws of crocodilians are compressed from top to bottom), an elongated snout with a "terminal rosette" that bears enlarged teeth, and a notch behind the rosette with smaller teeth. Such jaws likely evolved for grabbing prey in aquatic environments with low light, and may have helped in prey detection. A 2023 study by Barker and colleagues based on CT scans of the braincases of Baryonyx and Ceratosuchops found that the brain anatomy of these baryonychines was similar to that of other non-maniraptoriform theropods. Their neurosensory capabilities such as hearing and olfaction (sense of smell) were unexceptional, and their gaze stabilisation was less developed than that of spinosaurines, so their behavioural adaptations were probably comparable to those of other large-bodied terrestrial theropods. This suggests that their transition from terrestrial hypercarnivores to semi-aquatic "generalists" during their evolution did not require substantial modification of their brain and sensory systems. This could mean that spinosaurids were either pre-adapted for the detection and capture of aquatic prey, or that their transition to semi-aquatic lifestyles only required modifications to the bones associated with the mouth. Their reptile encephalisation quotient values imply that the cognitive capacity and behavioural sophistication of baryonychines did not deviate much from those of other basal theropods.
Motion and semi-aquatic habits In their original description, Charig and Milner did not consider Baryonyx to be aquatic (due to its nostrils being on the sides of its snout—far from the tip—and the form of the post-cranial skeleton), but thought it was capable of swimming, like most land vertebrates. They speculated that the elongated skull, long neck, and strong humerus of Baryonyx indicated that the animal was a facultative quadruped, unique among theropods. In their 1997 article they found no skeletal support for this, but maintained that the forelimbs would have been strong enough for a quadrupedal posture and that it would probably have caught aquatic prey while crouching—or on all fours—near (or in) water. A 2014 re-description of Spinosaurus by the palaeontologist Nizar Ibrahim and colleagues based on new remains suggested that it was a quadruped, based on its anterior centre of body mass. The authors found quadrupedality unlikely for Baryonyx, since the better-known legs of the closely related Suchomimus did not support this posture. In 2017, the palaeontologists David E. Hone and Holtz hypothesised that the head crests of spinosaurids were probably used for sexual or threat display. The authors also pointed out that (as in other theropods) there was no reason to believe that the forelimbs of Baryonyx were able to pronate (crossing the radius and ulna bones of the lower arm to turn the hand), and thereby make it able to rest or walk on its palms. Resting on or using the forelimbs for locomotion may have been possible (as indicated by tracks of a resting theropod), but if this was the norm, the forelimbs would probably have shown adaptations for this. Hone and Holtz furthermore suggested that the forelimbs of spinosaurids do not seem optimal for trapping prey, but instead appear similar to the forelimbs of digging animals. They suggested that the ability to dig would have been useful when excavating nests, digging for water, or reaching some kinds of prey.
Hone and Holtz also believed that spinosaurids would have waded and dipped in water rather than submerging themselves, due to their sparsity of aquatic adaptations. A 2010 study by the palaeontologist Romain Amiot and colleagues proposed that spinosaurids were semi-aquatic, based on the oxygen isotope composition of spinosaurid teeth from around the world compared with that of other theropods and extant animals. Spinosaurids probably spent much of the day in water, like crocodiles and hippopotamuses, and had a diet similar to the former; both were opportunistic predators. Since most spinosaurids do not appear to have anatomical adaptations for an aquatic lifestyle, the authors proposed that submersion in water was a means of thermoregulation similar to that of crocodiles and hippopotamuses. Spinosaurids may also have turned to aquatic habitats and piscivory to avoid competition with large, more-terrestrial theropods. In 2016, Sales and colleagues statistically examined the fossil distribution of spinosaurids, abelisaurids, and carcharodontosaurids, and concluded that spinosaurids had the strongest support for association with coastal palaeoenvironments. Spinosaurids also appear to have inhabited inland environments (with their distribution there being comparable to carcharodontosaurids), which indicates they may have been more generalist than usually thought. Sales and Schultz agreed in 2017 that spinosaurids were semi-aquatic and partially piscivorous, based on skull features such as conical teeth, snouts that were compressed from side to side, and retracted nostrils. They interpreted the fact that histological data indicates some spinosaurids were more terrestrial than others as reflecting ecological niche partitioning among them. 
As some spinosaurids have smaller nostrils than others, their olfactory abilities were presumably lesser, as in modern piscivorous animals, and they may instead have used other senses (such as vision and mechanoreception) when hunting fish. Olfaction may have been more useful for spinosaurids that also fed on terrestrial prey, such as baryonychines. A 2022 study by the palaeontologist Matteo Fabbri and colleagues revealed that Baryonyx possessed dense bones that would have allowed it to dive underwater. The same adaptation was found in the related Spinosaurus, and both are believed to have been subaqueous foragers that dived after aquatic prey, while Suchomimus was better adapted to a non-diving lifestyle according to their analysis. This discovery also showcases the ecologically disparate lifestyles of spinosaurids, with more hollow-boned genera apparently preferring to hunt in shallower water. Palaeoenvironment The Weald Clay Formation consists of sediments of Hauterivian (Lower Weald Clay) to Barremian (Upper Weald Clay) age, about 130–125 million years old. The original Baryonyx specimen was found in the latter, in clay representing non-marine still water, which has been interpreted as a fluvial or mudflat environment with shallow water, lagoons, and marshes. During the Early Cretaceous, the Weald area of Surrey, Sussex, and Kent was partly covered by the large, fresh-to-brackish water Wealden Lake. Two large rivers drained the northern area (where London now stands), flowing into the lake through a river delta; the Anglo-Paris Basin was in the south. Its climate was sub-tropical, similar to the present Mediterranean region. Since the Smokejack Clay Pit consists of different stratigraphic levels, fossil taxa found there are not necessarily contemporaneous. Dinosaurs from the locality include the ornithopods Mantellisaurus and Iguanodon, as well as small sauropods.
Other vertebrates from the Weald Clay include crocodiles, pterosaurs, lizards (such as Dorsetisaurus), amphibians, sharks (such as Hybodus), and bony fishes (including Scheenstia). Members of ten orders of insects have been identified, including Valditermes, Archisphex, and Pterinoblattina. Other invertebrates include ostracods, isopods, conchostracans, and bivalves. The plants Weichselia and the aquatic, herbaceous Bevhalstia were common. Other plants found include ferns, horsetails, club mosses, and conifers. Other dinosaurs from the Wessex Formation of the Isle of Wight where Baryonyx may have occurred include the theropods Riparovenator, Ceratosuchops, Neovenator, Eotyrannus, Aristosuchus, Thecocoelurus, Calamospondylus, and Ornithodesmus; the ornithopods Iguanodon, Hypsilophodon, and Valdosaurus; the sauropods Ornithopsis, Eucamerotus, and Chondrosteosaurus; and the ankylosaur Polacanthus. Barker and colleagues stated in 2021 that the identification of the two additional spinosaurids from the Wealden Supergroup, Riparovenator and Ceratosuchops, has implications for potential ecological separation within Spinosauridae if these and Baryonyx were contemporary and interacted. They cautioned that it is possible the Upper Weald Clay and Wessex Formations and the spinosaurids known from them were separated in time and distance. It is generally thought that large predators occur with small taxonomic diversity in any area due to ecological demands, yet many Mesozoic assemblages include two or more sympatric theropods that were comparable in size and morphology, and this also appears to have been the case for spinosaurids. Barker and colleagues suggested that high diversity within Spinosauridae in a given area may have been the result of environmental circumstances benefiting their niche. 
While it has been generally assumed that only identifiable anatomical traits related to resource partitioning allowed for coexistence of large theropods, Barker and colleagues noted that this does not preclude that similar and closely related taxa could coexist and overlap in ecological requirements. Possible niche partitioning could be in time (seasonal or daily), in space (between habitats in the same ecosystems), or depending on conditions, and they could also have been separated by their choice of habitat within their regions (which may have ranged in climate). Taphonomy Charig and Milner presented a possible scenario explaining the taphonomy (changes during decay and fossilisation) of the B. walkeri holotype specimen. The fine-grained sediments around the skeleton, and the fact that the bones were found close together (skull and forelimb elements at one end of the excavation area and the pelvis and hind-limb elements at the other), indicates that the environment was quiet at the time of deposition, and water currents did not carry the carcass far—possibly because the water was shallow. The area where the specimen died seems to have been suitable for a piscivorous animal. It may have caught fish and scavenged on the mud plain, becoming mired before it died and was buried. Since the bones are well-preserved and had no gnaw marks, the carcass appears to have been undisturbed by scavengers (suggesting that it was quickly covered by sediment). The disarticulation of the bones may have been the result of soft-tissue decomposition. Parts of the skeleton seem to have weathered to different degrees, perhaps because water levels changed or the sediments shifted (exposing parts of the skeleton). The girdle and limb bones, the dentary, and a rib were broken before fossilisation, perhaps from trampling by large animals while buried. Most of the tail appears to have been lost before fossilisation, perhaps due to scavenging, or having rotted and floated off. 
The orientation of the bones indicates that the carcass lay on its back (perhaps tilted slightly to the left, with the right side upwards), which may explain why all the lower teeth had fallen out of their sockets and some upper teeth were still in place.
Biology and health sciences
Theropods
Animals
1091927
https://en.wikipedia.org/wiki/Coelophysis
Coelophysis
Coelophysis is a genus of coelophysid theropod dinosaur that lived approximately 215 to 208.5 million years ago during the Late Triassic period, from the middle to late Norian age, in what is now the southwestern United States. Megapnosaurus was once considered to be a species within this genus, but this interpretation has been challenged since 2017 and the genus Megapnosaurus is now considered valid. Coelophysis was a small, slenderly built, ground-dwelling, bipedal carnivore that could grow up to long. It is one of the earliest known dinosaur genera. Scattered material representing similar animals has been found worldwide in some Late Triassic and Early Jurassic formations. The type species, C. bauri, was originally assigned to the genus Coelurus by Edward Drinker Cope in 1887 and moved to the new genus Coelophysis by Cope in 1889. The names Longosaurus and Rioarribasaurus are synonymous with Coelophysis. Coelophysis is one of the most specimen-rich dinosaur genera. History of discovery The type species of Coelophysis was originally named as a species of Coelurus. Edward Drinker Cope coined Coelophysis in 1889 as a new genus, distinct from Coelurus and Tanystropheus (in which C. bauri had previously been classified), to contain C. bauri, C. willistoni, and C. longicollis. David Baldwin, an amateur fossil collector working for Cope, had found the first remains of the dinosaur in 1881 in the Chinle Formation in northwestern New Mexico. Early in 1887, Cope referred the specimens collected to two new species of the genus Coelurus, C. bauri and C. longicollis. Later in 1887, Cope reassigned the material to yet another genus, Tanystropheus. Two years later, Cope corrected his classification after realizing differences in the vertebrae and named Coelophysis, with C. bauri as the type species; the species was named for Georg Baur, a comparative anatomist whose ideas were similar to Cope's. 
The name Coelophysis comes from the Greek words κοῖλος/koilos (meaning 'hollow') and φύσις/physis (meaning 'form'), together "hollow form", a reference to its hollow vertebrae. However, the first finds were too poorly preserved to give a complete picture of the new dinosaur. In 1947, a substantial "graveyard" of Coelophysis fossils was found by George Whitaker, assistant to American Museum of Natural History paleontologist Edwin H. Colbert, at Ghost Ranch, New Mexico, close to the original find. Colbert conducted a comprehensive study of all the fossils found up to that date and assigned them to Coelophysis. The Ghost Ranch specimens were so numerous, including many well-preserved and fully articulated specimens, that one of them has since become the diagnostic, or type, specimen for the entire genus, replacing the original, poorly preserved specimen. In the early 1990s, there was debate over the diagnostic characteristics of the first specimens collected, compared to the material excavated at the Ghost Ranch Coelophysis quarry. Some paleontologists were of the opinion that the original specimens were not diagnostic beyond themselves and, therefore, that the name C. bauri could not be applied to any additional specimens. They therefore applied a different name, Rioarribasaurus, to the Ghost Ranch quarry specimens. Since the numerous well-preserved Ghost Ranch specimens were used as Coelophysis in most of the scientific literature, the use of Rioarribasaurus would have been very inconvenient for researchers. A petition was therefore submitted to have the type specimen of Coelophysis transferred from the poorly preserved original specimen to one of the well-preserved Ghost Ranch specimens. This would make Rioarribasaurus a definite synonym of Coelophysis, specifically a junior objective synonym. 
In the end, the International Commission on Zoological Nomenclature (ICZN) voted to make one of the Ghost Ranch samples the actual type specimen for Coelophysis and dispose of the name Rioarribasaurus altogether (declaring it a nomen rejectum, or "rejected name"), thus resolving the confusion. The name Coelophysis became a nomen conservandum ("conserved name"). In a situation affecting many dinosaur taxa, some more recently discovered fossils were originally classified as new genera, but may be species of Coelophysis. For example, Prof. Mignon Talbot's 1911 discovery, which she named Podokesaurus holyokensis, has long been considered to be related to Coelophysis, and some modern scientists consider Podokesaurus a synonym of Coelophysis. Another specimen from the Portland Formation of the Hartford Basin, now at the Boston Museum of Science, has also been referred to Coelophysis. The specimen consists of sandstone casts of a pubis, tibia, three ribs, and a possible vertebra that probably originated in a quarry in Middletown, Connecticut. However, both the type specimen of Podokesaurus and the Middletown specimen are typically considered indeterminate theropods today. Sullivan & Lucas (1999) referred one specimen from Cope's original material of Coelophysis (AMNH 2706) to what they thought was a newly discovered theropod, Eucoelophysis. However, subsequent studies have shown that Eucoelophysis was misidentified and is actually a silesaurid, a type of non-dinosaurian ornithodiran closely related to Silesaurus. Formerly assigned species "Syntarsus" rhodesiensis was first described by Raath (1969) and assigned to Podokesauridae. The taxon Podokesauridae was abandoned because its type specimen was destroyed in a fire and can no longer be compared to new finds. Over the years, paleontologists assigned the genus to Ceratosauridae (Welles, 1984), Procompsognathidae (Parrish and Carpenter, 1986), and Ceratosauria (Gauthier, 1986). 
Tykoski and Rowe (2004) found "Syntarsus" to be synonymous with Coelophysis. Ezcurra and Novas (2007) and Ezcurra (2007) also concluded that "Syntarsus" was synonymous with Coelophysis. In a phylogenetic analysis by Ezcurra (2017), Megapnosaurus was recovered in a clade with Segisaurus and Camposaurus, supporting the generic distinction of Megapnosaurus. This was supported by Barta and colleagues in 2018, who noted that Coelophysis still bears the vestigial 5th metacarpal, a feature absent in Megapnosaurus. The genus Syntarsus was named by Raath in 1969 for the type species Syntarsus rhodesiensis from Africa and later applied to the North American Syntarsus kayentakatae. It was renamed by American entomologist Dr. Michael Ivie (Montana State University in Bozeman), Polish Australian Dr. Adam Ślipiński, and Polish Dr. Piotr Węgrzynowicz (Muzeum Ewolucji Instytutu Zoologii PAN of Warsaw), the three scientists who discovered that the genus name Syntarsus was already taken by a colydiine beetle described in 1869. Many paleontologists did not like the naming of Megapnosaurus, partially because taxonomists are generally expected to allow the original authors of a name to correct any mistakes in their work. Raath was aware of the homonymy between the dinosaur Syntarsus and the beetle Syntarsus, but the group who published Megapnosaurus claimed that they believed Raath was deceased and unable to correct his mistake, so they proceeded accordingly. Mortimer (2012) pointed out that "[p]aleontologists might have reacted more positively if the replacement name (Megapnosaurus) hadn't been facetious, translating to 'big dead lizard'". Yates (2005) analyzed Coelophysis and Megapnosaurus and concluded that the two genera are almost identical, suggesting that Megapnosaurus was possibly synonymous with Coelophysis. In 2004, Raath co-authored two papers in which he argued that Megapnosaurus (formerly Syntarsus) was a junior synonym of Coelophysis. 
Megapnosaurus was regarded by Paul (1988) and Downs (2000) as being congeneric with Coelophysis. In 1993, Paul suggested that Coelophysis should be placed in Megapnosaurus (then known as Syntarsus) to get around the above-mentioned taxonomic confusion. Downs (2000) examined Camposaurus and concluded that it was a junior synonym of Coelophysis because of its similarity to some of the Coelophysis Ghost Ranch specimens. However, a reassessment of the Camposaurus holotype by Martin Ezcurra and Stephen Brusatte, published in 2011, revealed a pair of autapomorphies, indicating that C. arizonensis was not a synonym of C. bauri, although it was a close relative of M. rhodesiensis. Barta et al. (2018) concluded that C. bauri differed from M. rhodesiensis in that it retains its 5th metacarpal, as well as in several features of the limb musculature according to Griffin (2018). Description Coelophysis is known from a number of complete fossil skeletons of the species C. bauri. This lightly built dinosaur measured up to long and was more than a meter tall at the hips. Gregory S. Paul (1988) estimated the weight of the gracile form at and the weight of the robust form at , but later presented a higher estimate of . Coelophysis was a bipedal, carnivorous, theropod dinosaur and a fast, agile runner. Despite its basal position within Theropoda, the bauplan of Coelophysis differed from those of other basal theropods, such as Herrerasaurus, showing more derived traits common in later theropods. The torso of Coelophysis conforms to the basic theropod bauplan, but the pectoral girdle displays some special characteristics. C. bauri had a furcula (wishbone), the earliest known example in a dinosaur. Coelophysis also preserves the ancestral condition of possessing four digits on the hand (manus). It had only three functional digits, with the fourth being embedded in the flesh of the hand. 
Coelophysis had narrow hips, arms adapted for grasping prey, and narrow feet. Its neck and tail were long and slender. The pelvis and hindlimbs of C. bauri are also slight variations on the theropod body plan. It has the open acetabulum and straight ankle hinge that define Dinosauria. The leg ended in a three-toed foot (pes) with a raised dewclaw (hallux). The tail had an unusual structure: the interlocking prezygapophyses of its vertebrae formed a semi-rigid lattice, apparently to stop the tail from moving up and down. Coelophysis had a long and narrow head (approximately ), with large, forward-facing eyes that afforded it stereoscopic vision and, as a result, excellent depth perception. Rinehart et al. (2004) described the complete sclerotic ring of a juvenile Coelophysis bauri (specimen NMMNH P-4200) and compared it to data on the sclerotic rings of reptiles (including birds), concluding that Coelophysis was a diurnal, visually oriented predator. The study found that the vision of Coelophysis was superior to most lizards' vision and ranked with that of modern birds of prey. The eyes of Coelophysis appear to be the closest to those of eagles and hawks, with a high power of accommodation. The data also suggested poor night vision, which would mean this dinosaur had a round pupil rather than a split pupil. Coelophysis had an elongated snout with large fenestrae that helped to reduce skull weight, while narrow struts of bone preserved the structural integrity of the skull. The neck had a pronounced sigmoid curve. The braincase is known in Coelophysis bauri, but little data could be derived because the skull was crushed. Unlike some other theropods, the cranial ornamentation of Coelophysis was not located at the top of its skull. Low, laterally raised bony ridges were present on the dorsolateral margin of the nasal and lacrimal bones in the skull, directly above the antorbital fenestra. 
Distinguishing anatomical features A diagnosis is a statement of the anatomical features of an organism (or group) that collectively distinguish it from all other organisms. Some, but not all, of the features in a diagnosis are also autapomorphies. An autapomorphy is a distinctive anatomical feature that is unique to a given organism or group. According to Ezcurra (2007) and Bristowe and Raath (2004), Coelophysis can be distinguished based on the absence of an offset rostral process of the maxilla, the quadrate being strongly inclined caudally, a small external mandibular fenestra (9–10% of the mandibular length), and an anteroposterior length of the ventral lacrimal process greater than 30% of its height. Several paleontologists consider Coelophysis bauri to be the same dinosaur as Megapnosaurus rhodesiensis (formerly Syntarsus), but many others have disputed this. Downs (2000) concluded that C. bauri differs from C. rhodesiensis in cervical length, proximal and distal leg proportions, and proximal caudal vertebral anatomy. Tykoski and Rowe (2004) concluded that C. bauri differs from M. rhodesiensis in that it lacks a pit at the base of the nasal process of the premaxilla. Bristowe and Raath (2004) concluded that C. bauri differs from M. rhodesiensis in having a longer maxillary tooth row. Barta et al. (2018) concluded that C. bauri differed from M. rhodesiensis in that it bears its 5th metacarpal. Griffin (2018) concluded that C. bauri differs from M. rhodesiensis in several features of the limb musculature. Classification Coelophysis is a distinct taxonomic unit (genus), composed of the single species C. bauri. Two additional originally described species, C. longicollis and C. willistoni, are now considered dubious and undiagnostic. M. rhodesiensis was referred to Coelophysis for several years, but it is likely its own genus and is known from the Early Jurassic of southern Africa. 
A third possible species is Coelophysis kayentakatae, previously referred to the genus Megapnosaurus, from the Kayenta Formation of the southwestern US. In recent phylogenetic analyses, "Syntarsus" kayentakatae has been shown to be distantly related to Coelophysis and Megapnosaurus, suggesting that it belongs to its own genus. Paleobiology Feeding The teeth of Coelophysis were typical of predatory dinosaurs, as they were blade-like, recurved, sharp, jagged, and finely serrated on both the anterior and posterior edges. Its dentition shows that it was carnivorous, probably preying on the small, lizard-like animals that were discovered with it. It may also have hunted in packs to tackle larger prey. Coelophysis bauri has approximately 26 teeth on the maxillary bone of the upper jaw and 27 teeth on the dentary bone of the lower jaw. Kenneth Carpenter (2002) examined the biomechanics of theropod arms and attempted to evaluate their usefulness in predation. He concluded that the arm of Coelophysis was flexible and had a good range of motion, but its bone structure suggested that it was comparatively weak. The "weak" arms and small teeth in this genus suggested that Coelophysis preyed upon animals that were substantially smaller than itself. Rinehart et al. agreed that Coelophysis was a "hunter of small, fast-moving prey". Carpenter also identified three distinct models of theropod arm use and noted that Coelophysis was a "combination grasper-clutcher", as compared to other dinosaurs that were "clutchers" or "long armed graspers". It has been suggested that C. bauri was a cannibal, based on supposed juvenile specimens found "within" the abdominal cavities of some Ghost Ranch specimens. However, Robert J. Gay showed in 2002 that these specimens were misinterpreted. Several specimens of "juvenile coelophysids" were actually small crurotarsan reptiles, such as Hesperosuchus. Gay's position was supported by a 2006 study by Nesbitt et al. 
In 2009, new evidence of cannibalism came to light when additional preparation of previously excavated matrix revealed regurgitated material in and around the mouth of Coelophysis specimen NMMNH P-44551. This material included tooth and jaw bone fragments that Rinehart et al. considered "morphologically identical" to a juvenile Coelophysis. In 2010, Gay examined the bones of juveniles found within the thoracic cavity of AMNH 7224 and calculated that the total volume of these bones was 17 times greater than the maximum estimated stomach volume of the Coelophysis specimen. Gay observed that the total volume would be even greater when considering that there would have been flesh on these bones. This analysis also noted the absence of tooth marks on the bones, as would be expected from defleshing, and the absence of expected pitting by stomach acids. Finally, Gay demonstrated that the alleged cannibalized juvenile bones were deposited stratigraphically below the larger animal that had supposedly cannibalized them. Taken together, these data suggested that the Coelophysis specimen AMNH 7224 was not a cannibal and that the bones of the juvenile and adult specimens were found in their final position as a result of "coincidental superposition of different sized individuals". Pack behavior The discovery of over 1,000 specimens of Coelophysis at the Whitaker quarry at Ghost Ranch has suggested gregarious behavior to researchers like Schwartz and Gillette. There is a tendency to see this massive congregation of animals as evidence for huge packs of Coelophysis roaming the land. No direct evidence for flocking exists, however, because the deposits only indicate that large numbers of Coelophysis, along with various other Triassic animals, were buried together. 
Some of the evidence from the taphonomy of the site indicates that these animals may have been gathered together to feed or drink from a depleted water hole or to feed on a spawning run of fish, being later buried in a catastrophic flash flood or a drought. With 30 specimens of C. rhodesiensis found together in Zimbabwe, some palaeontologists have suggested that Coelophysis was indeed gregarious. Again, there is no direct evidence of flocking in this case, and it has also been suggested that these individuals were victims of flash flooding, which appears to have been commonplace during this period. Growth and sexual dimorphism Rinehart (2009) assessed the ontogenetic growth of this genus using data gathered from the length of its upper leg bone (femur) and concluded that Coelophysis juveniles grew rapidly, especially during the first year of life. Coelophysis likely reached sexual maturity between the second and third year of life and reached its full size, just above 10 feet in length, by its eighth year. This study identified four distinct growth stages: 1-year, 2-year, 4-year, and 7+ year. It was also thought that, as soon as they were hatched, they would have to fend for themselves. Two "morphs" of Coelophysis have been identified. One is a more gracile form, as in specimen AMNH 7223, and the other is a slightly more robust form, as in specimens AMNH 7224 and NMMNH P-42200. Skeletal proportions were different between these two forms. The gracile form has a longer skull, a longer neck, shorter arms, and sacral neural spines that are fused. The robust form has a shorter skull, a shorter neck, longer arms, and unfused sacral neural spines. Historically, many arguments have been made that this represents some sort of dimorphism in the population of Coelophysis, probably sexual dimorphism. Raath agreed that dimorphism in Coelophysis is evidenced by the size and structure of the arm. Rinehart et al. 
studied 15 individuals, and agreed that two morphs were present, even in juvenile specimens, and suggested that sexual dimorphism was present early in life, prior to sexual maturity. Rinehart concluded that the gracile form was female and the robust form was male based on differences in the sacral vertebrae of the gracile form, which allowed for greater flexibility for egg laying. Further support for this position was provided by an analysis showing that each morph comprised 50% of the population, as would be expected in a 50/50 sex ratio. However, more recent research has found that C. bauri and C. rhodesiensis had highly variable growth between individuals, with some specimens being larger in their immature phase than smaller adults were when completely mature. This indicates that the supposed presence of distinct morphs is simply the result of individual variation. This highly variable growth was likely ancestral to dinosaurs but later lost, and may have given such early dinosaurs an evolutionary advantage in surviving harsh environmental challenges. Reproduction Through the compilation and analysis of a database of nearly three dozen reptiles (including birds) and comparison with existing data about the anatomy of Coelophysis, Rinehart et al. (2009) drew the following conclusions. It was estimated that the average egg of Coelophysis was 31–33.5 millimeters across its minor diameter and that each female would lay between 24 and 26 eggs in each clutch. The evidence suggested that some parental care was necessary to nurture the relatively small hatchlings during the first year of life, in which they would reach 1.5 meters in length by the end of their first growth stage. Coelophysis bauri invested as much energy in reproduction as other extinct reptiles of its approximate size. Paleopathology In a 2001 study conducted by Bruce Rothschild and other paleontologists, 14 foot bones referred to Coelophysis were examined for signs of stress fracture, but none were found. In C. 
rhodesiensis, healed fractures of the tibia and metatarsus have been observed, but are very rare. "[T]he supporting buttresses of the second sacral rib" in one specimen of Syntarsus rhodesiensis showed signs of fluctuating asymmetry. Fluctuating asymmetry results from developmental disturbances and is more common in populations under stress; it can therefore be informative about the quality of the conditions a dinosaur lived under. Ichnology Edwin H. Colbert has suggested that the theropod footprints referred to the ichnogenus Grallator, located in the Connecticut River Valley across Connecticut and Massachusetts, may have been made by Coelophysis. The footprints are from the Late Triassic to Early Jurassic aged Newark Supergroup. They clearly show digits II, III, and IV, but not I or V, an unusual condition for footprints of their age. Digits I and V were presumed to be stubby and ineffective, not touching the ground when the dinosaur was walking or running. More recently, David B. Weishampel and L. Young have suggested that the footprints were made by an unidentified, primitive saurischian similar to Coelophysis. Skeletal remains resembling Coelophysis have also been found in the valley, supporting the idea that a species similar to Coelophysis is responsible for the footprints. Paleoenvironment Specimens of Coelophysis have been recovered from the Chinle Formation of New Mexico and Arizona, most famously at the Ghost Ranch (Whitaker) quarry in the Rock Point member, among other quarries in the underlying Petrified Forest member. These sediments have been dated to approximately 212 million years ago, placing them in the middle Norian stage of the Late Triassic, although Thomas Holtz Jr. has instead placed them in the Rhaetian stage, from approximately 204 to 201.6 million years ago. C. 
rhodesiensis has been recovered from the Upper Elliot Formation in the Cape and Free State provinces of South Africa, as well as the Chitake River bonebed quarry in the Forest Sandstone Formation of Zimbabwe. Ghost Ranch was located close to the equator over 200 million years ago and had a warm, monsoon-like climate with heavy seasonal precipitation. Hayden Quarry, a new excavation site at Ghost Ranch, New Mexico, has yielded a diverse collection of fossil material that included the first evidence of dinosaurs and less-advanced dinosauromorphs from the same time period. The discovery indicates that the two groups lived together during the early Late Triassic period, around 235 million years ago. Therrien and Fastovsky (2001) examined the paleoenvironment of Coelophysis and other early theropods from Petrified Forest National Park in Arizona and determined that this genus lived in an environment that consisted of floodplains marked by distinct dry and wet seasons. There was a great deal of competition during drier times, when animals struggled for water in riverbeds that were drying up. In the upper sections of the Chinle Formation where Coelophysis is found, dinosaurs were rare. So far, only Chindesaurus and Daemonosaurus are known, the terrestrial fauna being dominated instead by other reptiles, such as the rhynchocephalian Whitakersaurus, the pseudosuchian Revueltosaurus, the aetosaurs Desmatosuchus, Typothorax, and Heliocanthus, the crocodylomorph Hesperosuchus, and the "rauisuchians" Shuvosaurus, Effigia, and Vivaron, along with other rare components like the dinosauriform Eucoelophysis and the amniote Kraterokheirodon. The waterways were home to the phytosaur Machaeroprosopus, the archosauromorph Vancleavea, the amphibians Apachesaurus and Koskinonodon, and the fishes Reticulodus, Arganodus, and Lasalichthyes. 
Taphonomy The multitude of specimens deposited so closely together at Ghost Ranch was probably the result of a flash flood that swept away a large number of Coelophysis and buried them quickly and simultaneously. In fact, it seems that such flooding was commonplace during this period of the Earth's history; indeed, the Petrified Forest of nearby Arizona is the result of a preserved log jam of tree trunks that were caught in one such flood. Whitaker quarry at Ghost Ranch is considered a monotaxic site because it features multiple individuals of a single taxon. The quality of preservation and the ontogenetic (age) range of the specimens helped make Coelophysis one of the best known of all genera. In 2009, Rinehart et al. noted that in one case the Coelophysis specimens were "washed into a topographic low containing a small pond, where they probably drowned and were buried by a sheet flood event from a nearby river." The 30 specimens of C. rhodesiensis found together in Zimbabwe were probably also the result of a flash flood that swept away and quickly buried a large number of individuals. Cultural significance Coelophysis was the second dinosaur in space, following Maiasaura (STS-51-F). A Coelophysis skull from the Carnegie Museum of Natural History was aboard the Space Shuttle Endeavour mission STS-89 when it left the atmosphere on 22 January 1998. It was also taken onto the space station Mir before being returned to Earth. Since the discovery of its fossils more than 100 years ago, Coelophysis has become one of the best-known dinosaurs in the literature. It was designated the official state fossil of New Mexico in 1981 and is now the logo of the New Mexico Museum of Natural History.
Biology and health sciences
Theropods
Animals
1091928
https://en.wikipedia.org/wiki/Hadrosauridae
Hadrosauridae
Hadrosaurids, or hadrosaurs, are members of the ornithischian family Hadrosauridae. This group is known as the duck-billed dinosaurs for the flat, duck-bill-like appearance of the bones in their snouts. The ornithopod family, which includes genera such as Edmontosaurus and Parasaurolophus, was a common group of herbivores during the Late Cretaceous Period. Hadrosaurids are descendants of the Late Jurassic/Early Cretaceous iguanodontian dinosaurs and had a similar body layout. Hadrosaurs were among the most dominant herbivores during the Late Cretaceous in Asia and North America, and towards the close of the Cretaceous several lineages dispersed into Europe, Africa, and South America. Like other ornithischians, hadrosaurids had a predentary bone and a pubic bone which was positioned backwards in the pelvis. Unlike more primitive iguanodonts, the teeth of hadrosaurids are stacked into complex structures known as dental batteries, which acted as effective grinding surfaces. Hadrosauridae is divided into two principal subfamilies: the lambeosaurines (Lambeosaurinae), which had hollow cranial crests or tubes; and the saurolophines (Saurolophinae), identified as hadrosaurines (Hadrosaurinae) in most pre-2010 works, which lacked hollow cranial crests (solid crests were present in some forms). Saurolophines tended to be bulkier than lambeosaurines. Lambeosaurines included the aralosaurins, tsintaosaurins, lambeosaurins, and parasaurolophins, while saurolophines included the brachylophosaurins, kritosaurins, saurolophins, and edmontosaurins. Hadrosaurids were facultative bipeds, with the young of some species walking mostly on two legs and the adults walking mostly on four. History of discovery Ferdinand Vandeveer Hayden, during expeditions near the Judith River in 1854 through 1856, discovered the very first dinosaur fossils recognized from North America. 
These specimens were obtained by Joseph Leidy, who described and named them in 1856; two of the several species named were Trachodon mirabilis of the Judith River Formation and Thespesius occidentalis of the "Great Lignite Formation". The former was based on a collection of teeth whilst the latter on two and a . Although most of the Trachodon teeth turned out to belong to ceratopsids, the holotype and the remains of T. occidentalis would come to be seen as the first recognized hadrosaur specimens. Around the same time in Philadelphia, on the other side of the continent, geologist William Parker Foulke was informed of numerous large bones accidentally uncovered by farmer John E. Hopkins some twenty years earlier. Foulke obtained permission to investigate the now scattered fossils in 1858, and these specimens as well were given to Leidy. They were described in the same year as Hadrosaurus foulkii, giving a slightly better picture of the form of a hadrosaur. Leidy provided additional description in an 1865 paper. In his 1858 work, Leidy briefly suggested that the animal was likely amphibious in nature; this school of thought about hadrosaurs would remain dominant for over a century. Further discoveries such as "Hadrosaurus minor" and "Ornithotarsus immanis" would come from the East, and Edward Drinker Cope led an expedition to the Judith River Formation where Trachodon was found. From the fragments discovered he named seven new species in two genera, as well as assigning material to Hadrosaurus. Cope had studied the jaws of hadrosaurs and come to the conclusion that the teeth were fragile and could have been dislodged incredibly easily. As such, he supposed the animals must have fed largely on soft water plants; he presented this idea to the Philadelphia Academy in 1883, and it would come to be very influential on future study. 
Research would continue in the Judith River area for years to come, but the formation never yielded much more than fragmentary remains, and Cope's species, as well as Trachodon itself, would in time be seen as of doubtful validity. The Eastern states, too, would never yield particularly informative specimens. Instead, other sites in the American West would come to provide many very complete specimens that would form the backbone of hadrosaur research. One such specimen was the very complete AMNH 5060 (belonging to Edmontosaurus annectens), recovered in 1908 by the fossil collector Charles Hazelius Sternberg and his three sons in Converse County, Wyoming. It was described by Henry Osborn in 1912, who dubbed it the "Dinosaur mummy". This specimen's skin was almost completely preserved in the form of impressions. The skin around its hands, thought to represent webbing, was seen as further bolstering the idea that hadrosaurs were very aquatic animals. Cope had planned to write a monograph about the group Ornithopoda, but never made much progress towards it before his death. This unrealized endeavor would come to be the inspiration for Richard Swann Lull and Nelda Wright to work on a similar project decades later. Eventually they realized the whole of Ornithopoda was too broad a scope, and the project was narrowed down to specifically North American hadrosaurs. Their monograph, Hadrosaurian Dinosaurs of North America, was published in 1942, and looked back at the whole of the understanding of the family to that point. It was designed as a definitive work, covering all aspects of their biology and evolution, and as part of it every known species was re-evaluated and many of them redescribed. They agreed with prior authors on the semi-aquatic nature of hadrosaurs, but re-evaluated Cope's idea of weak jaws and found quite the opposite. The teeth were rooted in strong batteries and would be continuously replaced to prevent them getting worn down. 
Such a system seemed incredibly overbuilt for the job of eating soft Mesozoic plants, and this fact puzzled the authors. Though they still proposed a diet of water plants, they considered it likely that this would be supplemented by occasional forays into browsing on land plants. Twenty years later, in 1964, another very important work was published, this time by John H. Ostrom. It challenged the idea that hadrosaurs were semi-aquatic animals, which had been held since the work of Leidy back in the 1850s. This new approach was backed by evidence of the environment and climate they lived in, co-existing flora and fauna, physical anatomy, and preserved stomach contents from mummies. Based on an evaluation of all this data, Ostrom found the idea that hadrosaurs were adapted for aquatic life severely lacking, and instead proposed that they were capable terrestrial animals that browsed on plants such as conifers. He remained uncertain, however, as to the purpose of the paddle-like hand Osborn had described, as well as their long and somewhat paddle-like tails. Thus he agreed with the idea that hadrosaurs would have taken refuge from predators in water. Numerous important studies would follow; Ostrom's student Peter Dodson published a paper on lambeosaur skull anatomy that made enormous changes to hadrosaur taxonomy in 1975, and Michael K. Brett-Surman conducted a full revision of the group as part of his graduate studies through the 1970s and 1980s. John R. Horner would also begin to make his mark on the field, including with the naming of Maiasaura in 1979. Hadrosaur research experienced a surge in the 2000s, mirroring research on other dinosaurs. In response, the Royal Ontario Museum and the Royal Tyrrell Museum collaborated to arrange the International Hadrosaur Symposium, a professional meeting about ongoing hadrosaur research that was held at the latter institution on September 22 and 23, 2011. 
Over fifty presentations were made at the event, thirty-six of which were later incorporated into a book, titled Hadrosaurs, published in 2015. The volume was brought together primarily by palaeontologists David A. Eberth and David C. Evans, and featured an afterword from John R. Horner, all of whom also contributed to one or more of the studies published therein. The first chapter of the volume was a study by David B. Weishampel on the rate of ornithopod research over history, and the interest in different aspects of it over that history, using the 2004 volume The Dinosauria as the source of data on the number of works published in each decade. Various periods of high and low activity were found, but the twenty-first century was found to be overwhelmingly the most prolific time, with over two hundred papers published. The advent of the internet was cited as a likely catalyst for this boom. Hadrosaur research experienced high levels of diversity within the decade, with previously uncommon subjects such as growth, phylogeny, and biogeography receiving more attention, though the functional morphology of hadrosaurids was found to have declined in study since the Dinosaur Renaissance. Distribution Hadrosaurids likely originated in North America, before dispersing shortly afterwards into Asia. During the late Campanian-Maastrichtian, a saurolophine hadrosaurid migrated into South America from North America, giving rise to the clade Austrokritosauria, which is closely related to the tribe Kritosaurini. During the late early Maastrichtian, several lineages of Lambeosaurinae from Asia migrated into the European Ibero-Armorican Island (what is now France and Spain), including Arenysaurini and Tsintaosaurini. One of these lineages later dispersed from Europe into North Africa, as evidenced by Ajnabia, a member of Arenysaurini. Classification The family Hadrosauridae was first used by Edward Drinker Cope in 1869, then containing only Hadrosaurus. 
Since its creation, a major division has been recognized in the group between the hollow-crested subfamily Lambeosaurinae and the subfamily Saurolophinae, historically known as Hadrosaurinae. Both of these have been robustly supported in all recent literature. Phylogenetic analysis has increased the resolution of hadrosaurid relationships considerably, leading to the widespread usage of tribes (a taxonomic unit below subfamily) to describe the finer relationships within each group of hadrosaurids. Lambeosaurines have traditionally been split into Parasaurolophini and Lambeosaurini. These terms entered the formal literature in Evans and Reisz's 2007 redescription of Lambeosaurus magnicristatus. Lambeosaurini is defined as all taxa more closely related to Lambeosaurus lambei than to Parasaurolophus walkeri, and Parasaurolophini as all those taxa closer to P. walkeri than to L. lambei. In recent years Tsintaosaurini and Aralosaurini have also emerged. The use of the term Hadrosaurinae was questioned in a comprehensive study of hadrosaurid relationships by Albert Prieto-Márquez in 2010. Prieto-Márquez noted that, though the name Hadrosaurinae had been used for the clade of mostly crestless hadrosaurids by nearly all previous studies, its type species, Hadrosaurus foulkii, has almost always been excluded from the clade that bears its name, in violation of the rules for naming animals set out by the ICZN. Prieto-Márquez defined Hadrosaurinae as just the lineage containing H. foulkii, and used the name Saurolophinae instead for the traditional grouping. Phylogeny Hadrosauridae was first defined as a clade, by Forster, in a 1997 abstract, as simply "Lambeosaurinae plus Hadrosaurinae and their most recent common ancestor". 
In 1998, Paul Sereno defined the clade Hadrosauridae as the most inclusive possible group containing Saurolophus (a well-known saurolophine) and Parasaurolophus (a well-known lambeosaurine), later emending the definition to include Hadrosaurus, the type genus of the family. According to Horner et al. (2004), Sereno's definition would place a few other well-known hadrosaurs (such as Telmatosaurus and Bactrosaurus) outside the family, which led them to define the family to include Telmatosaurus by default. Prieto-Márquez reviewed the phylogeny of Hadrosauridae in 2010, including many taxa potentially within the family. The family is now formally defined in the PhyloCode as "the smallest clade containing Hadrosaurus foulkii, Lambeosaurus lambei, and Saurolophus osborni". The two main subfamilies, Lambeosaurinae and Saurolophinae, belong to the clade Euhadrosauria (sometimes called Saurolophidae), defined as "the smallest clade containing Lambeosaurus lambei and Saurolophus osborni, provided it does not include Hadrosaurus foulkii". This clade excludes basal hadrosaurids such as Hadrosaurus and Yamatosaurus, but self-destructs if Hadrosaurus is descended from the last common ancestor of Lambeosaurus and Saurolophus. Below is a cladogram from Prieto-Márquez et al. 2016. This cladogram is a recent modification of the original 2010 analysis, including more characters and taxa. The resulting cladistic tree of their analysis was resolved using maximum parsimony. 61 hadrosauroid species were included, characterized by 273 morphological features: 189 cranial and 84 postcranial. When characters had multiple states that formed an evolutionary scheme, they were ordered to account for the evolution of one state into the next. The final tree was computed with TNT version 1.0. The following cladogram is from Ramírez-Velasco (2022), including most recently named taxa. 
Anatomy The most recognizable aspect of hadrosaur anatomy is the flattened and laterally stretched rostral bones, which give the distinctive duck-bill look. Some hadrosaurs also had massive crests on their heads, probably for display and/or to make sounds. In some genera, including Edmontosaurus, the whole front of the skull was flat and broadened out to form a beak, which was ideal for clipping leaves and twigs from the forests of Asia, Europe and North America. However, the back of the mouth contained thousands of teeth suitable for grinding food before it was swallowed. This has been hypothesized to have been a crucial factor in the success of this group in the Cretaceous compared to the sauropods. Skin impressions of multiple hadrosaurs have been found. From these impressions, the hadrosaurs were determined to be scaled, and not feathered like some dinosaurs of other groups. Hadrosaurs, much like sauropods, are noted for having the manus united in a fleshy, often nail-less pad. The two major divisions of hadrosaurids are differentiated by their cranial ornamentation. While members of the subfamily Lambeosaurinae have hollow crests that differ depending on species, members of the subfamily Saurolophinae (Hadrosaurinae) have solid crests or none at all. Lambeosaurine crests had air chambers that may have produced a distinct sound, meaning that their crests could have served as both an audio and visual display. Paleobiology Diet While studying the chewing methods of hadrosaurids in 2009, the paleontologists Vincent Williams, Paul Barrett, and Mark Purnell found that hadrosaurs likely grazed on horsetails and vegetation close to the ground, rather than browsing higher-growing leaves and twigs. This conclusion was based on the evenness of scratches on hadrosaur teeth, which suggested that hadrosaurs used the same series of jaw motions over and over again. 
As a result, the study determined that the hadrosaur diet was probably made up of leaves and lacked the bulkier items, such as twigs or stems, that might have required a different chewing method and created different wear patterns. However, Purnell said these conclusions were less secure than the more conclusive evidence regarding the motion of teeth while chewing. The hypothesis that hadrosaurs were likely grazers rather than browsers appears to contradict previous findings from stomach contents preserved in the fossilized guts of hadrosaurs in earlier studies. The most recent such finding before the publication of the Purnell study was made in 2008, when a team led by University of Colorado at Boulder graduate student Justin S. Tweet found a homogeneous accumulation of millimeter-scale leaf fragments in the gut region of a well-preserved, partially grown Brachylophosaurus. As a result of that finding, Tweet concluded in September 2008 that the animal was likely a browser, not a grazer. In response to such findings, Purnell said that preserved stomach contents are questionable because they do not necessarily represent the usual diet of the animal. The issue remains a subject of debate. Mallon et al. (2013) examined herbivore coexistence on the island continent of Laramidia during the Late Cretaceous. It was concluded that hadrosaurids could reach low-growing trees and shrubs that were out of the reach of ceratopsids, ankylosaurs, and other small herbivores. Hadrosaurids were capable of feeding up to a height of when standing quadrupedally, and up to a height of bipedally. Coprolites (fossilized droppings) of some Late Cretaceous hadrosaurs show that the animals sometimes deliberately ate rotting wood. Wood itself is not nutritious, but decomposing wood would have contained fungi, decomposed wood material and detritus-eating invertebrates, all of which would have been nutritious. 
Examination of hadrosaur coprolites from the Grand Staircase-Escalante indicates that shellfish such as crustaceans were also an important component of the hadrosaur diet. Chewing behaviour may have changed throughout life in hadrosaurids. Very young specimens show simple cup-shaped occlusion zones, or areas where the teeth contact one another in chewing, whereas in adulthood there is a "dual function" arrangement with two distinct areas of different tooth wear. This change during growth may have eased the transition from a diet of softer plants when young to tougher and more fibrous ones during adulthood. The shift between states, characterized by a more gradual transition from one wear state to the other, is thought to have occurred at different times in the growth of different species; in Hypacrosaurus stebingeri it did not occur until a nearly adult stage, whereas in a saurolophine specimen likely less than a year old from the Dinosaur Park Formation the transition had already begun. Neurology Hadrosaurs have been noted as having the most complex brains among ornithopods, and indeed among ornithischian dinosaurs as a whole. The brains of hadrosaurid dinosaurs have been studied as far back as the late 19th century, when Othniel Charles Marsh made an endocast of a specimen then referred to Claosaurus annectens; only basic remarks were possible, but it was noted that the organ was proportionally small. John Ostrom gave a more informed analysis and review in 1961, drawing on data from Edmontosaurus regalis, E. annectens, and Gryposaurus notabilis (then considered a synonym of Kritosaurus). Though the brains were still obviously small, Ostrom recognized that they may have been more significantly developed than expected, but supported the view that dinosaur brains would have only filled some of the endocranial cavity, limiting the possibility of analysis. 
In 1977 James Hopson introduced the use of estimated encephalization quotients (EQs) to the topic of dinosaur intelligence, finding Edmontosaurus to have an EQ of 1.5, above that of other ornithischians, including earlier relatives like Camptosaurus and Iguanodon, similar to that of carnosaurian theropods and modern crocodilians, but below that of coelurosaurian theropods. Reasons suggested for their comparably high intelligence were the need for acute senses given their lack of defensive weapons, and more complex intraspecific behaviours, as indicated by their acoustic and visual display structures. The advent of CT scanning in palaeontology has allowed endocasts to be studied far more widely without the need for specimen destruction. Modern research using these methods has focused largely on hadrosaurs. In a 2009 study by palaeontologist David C. Evans and colleagues, the brains of the lambeosaurine hadrosaur genera Hypacrosaurus (adult specimen ROM 702), Corythosaurus (juvenile specimen ROM 759 and subadult specimen CMN 34825), and Lambeosaurus (juvenile specimen ROM 758) were scanned and compared to each other (on a phylogenetic and ontogenetic level), to related taxa, and to previous predictions, the first such large-scale look into the neurology of the subfamily. Contra the early works, Evans' studies indicate that only some regions of the hadrosaur brain (the dorsal portion and much of the hindbrain) were loosely correlated to the endocranial cavity wall, as in modern reptiles, with the ventral and lateral regions correlating fairly closely. Also unlike modern reptiles, the brains of the juveniles did not seem to correlate any more closely to the cavity wall than those of adults. It was cautioned, however, that very young individuals were not included in the study. As with previous studies, EQ values were investigated, although a wider number range was given to account for uncertainty in brain and body mass. 
The range for the adult Hypacrosaurus was 2.3 to 3.7; the lowest end of this range was still higher than in modern reptiles and most non-maniraptoran dinosaurs (nearly all having EQs below two), but fell well short of the maniraptorans themselves, which had quotients higher than four. The size of the cerebral hemispheres was, for the first time, remarked upon. They were found to take up around 43% of endocranial volume (not considering the olfactory bulbs) in ROM 702. This is comparable to their size in saurolophine hadrosaurs, but far larger than in any ornithischians outside of Hadrosauriformes, and in all large saurischian dinosaurs; the maniraptoran Conchoraptor and the early bird Archaeopteryx had very similar proportions. This lends further support to the idea of complex behaviours and, for non-avian dinosaurs, relatively high intelligence in hadrosaurids. Amurosaurus, a close relative of the taxa from the 2009 study, was the subject of a 2013 paper once again examining a cranial endocast. A nearly identical EQ range of 2.3 to 3.8 was found, and it was again noted that this was higher than that of living reptiles, sauropods and other ornithischians, but different EQ estimates for theropods were cited, placing the hadrosaur numbers significantly below even more basal theropods like Ceratosaurus (with an EQ range of 3.31 to 5.07) and Allosaurus (with a range of 2.4 to 5.24, compared to only 1.6 in the 2009 study); more bird-like coelurosaurian theropods such as Troodon had stated EQs higher than seven. Additionally, the relative cerebral volume was only 30% in Amurosaurus, significantly lower than in Hypacrosaurus and closer to that of theropods like Tyrannosaurus (with 33%), though still distinctly larger than previously estimated numbers for more primitive iguanodonts like Lurdusaurus and Iguanodon (both at 19%). This demonstrated a previously unrecognized level of variation in neuroanatomy within Hadrosauridae. 
Reproduction Neonate-sized hadrosaur fossils have been documented in the scientific literature. Tiny hadrosaur footprints have been discovered in the Blackhawk Formation of Utah. In a 2001 review of hadrosaur eggshell and hatchling material from Alberta's Dinosaur Park Formation, Darren Tanke and M. K. Brett-Surman concluded that hadrosaurs nested in both the ancient uplands and lowlands of the formation's depositional environment. The upland nesting grounds may have been preferred by the less common hadrosaurs, like Brachylophosaurus and Parasaurolophus. However, the authors were unable to determine what specific factors shaped nesting ground choice in the formation's hadrosaurs. They suggested that behavior, diet, soil condition, and competition between dinosaur species all potentially influenced where hadrosaurs nested. Sub-centimeter fragments of pebbly-textured hadrosaur eggshell have been reported from the Dinosaur Park Formation. This eggshell is similar to the hadrosaur eggshell of Devil's Coulee in southern Alberta, as well as that of the Two Medicine and Judith River Formations in Montana, United States. While present, dinosaur eggshell is very rare in the Dinosaur Park Formation and is only found at two different microfossil sites. These sites are distinguished by large numbers of pisidiid clams and other less common shelled invertebrates, like unionid clams and snails. This association is not a coincidence, as the invertebrate shells would have slowly dissolved and released enough basic calcium carbonate to protect the eggshells from naturally occurring acids that would otherwise have dissolved them and prevented fossilization. In contrast with eggshell fossils, the remains of very young hadrosaurs are somewhat common. Tanke has observed that an experienced collector could discover multiple juvenile hadrosaur specimens in a single day. 
The most common remains of young hadrosaurs in the Dinosaur Park Formation are dentaries, bones from limbs and feet, as well as vertebral centra. The material showed little or none of the abrasion that would have resulted from transport, meaning the fossils were buried near their point of origin. Bonebeds 23, 28, 47, and 50 are productive sources of young hadrosaur remains in the formation, especially bonebed 50. The bones of juvenile hadrosaurs and fossil eggshell fragments are not known to have been preserved in association with each other, despite both being present in the formation. Growth and development The limbs of the juvenile hadrosaurs are anatomically and proportionally similar to those of adult animals. However, the joints often show "predepositional erosion or concave articular surfaces", which was probably due to the cartilaginous cap covering the ends of the bones. The pelvis of a young hadrosaur was similar to that of an older individual. Evidence suggests that young hadrosaurs would have walked on only their two hind legs, while adults would have walked on all four. As the animal aged, the front limbs became more robust in order to take on weight, while the back legs became less robust as they transitioned to walking on all four legs. Furthermore, the animals' front limbs were shorter than their back limbs. Daily activity patterns Comparisons between the scleral rings of several hadrosaur genera (Corythosaurus, Prosaurolophus, and Saurolophus) and modern birds and reptiles suggest that they may have been cathemeral, active throughout the day at short intervals. Pathology Spondyloarthropathy has been documented in the spine of a 78-million year old hadrosaurid. 
Other examples of pathologies in hadrosaurs include healed wounds from predators, such as those found in Edmontosaurus annectens, and tumors such as Langerhans cell histiocytosis, hemangiomas, desmoplastic fibroma, metastatic cancer, and osteoblastomas, found in genera such as Brachylophosaurus and Edmontosaurus. Osteochondrosis is also commonly found in hadrosaurs.
https://en.wikipedia.org/wiki/Bulk%20modulus
Bulk modulus
The bulk modulus (K, B, or κ) of a substance is a measure of the resistance of a substance to bulk compression. It is defined as the ratio of the infinitesimal pressure increase to the resulting relative decrease of the volume. Other moduli describe the material's response (strain) to other kinds of stress: the shear modulus describes the response to shear stress, and Young's modulus describes the response to normal (lengthwise stretching) stress. For a fluid, only the bulk modulus is meaningful. For a complex anisotropic solid such as wood or paper, these three moduli do not contain enough information to describe its behaviour, and one must use the full generalized Hooke's law. The reciprocal of the bulk modulus at fixed temperature is called the isothermal compressibility. Definition The bulk modulus K (which is usually positive) can be formally defined by the equation K = -V (dP/dV), where P is pressure, V is the initial volume of the substance, and dP/dV denotes the derivative of pressure with respect to volume. Since the volume is inversely proportional to the density, it follows that K = ρ (dP/dρ), where ρ is the initial density and dP/dρ denotes the derivative of pressure with respect to density. The inverse of the bulk modulus gives a substance's compressibility. Generally the bulk modulus is defined at constant temperature as the isothermal bulk modulus, but can also be defined at constant entropy as the adiabatic bulk modulus. Thermodynamic relation Strictly speaking, the bulk modulus is a thermodynamic quantity, and in order to specify a bulk modulus it is necessary to specify how the pressure varies during compression: constant-temperature (isothermal K_T), constant-entropy (isentropic K_S), and other variations are possible. Such distinctions are especially relevant for gases. For an ideal gas, an isentropic process has PV^γ = constant, where γ is the heat capacity ratio. Therefore, the isentropic bulk modulus is given by K_S = γP. Similarly, an isothermal process of an ideal gas has PV = constant. Therefore, the isothermal bulk modulus is given by K_T = P. 
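The two ideal-gas results (isothermal bulk modulus equal to the pressure, isentropic bulk modulus equal to γ times the pressure) can be illustrated with a minimal Python sketch. The function names and the choice of air at one standard atmosphere are illustrative assumptions, not from the source.

```python
# Minimal sketch of the ideal-gas bulk moduli: K_T = P and K_S = gamma * P.
# Function names and the example gas (air) are illustrative choices.

def isothermal_bulk_modulus(pressure_pa: float) -> float:
    """Isothermal bulk modulus of an ideal gas: K_T = P."""
    return pressure_pa

def isentropic_bulk_modulus(pressure_pa: float, gamma: float) -> float:
    """Isentropic bulk modulus of an ideal gas: K_S = gamma * P."""
    return gamma * pressure_pa

P_ATM = 101_325.0  # one standard atmosphere, in pascals

print(isothermal_bulk_modulus(P_ATM))       # 101325.0 Pa
print(isentropic_bulk_modulus(P_ATM, 1.4))  # ~141855 Pa for air (gamma ~ 1.4)
```

Note how stiff a gas is against rapid (isentropic) compression compared with slow (isothermal) compression: the ratio of the two moduli is exactly γ.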
When the gas is not ideal, these equations give only an approximation of the bulk modulus. In a fluid, the bulk modulus K and the density ρ determine the speed of sound c (pressure waves), according to the Newton-Laplace formula c = √(K/ρ). In solids, K_S and K_T have very similar values. Solids can also sustain transverse waves: for these materials one additional elastic modulus, for example the shear modulus, is needed to determine wave speeds. Measurement It is possible to measure the bulk modulus using powder diffraction under applied pressure. The bulk modulus of a fluid reflects its ability to change volume under applied pressure. Selected values A material with a bulk modulus of 35 GPa loses one percent of its volume when subjected to an external pressure of 0.35 GPa (assuming a constant or weakly pressure-dependent bulk modulus). Microscopic origin Interatomic potential and linear elasticity Since linear elasticity is a direct result of interatomic interaction, it is related to the extension/compression of bonds. It can then be derived from the interatomic potential for crystalline materials. First, let us examine the potential energy of two interacting atoms. Starting from very far apart, they will feel an attraction towards each other. As they approach each other, their potential energy will decrease. On the other hand, when two atoms are very close to each other, their total energy will be very high due to repulsive interaction. Together, these potentials guarantee an interatomic distance that achieves a minimal energy state. This occurs at some distance r0, where the total force is zero: dU/dr = 0 at r = r0, where U is the interatomic potential and r is the interatomic distance. This means the atoms are in equilibrium. To extend the two-atom approach to a solid, consider a simple model, say, a 1-D array of one element with interatomic distance r, where the equilibrium distance is r0. 
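The Newton-Laplace formula and the one-percent compression example above can both be checked numerically. In this sketch, the water values (K about 2.2 GPa, density 1000 kg/m³) are typical textbook figures used as assumptions; the function names are hypothetical.

```python
import math

def speed_of_sound(bulk_modulus_pa: float, density_kg_m3: float) -> float:
    """Newton-Laplace formula: c = sqrt(K / rho)."""
    return math.sqrt(bulk_modulus_pa / density_kg_m3)

def fractional_volume_change(delta_p_pa: float, bulk_modulus_pa: float) -> float:
    """For a constant bulk modulus, dV/V = -dP/K."""
    return -delta_p_pa / bulk_modulus_pa

# Typical textbook values for water: K ~ 2.2 GPa, rho ~ 1000 kg/m^3.
print(speed_of_sound(2.2e9, 1000.0))  # ~1483 m/s

# The worked example from the text: K = 35 GPa, dP = 0.35 GPa -> 1% volume loss.
print(fractional_volume_change(0.35e9, 35e9))  # -0.01
```

The second call reproduces the "Selected values" example: a 0.35 GPa pressure step on a 35 GPa material gives a one-percent relative volume decrease.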
Its potential energy-interatomic distance relationship has a similar form to the two-atom case, reaching a minimum at r0. The Taylor expansion of the potential about r0 is: U(r) = U(r0) + U'(r0)(r - r0) + (1/2)U''(r0)(r - r0)^2 + ... At equilibrium, the first derivative is zero, so the dominant term is the quadratic one. When the displacement is small, the higher-order terms can be neglected. The expression becomes: U(r) ≈ U(r0) + (1/2)U''(r0)(r - r0)^2, which gives a restoring force F = -U''(r0)(r - r0). This is clearly linear elasticity. Note that the derivation is done considering two neighboring atoms, so the effective Hooke's coefficient (spring constant) is k = U''(r0). This form can be easily extended to the 3-D case, with the volume per atom (Ω) in place of the interatomic distance.
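The harmonic expansion above can be verified numerically for a concrete pair potential. The sketch below uses a Lennard-Jones potential in reduced units (an illustrative assumption; the text does not specify a potential) and estimates the effective spring constant, the second derivative of U at the equilibrium spacing, by central differences.

```python
# Numerical check of the harmonic approximation, using a Lennard-Jones
# pair potential as an illustrative model (reduced units: epsilon = sigma = 1).

def lj_potential(r: float) -> float:
    """Lennard-Jones potential U(r) = 4[(1/r)**12 - (1/r)**6] in reduced units."""
    return 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)

def second_derivative(f, r: float, h: float = 1e-5) -> float:
    """Central-difference estimate of f''(r)."""
    return (f(r + h) - 2.0 * f(r) + f(r - h)) / (h * h)

r0 = 2.0 ** (1.0 / 6.0)  # equilibrium spacing, where U'(r0) = 0
k = second_derivative(lj_potential, r0)  # effective Hooke's constant U''(r0)
print(k)  # ~57.15, matching the analytic value 36 * 2**(2/3)
```

Because the first derivative vanishes at r0, the curvature U''(r0) alone sets the stiffness of the bond near equilibrium, which is exactly the linear-elastic regime described in the text.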
https://en.wikipedia.org/wiki/Cretoxyrhina
Cretoxyrhina
Cretoxyrhina (meaning 'Cretaceous sharp-nose') is an extinct genus of large mackerel shark that lived about 107 to 73 million years ago during the late Albian to late Campanian of the Late Cretaceous. The type species, C. mantelli, is more commonly referred to as the Ginsu shark, first popularized in reference to the Ginsu knife, as its theoretical feeding mechanism is often compared with the "slicing and dicing" action of the knife. Cretoxyrhina is traditionally classified as the likely sole member of the family Cretoxyrhinidae, but other taxonomic placements have been proposed, such as within the Alopiidae and Lamnidae. Measuring up to in length and weighing over , Cretoxyrhina was one of the largest sharks of its time. Having a similar appearance and build to the modern great white shark, it was an apex predator in its ecosystem and preyed on a large variety of marine animals including mosasaurs, plesiosaurs, sharks and other large fish, pterosaurs, and occasionally dinosaurs. Its teeth, up to long, were razor-like and had thick enamel built for stabbing and slicing prey. Cretoxyrhina was also among the fastest-swimming sharks, with hydrodynamic calculations suggesting burst speeds of up to . It has been speculated that Cretoxyrhina hunted by lunging at its prey at high speeds to inflict powerful blows, similar to the great white shark today, and relied on strong eyesight to do so. Since the late 19th century, several fossils of exceptionally well-preserved skeletons of Cretoxyrhina have been discovered in Kansas. Studies have successfully calculated its life history using vertebrae from some of the skeletons. Cretoxyrhina grew rapidly during early ages and reached sexual maturity at around four to five years of age. Its lifespan has been calculated to extend to nearly forty years. 
Anatomical analysis of the Cretoxyrhina skeletons revealed that the shark possessed facial and optical features most similar to those of thresher sharks and crocodile sharks, and had a hydrodynamic build that suggested the use of regional endothermy. As an apex predator, Cretoxyrhina played a critical role in the marine ecosystems it inhabited. It was a cosmopolitan genus and its fossils have been found worldwide, although most frequently in the Western Interior Seaway area of North America. It preferred mainly subtropical to temperate pelagic environments but was known in waters as cold as . Cretoxyrhina saw its peak in size by the Coniacian, but subsequently experienced a continuous decline until its extinction during the Campanian. One factor in this demise may have been increasing pressure from competition with predators that arose around the same time, most notably the giant mosasaur Tylosaurus. Other possible factors include the gradual disappearance of the Western Interior Seaway. Taxonomy Research history Cretoxyrhina was first described by the English paleontologist Gideon Mantell from eight C. mantelli teeth he collected from the Southerham Grey Pit near Lewes, East Sussex. In his 1822 book The Fossils of the South Downs, co-written with his partner Mary Ann Mantell, he identified them as teeth pertaining to two species of locally known modern sharks. Mantell identified the smaller teeth as from the common smooth-hound and the larger teeth as from the smooth hammerhead, expressing some hesitation about the latter. In the third volume of his book Recherches sur les poissons fossiles, published in 1843, Swiss naturalist Louis Agassiz reexamined Mantell's eight teeth. 
Using them and another tooth from the collection of the Strasbourg Museum (whose exact provenance was unspecified but which also came from England), he concluded that the fossils actually pertained to a single species of extinct shark that held strong dental similarities with the three species then classified in the now-invalid genus Oxyrhina: O. hastalis, O. xiphodon, and O. desorii. Agassiz placed the species in the genus Oxyrhina but noted that the much thicker root of its teeth made enough of a difference for it to be a distinct species, and scientifically classified the shark under the taxon Oxyrhina mantellii, named in honor of Mantell. During the late 19th century, paleontologists described numerous species that are now synonymized with Cretoxyrhina mantelli. According to some, there may have been as many as 30 different synonyms of O. mantelli at the time. Most of these species were derived from teeth that represented variations of C. mantelli but deviated from the exact characteristics of the syntypes. For example, in 1870, French paleontologist Henri Sauvage identified teeth from Sarthe, France, that greatly resembled the O. mantelli syntypes from England. The teeth also included lateral cusplets (small enameled cusps that appear at the base of the tooth's main crown), which are not present in the syntypes, and on this basis he described the teeth under the species name Otodus oxyrinoides. In 1873, American paleontologist Joseph Leidy identified teeth from Kansas and Mississippi and described them under the species name Oxyrhina extenta. These teeth were broader and more robust than the O. mantelli syntypes from England. This all changed with the discoveries of some exceptionally well-preserved skeletons of the shark in the Niobrara Formation in western Kansas. Charles H. Sternberg discovered the first skeleton in 1890, which he described in a 1907 paper. Charles R. Eastman published his analysis of the skeleton in 1894. 
In the paper, he reconstructed the dentition based on the skeleton's disarticulated tooth set. Using the reconstruction, Eastman examined the many extinct shark species and found that their fossils were actually different tooth types of O. mantelli, all of which he moved into the species. This skeleton, which Sternberg had sold to the Ludwig Maximilian University of Munich, was destroyed in 1944 by Allied bombing during World War II. In 1891, Sternberg's son George F. Sternberg discovered a second O. mantelli skeleton, now housed in the University of Kansas Museum of Natural History as KUVP 247. This skeleton was reported to measure in length and consists of a partial vertebral column with skeletal remains of a Xiphactinus as stomach contents, and partial jaws with about 150 teeth visible. This skeleton was considered to be one of the greatest scientific discoveries of that year due to the unexpected preservation of cartilage. George F. Sternberg would later discover more O. mantelli skeletons throughout his career. His most notable finds were FHSM VP-323 and FHSM VP-2187, found in 1950 and 1965 respectively. The former is a partial skeleton consisting of a well-preserved set of jaws, five pairs of gills, and some vertebrae, while the latter is a near-complete skeleton with an almost complete vertebral column and an exceptionally preserved skull holding much of the cranial elements, jaws, teeth, a set of scales, and fragments of the pectoral girdles and fins in their natural positions. Both skeletons are currently housed in the Sternberg Museum of Natural History. In 1968, a collector named Tim Basgall discovered another notable skeleton that, similar to FHSM VP-2187, also consisted of a near-complete vertebral column and a partially preserved skull. This fossil is housed in the University of Kansas Museum of Natural History as KUVP 69102. In 1958, Soviet paleontologist Leonid Glickman found that the dental design of O. 
mantelli reconstructed by Eastman was distinct enough to warrant a new genus, Cretoxyrhina. He also identified a second species of Cretoxyrhina based on some of the earlier Cretoxyrhina teeth, which he named Cretoxyrhina denticulata. Originally, Glickman designated C. mantelli as the type species, but he abruptly replaced it with another taxon identified as 'Isurus denticulatus' without explanation in a 1964 paper, a move now rejected as an invalid taxonomic amendment. This nevertheless led Russian paleontologist Viktor Zhelezko to erroneously invalidate the genus Cretoxyrhina in a 2000 paper by synonymizing 'Isurus denticulatus' (and thus the genus Cretoxyrhina as a whole) with another taxon identified as 'Pseudoisurus tomosus'. Zhelezko also described a new species congeneric with C. mantelli based on tooth material from Kazakhstan, which he identified as Pseudoisurus vraconensis in accordance with his taxonomic reassessment. A 2013 study led by Western Australian Museum curator and paleontologist Mikael Siverson corrected the taxonomic error, reinstating the genus Cretoxyrhina and moving P. vraconensis into it. In 2010, British and Canadian paleontologists Charlie Underwood and Stephen Cumbaa described Telodontaspis agassizensis from teeth found in Lake Agassiz in Manitoba that had previously been identified as juvenile Cretoxyrhina teeth. The species was reassigned to the genus Cretoxyrhina by a 2013 study led by American paleontologist Michael Newbrey using additional fossil material of the same species found in Western Australia. Between 1997 and 2008, paleontologist Kenshu Shimada published a series of papers in which he analyzed the skeletons of C. mantelli, including those found by the Sternbergs, using modernized techniques to extensively research the possible biology of Cretoxyrhina.
Some of his works include the development of more accurate dental, morphological, physiological, and paleoecological reconstructions, ontogenetic studies, and morphology-based phylogenetic studies of Cretoxyrhina. Shimada's research helped shed new light on the understanding of the shark and, through his new methods, of other extinct animals.

Etymology

Cretoxyrhina is a portmanteau of creto (short for Cretaceous) prefixed to the genus name Oxyrhina, which is derived from the Ancient Greek ὀξύς (oxús, "sharp") and ῥίς (rhís, "nose"). Put together, they mean "Cretaceous sharp-nose", although Cretoxyrhina is believed to have had a rather blunt snout. The type species name mantelli translates to "from Mantell"; Louis Agassiz chose it in honor of English paleontologist Gideon Mantell, who supplied him the syntypes of the species. The species name denticulata is derived from the Latin word denticulus (small tooth) and the suffix āta (possession of), together meaning "having small teeth", a reference to the presence of lateral cusplets in most teeth of C. denticulata. The species name vraconensis is derived from the word Vracon and the Latin ensis (from), meaning "from Vracon", a reference to the Vraconian substage of the Albian stage in which the species was discovered. The species name agassizensis is derived from the name Agassiz and the Latin ensis (from), meaning "from Agassiz", after Lake Agassiz, where the species was discovered. Coincidentally, the lake itself is named in honor of Louis Agassiz. The common name Ginsu shark, originally coined in 1999 by paleontologists Mike Everhart and Kenshu Shimada, is a reference to the Ginsu knife, as the theoretical feeding mechanism of C. mantelli was often compared to the "slicing and dicing" action of the knife.
Phylogeny and evolution

Cretoxyrhina bore a resemblance to the modern great white shark in size, shape, and ecology, but the two sharks are not closely related; their similarities are a result of convergent evolution. Cretoxyrhina has traditionally been grouped within the Cretoxyrhinidae, a family of lamnoid sharks that traditionally included other genera, resulting in a paraphyletic or polyphyletic family. Siverson (1999) remarked that Cretoxyrhinidae was used as a 'wastebasket taxon' for Cretaceous and Paleogene sharks and declared that Cretoxyrhina was the only valid member of the family. Cretoxyrhina contains four valid species: C. vraconensis, C. denticulata, C. agassizensis, and C. mantelli. These species form a chronospecies sequence, a single gradually evolving lineage. The earliest fossils of Cretoxyrhina are dated to about 107 million years ago and belong to C. vraconensis. The genus progressed as C. vraconensis evolved into C. denticulata during the Early Cenomanian, which evolved into C. agassizensis during the Mid-Cenomanian, which in turn evolved into C. mantelli during the Late Cenomanian. Notably, C. agassizensis persisted until the Mid-Turonian and was briefly contemporaneous with C. mantelli. This progression was characterized by the reduction of lateral cusplets and the increasing size and robustness of the teeth. The Late Albian–Mid-Turonian interval saw mainly the reduction of lateral cusplets; C. vraconensis possessed lateral cusplets in all teeth except a few in the anterior position, and by the end of the interval, in C. mantelli, they had gradually become restricted to the back lateroposteriors of adults. The Late Cenomanian–Coniacian interval was characterized by a rapid increase in tooth (and body) size, a significant decrease in the crown height/crown width ratio, and a transition from a tearing-type to a cutting-type tooth form. Tooth size of C.
mantelli individuals inside the Western Interior Seaway peaked around 86 Ma during the latest Coniacian and then began to slowly decline. In Europe, this peak took place earlier, during the Late Turonian. The youngest fossil of C. mantelli was found in the Bearpaw Formation of Alberta and dated to 73.2 million years ago. A single tooth identified as Cretoxyrhina sp. was recovered from the nearby Horseshoe Canyon Formation and dated to 70.44 million years ago, suggesting that Cretoxyrhina may have survived into the Maastrichtian. However, the Horseshoe Canyon Formation contains only brackish-water deposits, despite Cretoxyrhina being a marine shark, making it likely that the fossil was reworked from an older layer. A phylogenetic study using morphological data conducted by Shimada in 2005 suggested that Cretoxyrhina may alternatively be congeneric with the genus of the modern thresher sharks; the study also stated that the results were premature and possibly inaccurate and recommended that Cretoxyrhina be kept within the family Cretoxyrhinidae, mainly citing the lack of substantial data during the analysis. Another possible model for Cretoxyrhina evolution, proposed in 2014 by paleontologist Cajus Diedrich, suggests that C. mantelli was congeneric with the mako sharks of the genus Isurus and was part of an extended Isurus lineage beginning as far back as the Aptian stage in the Early Cretaceous. According to this model, the Isurus lineage was initiated by 'Isurus appendiculatus' (Cretolamna appendiculata), which evolved into 'Isurus denticulatus' (Cretoxyrhina mantelli) in the Mid-Cenomanian, then 'Isurus mantelli' (Cretoxyrhina mantelli) at the beginning of the Coniacian, then Isurus schoutedenti during the Paleocene, then Isurus praecursor, from which the rest of the Isurus lineage continues. The study claimed that the absence of corresponding fossils during the Maastrichtian (72–66 Ma) was not a result of a premature extinction of C.
mantelli, but merely a gap in the fossil record. Shimada and fellow paleontologist Phillip Sternes published a poster in 2018 that voiced doubt over the credibility of this proposal, noting that the study's interpretation is largely based on arbitrarily selected data and that it failed to cite either Shimada (1997) or Shimada (2005), which are key papers regarding the systematic position of C. mantelli.

Biology

Morphology

Dentition

Distinguishing characteristics of Cretoxyrhina teeth include a nearly symmetrical or slanted triangular shape, razor-like and non-serrated cutting edges, visible tooth necks (bourlette), and a thick enamel coating. The dentition of Cretoxyrhina possesses the basic dental characteristics of a mackerel shark, with tooth rows closely spaced without any overlap. Anterior teeth are straight and near-symmetrical, while lateroposterior teeth are slanted. The lingual side of the tooth, facing the mouth, is convex and possesses a massive protuberance and nutrient grooves on the root, whereas the labial side, which faces outwards, is flat or slightly swollen. Juveniles possessed lateral cusplets in all teeth, and C. vraconensis consistently retained them in adulthood. Lateral cusplets were retained in adulthood on all lateroposterior teeth in C. denticulata and C. agassizensis, but only on the back lateroposterior teeth in C. mantelli. The lateral cusplets of C. vraconensis and C. denticulata are rounded, while those of C. agassizensis are sharpened to a point. The anterior teeth of C. vraconensis measure in height, while the largest known tooth of C. denticulata measures . C. mantelli teeth are larger, measuring in average slant height. The largest tooth discovered from this species may have measured up to . The dentition of C. mantelli is among the best known of all extinct sharks, thanks to fossil skeletons like FHSM VP-2187, which preserves a near-complete articulated dentition. Other C.
mantelli skeletons, such as KUVP-247 and KUVP-69102, also include partial jaws with some teeth in their natural positions, some of which were not present in more complete skeletons like FHSM VP-2187. Using these specimens, the dental formula was reconstructed by Shimada (1997) and revised by Shimada (2002). It indicates that, from front to back, C. mantelli had four symphysials (small teeth located at the symphysis of the jaw), two anteriors, four intermediates, and eleven or more lateroposteriors in the upper jaw, and possibly one symphysial, two anteriors, one intermediate, and fifteen or more lateroposteriors in the lower jaw. The structure of the tooth row shows a dental arrangement suited for a feeding behavior similar to that of modern mako sharks: large spear-like anteriors to stab and anchor prey and curved lateroposteriors to cut it into bite-size pieces, a mechanism often informally described as "slicing and dicing" by paleontologists. In 2011, paleontologists Jim Bourdon and Mike Everhart reconstructed the dentitions of multiple C. mantelli individuals based on their associated tooth sets. They discovered that two of these reconstructions show notable differences in the size of the first intermediate (I1) tooth and in lateral profile, concluding that these differences could represent sexual dimorphism or individual variation.

Skull

Analysis of skull and scale patterns suggests that C. mantelli had a conical head with a dorsally flat and wide skull. The rostrum does not extend much forward from the frontal margin of the braincase, suggesting that the snout was blunt. Similar to modern crocodile sharks and thresher sharks, C. mantelli had proportionally large eyes, with the orbit taking up roughly one-third of the entire skull length, giving it acute vision. As a predator, good eyesight was important when hunting the large prey Cretoxyrhina fed on.
In contrast, the more smell-reliant contemporaneous anacoracid Squalicorax's less advanced orbital but stronger olfactory processes were more suitable for an opportunistic scavenger. The jaws of C. mantelli were kinetically powerful. They have a slightly looser anterior curvature and a more robust build than those of modern mako sharks, but were otherwise similar in general shape. The hyomandibula is elongated and is believed to have swung laterally, which would allow jaw protrusion and deep biting.

Skeletal anatomy

Most species of Cretoxyrhina are represented only by fossil teeth and vertebrae. Like all sharks, the skeleton of Cretoxyrhina was made of cartilage, which is less capable of fossilization than bone. However, fossils of C. mantelli from the Niobrara Formation have been found exceptionally well preserved; this is because the formation's chalk has a high calcium content, making calcification more prevalent. When calcified, soft tissue hardens, making it more prone to fossilization. Numerous skeletons consisting of near-complete vertebral columns have been found. The largest vertebrae measured up to in diameter. The two specimens with the best-preserved vertebral columns (FHSM VP-2187 and KUVP 69102) have 218 and 201 centra, respectively, with nearly all vertebrae in the column preserved; only portions of the tail tip are missing in both. Estimates of tail length yield a total vertebral count of approximately 230 centra, which is unique, as all known extant mackerel sharks possess a vertebral count of either fewer than 197 or greater than 282, with none in between. The vertebral centra in the trunk region were large and circular, creating an overall spindle-shaped body with a stocky trunk. Analysis of a partially complete tail fin fossil shows that Cretoxyrhina had a lunate (crescent-shaped) tail most similar to those of modern lamnid sharks, whale sharks, and basking sharks.
The transition to tail vertebrae is estimated to occur between the 140th and 160th vertebrae of the total 230, resulting in a tail vertebral count of 70–90 and making up approximately 30–39% of the vertebral column. The transition from precaudal vertebrae (the set of vertebrae before the tail vertebrae) to tail vertebrae is also marked by a vertebral bend of about 45°, which is the highest such angle known and is mostly found in fast-swimming sharks, such as lamnids. These properties of the tail, along with other features such as smooth scales parallel to the body axis, a plesodic pectoral fin (a pectoral fin in which cartilage extends throughout, giving it a more secure structure that helps decrease drag), and a spindle-shaped stocky build, show that C. mantelli was capable of fast swimming.

Physiology

Thermoregulation

Cretoxyrhina represents one of the earliest forms and possible origins of endothermy in mackerel sharks. Possessing regional endothermy (also known as mesothermy), it may have had a build similar to that of modern regionally endothermic sharks, such as members of the thresher shark and lamnid families, in which red muscles lie closer to the body axis than in ectothermic sharks (whose red muscles lie closer to the body circumference) and a system of specialized blood vessels called the rete mirabile (Latin for "wonderful net") is present, allowing metabolic heat to be conserved and exchanged with vital organs. This morphological build allows the shark to be partially warm-blooded and thus function efficiently in the colder environments where Cretoxyrhina has been found. Fossils have been found in areas where paleoclimatic estimates show a surface temperature as low as . Regional endothermy in Cretoxyrhina may have developed in response to increasing pressure from progressive global cooling and competition from mosasaurs and other marine reptiles that had also developed endothermy.
Hydrodynamics and locomotion

Cretoxyrhina possessed highly dense, overlapping placoid scales aligned parallel to the body axis and patterned into parallel keels separated by U-shaped grooves, each groove having a mean width of about 45 micrometers. This scale formation allows efficient drag reduction and enhanced high-speed swimming, a morphotype seen only in the fastest known sharks. Cretoxyrhina also had the most extreme case of a "Type 4" tail fin, having the highest known Cobb's angle (curvature of the tail vertebrae) and tail cartilage angle (49° and 133°, respectively) ever recorded in mackerel sharks. A "Type 4" tail fin structure represents a build with maximum symmetry of the tail fin lobes, which is most efficient for fast swimming; among sharks, it is found only in lamnids. A 2017 study by PhD student Humberto Ferrón analyzed the relationships between morphological variables, including the skeleton and tail fin of C. mantelli, and modeled an average cruising speed of and a burst swimming speed of around , making Cretoxyrhina possibly one of the fastest sharks known. For comparison, the modern great white shark has been modeled to reach speeds of up to , and the shortfin mako, the fastest extant shark, has been modeled at a speed of .

Life history

Reproduction

Although no fossil evidence for it has been found, it is inferred that Cretoxyrhina was ovoviviparous, as all modern mackerel sharks are. In ovoviviparous sharks, young hatch and grow inside the mother while competing against litter-mates through cannibalism such as oophagy (egg eating). As Cretoxyrhina inhabited oligotrophic and pelagic waters where predators such as large mosasaurs like Tylosaurus and macropredatory fish like Xiphactinus were common, it was likely also an r-selected shark, producing many offspring to offset high infant mortality rates.
Similarly, pelagic sharks such as the great white shark, thresher sharks, mako sharks, porbeagle shark, and crocodile shark produce two to nine infants per litter.

Growth and longevity

Like all mackerel sharks, Cretoxyrhina deposited a growth ring in its vertebrae every year, and individuals can be aged by measuring each band; due to the rarity of well-preserved vertebrae, only a few Cretoxyrhina individuals have been aged. In Shimada (1997), a linear equation was developed for calculating the total body length of Cretoxyrhina from the diameter of a vertebral centrum (the body of a vertebra). Using this linear equation, measurements were first conducted on the best-preserved C. mantelli specimen, FHSM VP-2187, by Shimada (2008). The measurements showed a length of and a weight of about at birth, and rapid growth in the first two years of life, with body length doubling within 3.3 years. From then on, growth became steady and gradual, at a mean estimate of per year, until the shark's death at around 15 years of age, by which time it had grown to . Using the von Bertalanffy growth model on FHSM VP-2187, the maximum lifespan of C. mantelli was estimated at 38.2 years; by that age, it would have grown over long. Based on allometric scaling of a great white shark, Shimada found that such an individual would have weighed as much as . A remeasurement conducted by Newbrey et al. (2013) found that C. mantelli and C. agassizensis reached sexual maturity at around four to five years of age and proposed a possible revision to the measurements of the growth rings in FHSM VP-2187. Using the new measurements, the lifespan of FHSM VP-2187 and the maximum lifespan of C. mantelli were proposed to be revised to 18 and 21 years, respectively. A 2019 study led by Italian scientist Jacopo Amalfitano briefly measured the vertebrae from two C.
mantelli fossils and found that the older individual died at around 26 years of age. Measurements were also conducted on other C. mantelli skeletons and a vertebra of C. agassizensis, yielding similar rates of rapid growth in the early stages of life. Such rapid growth within mere years could have helped Cretoxyrhina survive by quickly phasing out of infancy and its vulnerabilities, as a fully grown adult would have had few natural predators. The 2013 study also identified a syntype tooth of C. mantelli from England and calculated the individual's maximum length at , making the tooth the largest known specimen yet. Applying the allometric scaling used in Shimada (2008), a C. mantelli of such length would yield an estimated body mass of around . Shimada (2008) also charted the length growth per year of FHSM VP-2187. Other species were estimated to have been significantly smaller. C. denticulata and C. vraconensis reached a total body length of up to as adults; based on allometric scaling, individuals of such length would have weighed about . C. agassizensis was even smaller, with an estimated total body length of up to based on a tooth specimen (P2989.152) measuring tall.

Paleobiology

Prey relationships

The powerful kinetic jaws, high-speed capabilities, and large size of Cretoxyrhina suggest a very aggressive predator. Cretoxyrhina's association with a diverse range of fossils showing signs of devourment confirms that it was an active apex predator that fed on much of the variety of marine megafauna in the Late Cretaceous. It occupied the highest trophic level, a position shared only with large mosasaurs such as Tylosaurus during the latter stages of the Late Cretaceous, and played a critical role in many marine ecosystems. Cretoxyrhina mainly preyed on other active predators, including ichthyodectids (a type of large fish that includes Xiphactinus), plesiosaurs, turtles, mosasaurs, and other sharks.
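The von Bertalanffy growth model used in the age and length estimates discussed earlier can be sketched numerically. The parameter values below are hypothetical placeholders for illustration only, not the values fitted by Shimada (2008):

```python
import math

def von_bertalanffy_length(t, l_inf, k, t0):
    """von Bertalanffy growth function: length at age t (years).

    l_inf: asymptotic maximum length, k: growth coefficient,
    t0: theoretical age at zero length. All parameter values used
    here are hypothetical, chosen only to illustrate the model.
    """
    return l_inf * (1.0 - math.exp(-k * (t - t0)))

# Hypothetical parameters for a large lamniform shark
L_INF, K, T0 = 7.0, 0.1, -1.5   # metres, 1/year, years

# Length increases monotonically with age and approaches L_INF
ages = [0, 5, 15, 38]
lengths = [von_bertalanffy_length(t, L_INF, K, T0) for t in ages]
```

The model's key behavior, which fitted growth-ring measurements exploit, is that length rises steeply in early years and then flattens toward the asymptote, so a measured length can be inverted to an approximate age.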
Most fossils of Cretoxyrhina feeding upon other animals consist of large and deep bite marks and punctures on bones, occasionally with teeth embedded in them. Isolated bones of mosasaurs and other marine reptiles that have been partially dissolved by digestion, or found in coprolites, are also evidence of Cretoxyrhina feeding. A few skeletons of Cretoxyrhina containing stomach contents are known, including a large C. mantelli skeleton (KUVP 247) that preserves skeletal remains of the large ichthyodectid Xiphactinus and a mosasaur in its stomach region. Cretoxyrhina may have occasionally fed on pterosaurs, as evidenced by a set of cervical vertebrae of a Pteranodon from the Niobrara Formation with a C. mantelli tooth lodged deep between them. Although it cannot be determined whether this fossil was the result of predation or scavenging, Pteranodon and similar pterosaurs were likely natural targets for predators like Cretoxyrhina, as they routinely entered the water to feed and would thus have been well within reach. Although Cretoxyrhina was mainly an active hunter, it was also an opportunistic feeder and may have scavenged from time to time. Many fossils with Cretoxyrhina feeding marks show no sign of healing, which would otherwise indicate that a live animal survived a predatory attack; this leaves open the possibility that at least some of the feeding marks were made by scavenging. Remains of partial skeletons of dinosaurs like Claosaurus and Niobrarasaurus show signs of feeding and digestion by C. mantelli; these were likely scavenged carcasses swept into the ocean, as the paleogeographical location of the fossils corresponds to open ocean.

Hunting strategies

The hunting strategies of Cretoxyrhina are not well documented because many fossils with Cretoxyrhina feeding marks cannot be distinguished between predation and scavenging.
If they were indeed the result of the former, Cretoxyrhina most likely employed hunting strategies built around a single powerful and fatal blow, similar to the ram feeding seen in modern requiem sharks and lamnids. A 2004 study by shark experts Vittorio Gabriotti and Alessandro De Maddalena observed that modern great white sharks reaching lengths of greater than commonly ram their prey with massive velocity and strength to inflict single fatal blows, sometimes so powerful that the prey is propelled out of the water by the impact's force. As Cretoxyrhina possessed a robust, stocky build capable of fast swimming, powerful kinetic jaws like those of the great white shark, and reached lengths similar to or greater than it, a hunting style like this would be likely.

Paleoecology

Range and distribution

Cretoxyrhina had a cosmopolitan distribution, with fossils having been found worldwide. Notable locations include North America, Europe, Israel, and Kazakhstan. Cretoxyrhina mostly occurred in temperate and subtropical zones. It has been found at latitudes as far north as 55° N, where paleoclimatic estimates calculate an average sea surface temperature of . Fossils of Cretoxyrhina are best known from the Western Interior Seaway area, now the central United States and Canada. In 2013, Mikael Siverson and colleagues noted that during the Turonian or early Coniacian, Cretoxyrhina individuals living offshore were usually larger than those inhabiting the Western Interior Seaway, with some of the offshore C. mantelli fossils, such as one of the syntypes, yielding total lengths of up to , possibly . Additionally, C. mantelli may have been migratory, as evidenced by its δ18Op values.

Habitat

Cretoxyrhina inhabited mainly temperate to subtropical pelagic oceans.
A tooth of Cretoxyrhina found in the Horseshoe Canyon Formation in Alberta (a formation whose only water deposits are brackish, with no oceans) suggests that it may, on occasion, have swum into partially freshwater estuaries and similar bodies of water. However, reworking from an underlying layer may be a more likely explanation for this occurrence. The climate of marine ecosystems during the temporal range of Cretoxyrhina was generally much warmer than today due to higher atmospheric levels of carbon dioxide and other greenhouse gases, influenced by the shape of the continents at the time. The interval of 97–91 Ma during the Cenomanian and Turonian saw a peak in sea surface temperatures averaging over and bottom water temperatures around , about warmer than today. Around this time, Cretoxyrhina coexisted with a radiating increase in the diversity of fauna such as mosasaurs. This interval also included a rise in global δ13C levels, which marks significant depletion of oxygen in the ocean, and caused the Cenomanian–Turonian anoxic event. Although this event led to the extinction of as much as 27% of marine invertebrates, vertebrates like Cretoxyrhina were generally unaffected. The rest of the Cretaceous saw progressive global cooling of Earth's oceans, leading to the appearance of temperate ecosystems and possible glaciation by the Early Maastrichtian. Subtropical areas retained high biodiversity across all taxa, while temperate ecosystems generally had much lower diversity. In North America, subtropical provinces were dominated by sharks, turtles, and mosasaurs such as Tylosaurus and Clidastes, while temperate provinces were dominated mainly by plesiosaurs, hesperornithid seabirds, and the mosasaur Platecarpus.

Competition

Cretoxyrhina lived alongside many predators that shared a similar trophic level in a diverse pelagic ecosystem during the Cretaceous.
Most of these predators were large marine reptiles, some of which already occupied the highest trophic level when Cretoxyrhina first appeared. During the Albian to Turonian, about 107 to 91 Ma, Cretoxyrhina was contemporaneous and coexisted with Mid-Cretaceous pliosaurs, including Megacephalosaurus, which attained lengths of . By the Mid-Turonian, about 91 Ma, pliosaurs had become extinct; it is thought that the radiation of sharks like Cretoxyrhina may have been a major contributing factor in accelerating their extinction. The ecological void they left was quickly filled by emerging mosasaurs, which also came to occupy the highest trophic levels. Large mosasaurs like Tylosaurus, which reached in excess of in length, may have competed with Cretoxyrhina, and evidence of interspecific interactions, such as bite marks from either party, has been found. Many sharks also occupied an ecological role similar to Cretoxyrhina's, such as the cardabiodontids Cardabiodon and Dwardius, the latter showing evidence of direct competition with C. vraconensis based on intricate distribution patterns between the two. A 2010 study by paleontologists Corinne Myers and Bruce Lieberman on competition in the Western Interior Seaway used quantitative analytical techniques based on geographic information systems and tectonic reconstructions to reconstruct hypothetical competitive relationships between ten of the most prevalent and abundant marine vertebrates of the region, including Cretoxyrhina. Their calculations found negative correlations between the distribution of Cretoxyrhina and those of three potential competitors, Squalicorax kaupi, Tylosaurus proriger, and Platecarpus spp.; a statistically significant negative correlation would imply that the distribution of one species was affected by being outcompeted by another.
However, none of the relationships were statistically significant, which instead indicates that the trends were unlikely to be the result of competition.

Extinction

The causes of the extinction of Cretoxyrhina are uncertain; what is known is that it declined slowly over millions of years. Since its peak in size during the Coniacian, the size and distribution of Cretoxyrhina fossils gradually declined until the genus's eventual demise during the Campanian. Siverson and Lindgren (2005) noted that the age of the youngest fossils of Cretoxyrhina differed between continents. In Australia, the youngest Cretoxyrhina fossils date to approximately 83 Ma, during the Santonian, while the youngest North American fossils known at the time (dated to the Early Campanian) were at least two million years older than the youngest fossils in Europe. The differences between ages suggest that Cretoxyrhina may have become locally extinct in such areas over time until the genus as a whole went extinct. It has been noted that the decline of Cretoxyrhina coincides with the rise of newer predators such as Tylosaurus, suggesting that increasing pressure from competition with the mosasaur and other predators of similar trophic levels may have been a major contributor to Cretoxyrhina's decline and eventual extinction. Another possible factor was the gradual shallowing and shrinking of the Western Interior Seaway, which would have led to the disappearance of the pelagic environments preferred by the shark; this factor, however, does not explain the decline and extinction of Cretoxyrhina elsewhere. It has been suggested that the extinction of Cretoxyrhina may have helped further increase the diversity of mosasaurs.
https://en.wikipedia.org/wiki/Approximation%20theory
Approximation theory
In mathematics, approximation theory is concerned with how functions can best be approximated with simpler functions, and with quantitatively characterizing the errors introduced thereby. What is meant by best and simpler will depend on the application. A closely related topic is the approximation of functions by generalized Fourier series, that is, approximations based upon summation of a series of terms based upon orthogonal polynomials. One problem of particular interest is that of approximating a function in a computer mathematical library, using operations that can be performed on the computer or calculator (e.g. addition and multiplication), such that the result is as close to the actual function as possible. This is typically done with polynomial or rational (ratio of polynomials) approximations. The objective is to make the approximation as close as possible to the actual function, typically with an accuracy close to that of the underlying computer's floating-point arithmetic. This is accomplished by using a polynomial of high degree, and/or narrowing the domain over which the polynomial has to approximate the function. Narrowing the domain can often be done through the use of various addition or scaling formulas for the function being approximated. Modern mathematical libraries often reduce the domain into many tiny segments and use a low-degree polynomial for each segment.

Optimal polynomials

Once the domain (typically an interval) and degree of the polynomial are chosen, the polynomial itself is chosen in such a way as to minimize the worst-case error. That is, the goal is to minimize the maximum value of , where P(x) is the approximating polynomial, f(x) is the actual function, and x varies over the chosen interval. For well-behaved functions, there exists an Nth-degree polynomial that will lead to an error curve that oscillates back and forth between and a total of N+2 times, giving a worst-case error of .
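The worst-case error described above can be probed numerically. The sketch below (plain NumPy, with the function, degree, and interval chosen arbitrarily for illustration) fits a degree-4 polynomial to exp(x) by least squares and then samples the maximum absolute error over the interval:

```python
import numpy as np

def max_abs_error(p, f, a, b, samples=10001):
    """Approximate the worst-case error max |P(x) - f(x)| on [a, b]
    by dense sampling (a stand-in for exact minimax evaluation)."""
    x = np.linspace(a, b, samples)
    return float(np.max(np.abs(p(x) - f(x))))

# Degree-4 least-squares fit to exp on [-1, 1]; not the optimal
# (minimax) polynomial, but close enough to illustrate the idea.
x = np.linspace(-1.0, 1.0, 1001)
p = np.poly1d(np.polyfit(x, np.exp(x), 4))

err = max_abs_error(p, np.exp, -1.0, 1.0)
```

The true minimax polynomial of the same degree would make this sampled maximum slightly smaller, and its error curve would equioscillate between its extreme values.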
It is seen that there exists an Nth-degree polynomial that can interpolate N+1 points in a curve. That such a polynomial is always optimal is asserted by the equioscillation theorem. It is possible to make contrived functions f(x) for which no such polynomial exists, but these occur rarely in practice. For example, the graphs shown to the right show the error in approximating log(x) and exp(x) for N = 4. The red curves, for the optimal polynomial, are level, that is, they oscillate between +ε and −ε exactly. In each case, the number of extrema is N+2, that is, 6. Two of the extrema are at the end points of the interval, at the left and right edges of the graphs.

To prove this is true in general, suppose P is a polynomial of degree N having the property described, that is, it gives rise to an error function that has N + 2 extrema, of alternating signs and equal magnitudes. The red graph to the right shows what this error function might look like for N = 4. Suppose Q(x) (whose error function is shown in blue to the right) is another N-degree polynomial that is a better approximation to f than P. In particular, Q is closer to f than P for each value xi where an extreme of P−f occurs, so |Q(xi) − f(xi)| < |P(xi) − f(xi)|. When a maximum of P−f occurs at xi, then Q(xi) − f(xi) ≤ |Q(xi) − f(xi)| < |P(xi) − f(xi)| = P(xi) − f(xi), and when a minimum of P−f occurs at xi, then f(xi) − Q(xi) ≤ |Q(xi) − f(xi)| < |P(xi) − f(xi)| = f(xi) − P(xi). So, as can be seen in the graph, [P(x) − f(x)] − [Q(x) − f(x)] must alternate in sign for the N + 2 values of xi. But [P(x) − f(x)] − [Q(x) − f(x)] reduces to P(x) − Q(x), which is a polynomial of degree N. This function changes sign at least N+1 times so, by the intermediate value theorem, it has N+1 zeroes, which is impossible for a nonzero polynomial of degree N.

Chebyshev approximation

One can obtain polynomials very close to the optimal one by expanding the given function in terms of Chebyshev polynomials and then cutting off the expansion at the desired degree. This is similar to the Fourier analysis of the function, using the Chebyshev polynomials instead of the usual trigonometric functions.
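The expand-and-truncate procedure can be sketched numerically. The sketch below is illustrative only: the coefficients are computed with Chebyshev–Gauss quadrature (a standard choice, not something specified by this article), and the truncated series is evaluated with the Clenshaw recurrence:

```python
import math

def cheb_coeffs(f, deg, M=64):
    """Chebyshev expansion coefficients c_0..c_deg of f on [-1, 1],
    approximated by Chebyshev-Gauss quadrature at M nodes."""
    thetas = [math.pi * (j + 0.5) / M for j in range(M)]
    fv = [f(math.cos(t)) for t in thetas]
    coeffs = []
    for k in range(deg + 1):
        s = sum(fv[j] * math.cos(k * thetas[j]) for j in range(M))
        coeffs.append((1.0 if k == 0 else 2.0) * s / M)
    return coeffs

def cheb_eval(coeffs, x):
    """Evaluate sum_k coeffs[k] * T_k(x) by the Clenshaw recurrence."""
    b1 = b2 = 0.0
    for c in reversed(coeffs[1:]):
        b1, b2 = 2.0 * x * b1 - b2 + c, b1
    return x * b1 - b2 + coeffs[0]

# Expand exp on [-1, 1] and cut off the series after the T_4 term.
cs = cheb_coeffs(math.exp, 4)
grid = [-1.0 + 2.0 * i / 2000 for i in range(2001)]
err = max(abs(cheb_eval(cs, x) - math.exp(x)) for x in grid)
```

The resulting maximum error is close to the magnitude of the first discarded coefficient, and the error curve oscillates in a nearly level way, which is why the truncated expansion is close to, though not exactly, the optimal degree-4 polynomial.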
If one calculates the coefficients in the Chebyshev expansion for a function, f(x) ≈ c_0 T_0(x) + c_1 T_1(x) + c_2 T_2(x) + ⋯, and then cuts off the series after the T_N term, one gets an Nth-degree polynomial approximating f(x). The reason this polynomial is nearly optimal is that, for functions with rapidly converging power series, if the series is cut off after some term, the total error arising from the cutoff is close to the first term after the cutoff. That is, the first term after the cutoff dominates all later terms. The same is true if the expansion is in terms of Chebyshev polynomials. If a Chebyshev expansion is cut off after T_N, the error will take a form close to a multiple of T_{N+1}. The Chebyshev polynomials have the property that they are level – they oscillate between +1 and −1 in the interval [−1, 1]. T_{N+1} has N+2 level extrema. This means that the error between f(x) and its Chebyshev expansion out to T_N is close to a level function with N+2 extrema, so it is close to the optimal Nth-degree polynomial. In the graphs above, the blue error function is sometimes better than (inside of) the red function, but sometimes worse, meaning that it is not quite the optimal polynomial. The discrepancy is less serious for the exp function, which has an extremely rapidly converging power series, than for the log function. Chebyshev approximation is the basis for Clenshaw–Curtis quadrature, a numerical integration technique.

Remez's algorithm

The Remez algorithm (sometimes spelled Remes) is used to produce an optimal polynomial P(x) approximating a given function f(x) over a given interval. It is an iterative algorithm that converges to a polynomial that has an error function with N+2 level extrema. By the theorem above, that polynomial is optimal. Remez's algorithm uses the fact that one can construct an Nth-degree polynomial that leads to level and alternating error values, given N+2 test points.
Given N+2 test points x_1, x_2, ..., x_{N+2} (where x_1 and x_{N+2} are presumably the end points of the interval of approximation), these equations need to be solved:

P(x_1) − f(x_1) = +ε,
P(x_2) − f(x_2) = −ε,
P(x_3) − f(x_3) = +ε,
...,
P(x_{N+2}) − f(x_{N+2}) = ±ε.

The right-hand sides alternate in sign. That is, P(x_i) − f(x_i) = (−1)^{i+1} ε. Since x_1, ..., x_{N+2} were given, all of their powers are known, and f(x_1), ..., f(x_{N+2}) are also known. That means that the above equations are just N+2 linear equations in the N+2 variables P_0, P_1, ..., P_N, and ε. Given the test points x_1, ..., x_{N+2}, one can solve this system to get the polynomial P and the number ε.

The graph below shows an example of this, producing a fourth-degree polynomial approximating e^x over [−1, 1]. The test points were set at −1, −0.7, −0.1, +0.4, +0.9, and 1. Those values are shown in green. The resultant value of ε is 4.43 × 10⁻⁴. The error graph does indeed take on the values ±ε at the six test points, including the end points, but those points are not extrema. If the four interior test points had been extrema (that is, the function P(x) − f(x) had maxima or minima there), the polynomial would be optimal.

The second step of Remez's algorithm consists of moving the test points to the approximate locations where the error function had its actual local maxima or minima. For example, one can tell from looking at the graph that the point at −0.1 should have been at about −0.28. The way to do this in the algorithm is to use a single round of Newton's method. Since one knows the first and second derivatives of P(x) − f(x), one can calculate approximately how far a test point has to be moved so that the derivative will be zero. Calculating the derivatives of a polynomial is straightforward. One must also be able to calculate the first and second derivatives of f(x). Remez's algorithm requires an ability to calculate f(x), f′(x), and f″(x) to extremely high precision. The entire algorithm must be carried out to higher precision than the desired precision of the result. After moving the test points, the linear equation part is repeated, getting a new polynomial, and Newton's method is used again to move the test points again.
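The two steps just described (solve the alternating linear system, then move the test points toward the extrema of the error) can be sketched as follows. This is an illustration, not production code: the tiny Gaussian-elimination solver is self-contained, and the point relocation uses a dense-grid search where the article describes a single Newton round:

```python
import math

def solve(A, rhs):
    """Solve the small linear system A x = rhs by Gaussian elimination
    with partial pivoting."""
    n = len(A)
    M = [A[i][:] + [rhs[i]] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def level_system(f, pts, N):
    """Solve P(x_i) - f(x_i) = (+/-)eps (alternating) for P_0..P_N and eps."""
    A = [[x ** k for k in range(N + 1)] + [-((-1) ** i)]
         for i, x in enumerate(pts)]
    return solve(A, [f(x) for x in pts])

# The article's example: degree 4, f = exp, six test points on [-1, 1].
pts = [-1.0, -0.7, -0.1, 0.4, 0.9, 1.0]
sol = level_system(math.exp, pts, 4)
eps0 = abs(sol[-1])                      # the level error for these points

# Exchange iteration: repeatedly move each test point to the extremum of
# the error within its sign-constant segment, then re-solve the system.
for _ in range(8):
    err = lambda x, c=sol: sum(c[k] * x ** k for k in range(5)) - math.exp(x)
    g = [-1.0 + 2.0 * i / 4000 for i in range(4001)]
    e = [err(x) for x in g]
    new, start = [], 0
    for i in range(1, len(g)):
        if e[i - 1] * e[i] < 0:          # sign change closes a segment
            new.append(max(range(start, i), key=lambda j: abs(e[j])))
            start = i
    new.append(max(range(start, len(g)), key=lambda j: abs(e[j])))
    if len(new) == 6:                    # keep old points if the count is off
        pts = [g[j] for j in new]
    sol = level_system(math.exp, pts, 4)

eps = abs(sol[-1])                       # near-optimal level error
maxerr = max(abs(sum(sol[k] * x ** k for k in range(5)) - math.exp(x))
             for x in (-1.0 + 2.0 * i / 4000 for i in range(4001)))
```

After a few rounds the maximum error over the interval agrees with ε to within a fraction of a percent, i.e. the error curve has become level, which is the stopping condition for the iteration.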
This sequence is continued until the result converges to the desired accuracy. The algorithm converges very rapidly. Convergence is quadratic for well-behaved functions: if the test points are within 10⁻¹⁵ of the correct result, they will be approximately within 10⁻³⁰ of the correct result after the next round. Remez's algorithm is typically started by choosing the extrema of the Chebyshev polynomial T_{N+1} as the initial points, since the final error function will be similar to that polynomial.

Main journals

Journal of Approximation Theory
Constructive Approximation
East Journal on Approximations
https://en.wikipedia.org/wiki/Toothache
Toothache
Toothache, also known as dental pain or tooth pain, is pain in the teeth or their supporting structures, caused by dental diseases or pain referred to the teeth by non-dental diseases. When severe it may impact sleep, eating, and other daily activities. Common causes include inflammation of the pulp (usually in response to tooth decay, dental trauma, or other factors), dentin hypersensitivity, apical periodontitis (inflammation of the periodontal ligament and alveolar bone around the root apex), dental abscesses (localized collections of pus), alveolar osteitis ("dry socket", a possible complication of tooth extraction), acute necrotizing ulcerative gingivitis (a gum infection), and temporomandibular disorder. Pulpitis is reversible when the pain is mild to moderate and lasts for a short time after a stimulus (for instance, cold), or irreversible when the pain is severe, spontaneous, and lasts a long time after a stimulus. Left untreated, pulpitis may become irreversible, then progress to pulp necrosis (death of the pulp) and apical periodontitis. Abscesses usually cause throbbing pain. The apical abscess usually occurs after pulp necrosis, the pericoronal abscess is usually associated with acute pericoronitis of a lower wisdom tooth, and periodontal abscesses usually represent a complication of chronic periodontitis (gum disease). Less commonly, non-dental conditions can cause toothache, such as maxillary sinusitis, which can cause pain in the upper back teeth, or angina pectoris, which can cause pain in the lower teeth. Correct diagnosis can sometimes be challenging. Proper oral hygiene helps to prevent toothache by preventing dental disease. The treatment of a toothache depends upon the exact cause, and may involve a filling, root canal treatment, extraction, drainage of pus, or other remedial action. The relief of toothache is considered one of the main responsibilities of dentists. Toothache is the most common type of pain in the mouth or face.
It is one of the most common reasons for emergency dental appointments. In 2013, 223 million cases of toothache occurred as a result of dental caries in permanent teeth and 53 million cases occurred in baby teeth. Historically, the demand for treatment of toothache is thought to have led to the emergence of dental surgery as the first specialty of medicine. Causes Toothache may be caused by dental (odontogenic) conditions (such as those involving the dentin-pulp complex or periodontium), or by non-dental (non-odontogenic) conditions (such as maxillary sinusitis or angina pectoris). There are many possible non-dental causes, but the vast majority of toothache is dental in origin. Both the pulp and periodontal ligament have nociceptors (pain receptors), but the pulp lacks proprioceptors (motion or position receptors) and mechanoreceptors (mechanical pressure receptors). Consequently, pain originating from the dentin-pulp complex tends to be poorly localized, whereas pain from the periodontal ligament will typically be well localized, although not always. For instance, the periodontal ligament can detect the pressure exerted when biting on something smaller than a grain of sand (10–30 μm). When a tooth is intentionally stimulated, about 33% of people can correctly identify the tooth, and about 20% cannot narrow the stimulus location down to a group of three teeth. Another typical difference between pulpal and periodontal pain is that the latter is not usually made worse by thermal stimuli. Dental Pulpal The majority of pulpal toothache falls into one of the following types; however, other rare causes (which do not always fit neatly into these categories) include galvanic pain and barodontalgia. Pulpitis Pulpitis (inflammation of the pulp) can be triggered by various stimuli (insults), including mechanical, thermal, chemical, and bacterial irritants, or rarely barometric changes and ionizing radiation. 
Common causes include tooth decay, dental trauma (such as a crack or fracture), or a filling with an imperfect seal. Because the pulp is encased in a rigid outer shell, there is no space to accommodate swelling caused by inflammation. Inflammation therefore increases pressure in the pulp system, potentially compressing the blood vessels which supply the pulp. This may lead to ischemia (lack of oxygen) and necrosis (tissue death). Pulpitis is termed reversible when the inflamed pulp is capable of returning to a state of health, and irreversible when pulp necrosis is inevitable. Reversible pulpitis is characterized by short-lasting pain triggered by cold and sometimes heat. The symptoms of reversible pulpitis may disappear, either because the noxious stimulus is removed, such as when dental decay is removed and a filling placed, or because new layers of dentin (tertiary dentin) have been produced inside the pulp chamber, insulating against the stimulus. Irreversible pulpitis causes spontaneous or lingering pain in response to cold. Dentin hypersensitivity Dentin hypersensitivity is a sharp, short-lasting dental pain occurring in about 15% of the population, which is triggered by cold (such as liquids or air), sweet or spicy foods, and beverages. Teeth will normally have some sensation to these triggers, but what separates hypersensitivity from regular tooth sensation is the intensity of the pain. Hypersensitivity is most commonly caused by a lack of insulation from the triggers in the mouth due to gingival recession (receding gums) exposing the roots of the teeth, although it can occur after scaling and root planing or dental bleaching, or as a result of erosion. The pulp of the tooth remains normal and healthy in dentin hypersensitivity. Many topical treatments for dentin hypersensitivity are available, including desensitizing toothpastes and protective varnishes that coat the exposed dentin surface. 
Treatment of the root cause is critical, as topical measures are typically short-lasting. Over time, the pulp usually adapts by producing new layers of dentin inside the pulp chamber called tertiary dentin, increasing the thickness between the pulp and the exposed dentin surface and lessening the hypersensitivity. Periodontal In general, chronic periodontal conditions do not cause any pain. Rather, it is acute inflammation which is responsible for the pain. Apical periodontitis Apical periodontitis is acute or chronic inflammation around the apex of a tooth caused by an immune response to bacteria within an infected pulp. It does not occur simply because of pulp necrosis, meaning that a tooth that tests as alive (vital) may cause apical periodontitis, and a pulp which has become non-vital due to a sterile, non-infectious process (such as trauma) may not cause any apical periodontitis. Bacterial cytotoxins reach the region around the roots of the tooth via the apical foramina and lateral canals, causing vasodilation, sensitization of nerves, osteolysis (bone resorption) and potentially abscess or cyst formation. The periodontal ligament becomes inflamed and there may be pain when biting or tapping on the tooth. On an X-ray, bone resorption appears as a radiolucent area around the end of the root, although this does not manifest immediately. Acute apical periodontitis is characterized by well-localized, spontaneous, persistent, moderate to severe pain. The alveolar process may be tender to palpation over the roots. The tooth may be raised in the socket and feel more prominent than the adjacent teeth. Food impaction Food impaction occurs when food debris, especially fibrous food such as meat, becomes trapped between two teeth and is pushed into the gums during chewing. The usual cause of food impaction is disruption of the normal interproximal contour or drifting of teeth so that a gap is created (an open contact).
Decay can lead to collapse of part of the tooth, or a dental restoration may not accurately reproduce the contact point. Irritation, localized discomfort or mild pain, and a feeling of pressure from between the two teeth result. The gingival papilla is swollen, tender and bleeds when touched. The pain occurs during and after eating, and may slowly disappear before being evoked again at the next meal, or be relieved immediately by using a toothpick or dental floss in the involved area. A gingival or periodontal abscess may develop from this situation. Periodontal abscess A periodontal abscess (lateral abscess) is a collection of pus that forms in the gingival crevices, usually as a result of chronic periodontitis where the pockets are pathologically deepened to greater than 3 mm. A healthy gingival pocket will contain bacteria and some calculus kept in check by the immune system. As the pocket deepens, the balance is disrupted, and an acute inflammatory response results, forming pus. The debris and swelling then disrupt the normal flow of fluids into and out of the pocket, rapidly accelerating the inflammatory cycle. Larger pockets also have a greater likelihood of collecting food debris, creating additional sources of infection. Periodontal abscesses are less common than apical abscesses, but are still frequent. The key difference between the two is that the pulp of the tooth tends to be alive, and will respond normally to pulp tests. However, an untreated periodontal abscess may still cause the pulp to die if it reaches the tooth apex in a periodontic-endodontic lesion. A periodontal abscess can occur as the result of tooth fracture, food packing into a periodontal pocket (with poorly shaped fillings), calculus build-up, and lowered immune responses (such as in diabetes). Periodontal abscess can also occur after periodontal scaling, which causes the gums to tighten around the teeth and trap debris in the pocket.
Toothache caused by a periodontal abscess is generally deep and throbbing. The oral mucosa covering an early periodontal abscess appears erythematous (red), swollen, shiny, and painful to touch. A variant of the periodontal abscess is the gingival abscess, which is limited to the gingival margin, has a quicker onset, and is typically caused by trauma from items such as a fishbone, toothpick, or toothbrush, rather than chronic periodontitis. The treatment of a periodontal abscess is similar to the management of dental abscesses in general (see: Treatment). However, since the tooth is typically alive, there is no difficulty in accessing the source of infection and, therefore, antibiotics are more routinely used in conjunction with scaling and root planing. The occurrence of a periodontal abscess usually indicates advanced periodontal disease, which requires correct management to prevent recurrent abscesses, including daily cleaning below the gumline to prevent the buildup of subgingival plaque and calculus. Acute necrotizing ulcerative gingivitis Common marginal gingivitis in response to subgingival plaque is usually a painless condition. However, an acute form of gingivitis/periodontitis, termed acute necrotizing ulcerative gingivitis (ANUG), can develop, often suddenly. It is associated with severe periodontal pain, bleeding gums, "punched out" ulceration, loss of the interdental papillae, and possibly also halitosis (bad breath) and a bad taste. Predisposing factors include poor oral hygiene, smoking, malnutrition, psychological stress, and immunosuppression. This condition is not contagious, but multiple cases may simultaneously occur in populations who share the same risk factors (such as students in a dormitory during a period of examination). 
ANUG is treated over several visits, first with debridement of the necrotic gingiva, homecare with hydrogen peroxide mouthwash, analgesics and, when the pain has subsided sufficiently, cleaning below the gumline, both professionally and at home. Antibiotics are not indicated in ANUG management unless there is underlying systemic disease. Pericoronitis Pericoronitis is inflammation of the soft tissues surrounding the crown of a partially erupted tooth. The lower wisdom tooth is the last tooth to erupt into the mouth, and is, therefore, more frequently impacted, or stuck, against the other teeth. This leaves the tooth partially erupted into the mouth, and there frequently is a flap of gum (an operculum) overlying the tooth. Bacteria and food debris accumulate beneath the operculum, which is an area that is difficult to keep clean because it is hidden and far back in the mouth. The opposing upper wisdom tooth also tends to have sharp cusps and over-erupt because it has no opposing tooth to bite into, and instead traumatizes the operculum further. Periodontitis and dental caries may develop on either the third or second molars, and chronic inflammation develops in the soft tissues. Chronic pericoronitis may not cause any pain, but an acute pericoronitis episode is often associated with pericoronal abscess formation. Typical signs and symptoms of a pericoronal abscess include severe, throbbing pain, which may radiate to adjacent areas in the head and neck, redness, swelling and tenderness of the gum over the tooth. There may be trismus (difficulty opening the mouth), facial swelling, and rubor (flushing) of the cheek that overlies the angle of the jaw. Persons typically develop pericoronitis in their late teens and early 20s, as this is the age that the wisdom teeth are erupting. Treatment for acute conditions includes cleaning the area under the operculum with an antiseptic solution, painkillers, and antibiotics if indicated.
After the acute episode has been controlled, the definitive treatment is usually by tooth extraction or, less commonly, the soft tissue is removed (operculectomy). If the tooth is kept, good oral hygiene is required to keep the area free of debris to prevent recurrence of the infection. Occlusal trauma Occlusal trauma results from excessive biting forces exerted on teeth, which overloads the periodontal ligament, causing periodontal pain and a reversible increase in tooth mobility. Occlusal trauma may occur with bruxism, the parafunctional (abnormal) clenching and grinding of teeth during sleep or while awake. Over time, there may be attrition (tooth wear), which may also cause dentin hypersensitivity, and possibly formation of a periodontal abscess, as the occlusal trauma causes adaptive changes in the alveolar bone. Occlusal trauma often occurs when a newly placed dental restoration is built too "high", concentrating the biting forces on one tooth. Height differences measuring less than a millimeter can cause pain. Dentists, therefore, routinely check that any new restoration is in harmony with the bite and forces are distributed correctly over many teeth using articulating paper. If the high spot is quickly eliminated, the pain disappears and there is no permanent harm. Over-tightening of braces can cause periodontal pain and, occasionally, a periodontal abscess. Alveolar osteitis Alveolar osteitis is a complication of tooth extraction (especially lower wisdom teeth) in which the blood clot is not formed or is lost, leaving the socket where the tooth used to be empty, and bare bone is exposed to the mouth. The pain is moderate to severe, and dull, aching, and throbbing in character. The pain is localized to the socket, and may radiate. It normally starts two to four days after the extraction, and may last 10–40 days. Healing is delayed, and it is treated with local anesthetic dressings, which are typically required for five to seven days. 
There is some evidence that chlorhexidine mouthwash used prior to extractions prevents alveolar osteitis. Combined pulpal-periodontal Dental trauma and cracked tooth syndrome Cracked tooth syndrome refers to a highly variable set of pain-sensitivity symptoms that may accompany a tooth fracture, usually sporadic, sharp pain that occurs during biting or with release of biting pressure, or relieved by releasing pressure on the tooth. The term is falling into disfavor and has given way to the more generalized description of fractures and cracks of the tooth, which allows for the wide variations in signs, symptoms, and prognosis for traumatized teeth. A fracture of a tooth can involve the enamel, dentin, and/or pulp, and can be orientated horizontally or vertically. Fractured or cracked teeth can cause pain via several mechanisms, including dentin hypersensitivity, pulpitis (reversible or irreversible), or periodontal pain. Accordingly, there is no single test or combination of symptoms that accurately diagnose a fracture or crack, although when pain can be stimulated by causing separation of the cusps of the tooth, it is highly suggestive of the disorder. Vertical fractures can be very difficult to identify because the crack can rarely be probed or seen on radiographs, as the fracture runs in the plane of conventional films (similar to how the split between two adjacent panes of glass is invisible when facing them). When toothache results from dental trauma (regardless of the exact pulpal or periodontal diagnosis), the treatment and prognosis are dependent on the extent of damage to the tooth, the stage of development of the tooth, the degree of displacement or, when the tooth is avulsed, the time out of the socket and the starting health of the tooth and bone. Because of the high variation in treatment and prognosis, dentists often use trauma guides to help determine prognosis and direct treatment decisions.
The prognosis for a cracked tooth varies with the extent of the fracture. Those cracks that are irritating the pulp but do not extend through the pulp chamber can be amenable to stabilizing dental restorations such as a crown or composite resin. Should the fracture extend through the pulp chamber and into the root, the prognosis of the tooth is hopeless. Periodontic-endodontic lesion Apical abscesses can spread to involve periodontal pockets around a tooth, and periodontal pockets can cause eventual pulp necrosis via accessory canals or the apical foramen at the bottom of the tooth. Such lesions are termed periodontic-endodontic lesions, and they may be acutely painful, sharing similar signs and symptoms with a periodontal abscess, or they may cause mild pain or no pain at all if they are chronic and free-draining. Successful root canal therapy is required before periodontal treatment is attempted. Generally, the long-term prognosis of perio-endo lesions is poor. Non-dental Non-dental causes of toothache are much less common as compared with dental causes. In a toothache of neurovascular origin, pain is reported in the teeth in conjunction with a migraine. Local and distant structures (such as ear, brain, carotid artery, or heart) can also refer pain to the teeth. Other non-dental causes of toothache include myofascial pain (muscle pain) and angina pectoris (which classically refers pain to the lower jaw). Very rarely, toothache can be psychogenic in origin. Disorders of the maxillary sinus can be referred to the upper back teeth. The posterior, middle and anterior superior alveolar nerves are all closely associated with the lining of the sinus. The bone between the floor of the maxillary sinus and the roots of the upper back teeth is very thin, and frequently the apices of these teeth disrupt the contour of the sinus floor.
Consequently, acute or chronic maxillary sinusitis can be perceived as maxillary toothache, and neoplasms of the sinus (such as adenoid cystic carcinoma) can cause similarly perceived toothache if malignant invasion of the superior alveolar nerves occurs. Classically, sinusitis pain increases upon Valsalva maneuvers or tilting the head forward. Painful conditions which do not originate from the teeth or their supporting structures may affect the oral mucosa of the gums and be interpreted by the individual as toothache. Examples include neoplasms of the gingival or alveolar mucosa (usually squamous cell carcinoma), conditions which cause gingivostomatitis and desquamative gingivitis. Various conditions may involve the alveolar bone, and cause non-odontogenic toothache, such as Burkitt's lymphoma, infarcts in the jaws caused by sickle cell disease, and osteomyelitis. Various conditions of the trigeminal nerve can masquerade as toothache, including trigeminal zoster (maxillary or mandibular division), trigeminal neuralgia, cluster headache, and trigeminal neuropathies. Very rarely, a brain tumor might cause toothache. Another chronic facial pain syndrome which can mimic toothache is temporomandibular disorder (temporomandibular joint pain-dysfunction syndrome), which is very common. Toothache which has no identifiable dental or medical cause is often termed atypical odontalgia, which, in turn, is usually considered a type of atypical facial pain (or persistent idiopathic facial pain). Atypical odontalgia may give very unusual symptoms, such as pain which migrates from one tooth to another and which crosses anatomical boundaries (such as from the left teeth to the right teeth). Pathophysiology A tooth is composed of an outer shell of calcified hard tissues (from hardest to softest: enamel, dentin, and cementum), and an inner soft tissue core (the pulp system), which contains nerves and blood vessels. 
The visible parts of the teeth in the mouth – the crowns (covered by enamel) – are anchored into the bone by the roots (covered by cementum). Underneath the cementum and enamel layers, dentin forms the bulk of the tooth and surrounds the pulp system. The part of the pulp inside the crown is the pulp chamber, and the central soft tissue nutrient canals within each root are root canals, exiting through one or more holes at the root end (apical foramen/foramina). The periodontal ligament connects the roots to the bony socket. The gingiva covers the alveolar processes, the tooth-bearing arches of the jaws. Enamel is not a vital tissue, as it lacks blood vessels, nerves, and living cells. Consequently, pathologic processes involving only enamel, such as shallow cavities or cracks, tend to be painless. Dentin contains many microscopic tubes containing fluid and the processes of odontoblast cells, which communicate with the pulp. Mechanical, osmotic, or other stimuli cause movement of this fluid, triggering nerves in the pulp (the "hydrodynamic theory" of pulp sensitivity). Due to the close relationship between dentin and pulp, they are frequently considered together as the dentin-pulp complex. The teeth and gums exhibit normal sensations in health. Such sensations are generally sharp, lasting as long as the stimulus. There is a continuous spectrum from physiologic sensation to pain in disease. Pain is an unpleasant sensation caused by intense or damaging events. In a toothache, nerves are stimulated by either exogenous sources (for instance, bacterial toxins, metabolic byproducts, chemicals, or trauma) or endogenous factors (such as inflammatory mediators). The pain pathway is mostly transmitted via myelinated Aδ (sharp or stabbing pain) and unmyelinated C nerve fibers (slow, dull, aching, or burning pain) of the trigeminal nerve, which supplies sensation to the teeth and gums via many divisions and branches. 
Initially, pain is felt while noxious stimuli are applied (such as cold). Continued exposure decreases firing thresholds of the nerves, allowing normally non-painful stimuli to trigger pain (allodynia). Should the insult continue, noxious stimuli produce larger discharges in the nerve, perceived as more intense pain. Spontaneous pain may occur if the firing threshold is decreased so it can fire without stimulus (hyperalgesia). The physical component of pain is processed in the medullary spinal cord and perceived in the frontal cortex. Because pain perception involves overlapping sensory systems and an emotional component, individual responses to identical stimuli are variable. Diagnosis The diagnosis of toothache can be challenging, not only because the list of potential causes is extensive, but also because dental pain may be extremely variable, and pain can be referred to and from the teeth. Dental pain can simulate virtually any facial pain syndrome. However, the vast majority of toothache is caused by dental, rather than non-dental, sources. Consequently, the saying "horses, not zebras" has been applied to the differential diagnosis of orofacial pain. That is, everyday dental causes (such as pulpitis) should always be considered before unusual, non-dental causes (such as myocardial infarction). In the wider context of orofacial pain, all cases of orofacial pain may be considered as having a dental origin until proven otherwise. The diagnostic approach for toothache is generally carried out in the following sequence: history, followed by examination, and investigations. All this information is then collated and used to build a clinical picture, and a differential diagnosis can be carried out. Symptoms The chief complaint, and the onset of the complaint, are usually important in the diagnosis of toothache. 
For example, the key distinction between reversible and irreversible pulpitis is given in the history, such as pain following a stimulus in the former, and lingering pain following a stimulus and spontaneous pain in the latter. A history of recent fillings or other dental treatment, and of trauma to the teeth, is also important. Based on the most common causes of toothache (dentin hypersensitivity, periodontitis, and pulpitis), the key indicators become localization of the pain (whether the pain is perceived as originating in a specific tooth), thermal sensitivity, pain on biting, spontaneity of the pain, and factors that make the pain worse. The various qualities of the toothache, such as the effect of biting and chewing on the pain, the effect of thermal stimuli, and the effect of the pain on sleep, are verbally established by the clinician, usually in a systematic fashion, such as using the SOCRATES pain assessment method (see table). From the history, indicators of pulpal, periodontal, a combination of both, or non-dental causes can be observed. Periodontal pain is frequently localized to a particular tooth, made much worse by biting on the tooth, sudden in onset, and associated with bleeding and pain when brushing. More than one factor may be involved in the toothache. For example, a pulpal abscess (which is typically severe, spontaneous and localized) can cause periapical periodontitis (which results in pain on biting). Cracked tooth syndrome may also cause a combination of symptoms. Lateral periodontitis (which is usually without any thermal sensitivity and sensitive to biting) can cause pulpitis and the tooth becomes sensitive to cold. Non-dental sources of pain often cause multiple teeth to hurt and have an epicenter that is either above or below the jaws. For instance, cardiac pain (which can make the bottom teeth hurt) usually radiates up from the chest and neck, and sinusitis (which can make the back top teeth hurt) is worsened by bending over.
As all of these conditions may mimic toothache, dental treatment such as fillings, root canal treatment, or tooth extraction may be carried out unnecessarily by dentists in an attempt to relieve the individual's pain, delaying the correct diagnosis. A hallmark of non-dental pain is the absence of an obvious dental cause, often with signs and symptoms elsewhere in the body. As migraines are typically present for many years, that diagnosis is easier to make. Often the character of the pain differentiates dental from non-dental pain. Irreversible pulpitis progresses to pulp necrosis, in which the nerves are non-functional, so a pain-free period may follow the severe pain of irreversible pulpitis. However, without treatment it is common for irreversible pulpitis to progress to apical periodontitis, including an acute apical abscess. As irreversible pulpitis progresses to an apical abscess, the character of the toothache may simply change without any pain-free period: for instance, the pain becomes well localized and biting on the tooth becomes painful. Hot drinks can make the tooth feel worse because they expand the trapped gases, and likewise cold can make it feel better, so some people will sip cold water.

Examination

The clinical examination narrows the source down to a specific tooth, teeth, or a non-dental cause. Clinical examination moves from the outside to the inside, and from the general to the specific. Outside of the mouth, the sinuses, muscles of the face and neck, the temporomandibular joints, and cervical lymph nodes are palpated for pain or swelling. In the mouth, the soft tissues of the gingiva, mucosa, tongue, and pharynx are examined for redness, swelling, or deformity. Finally, the teeth are examined. Each tooth that may be painful is percussed (tapped), palpated at the base of the root, probed with a dental explorer for dental caries and a periodontal probe for periodontitis, then wiggled to assess mobility.
Sometimes the symptoms reported in the history are misleading and point the examiner to the wrong area of the mouth. For instance, people may mistake pain from pulpitis in a lower tooth for pain in the upper teeth, and vice versa. In other instances, the apparent examination findings may be misleading and lead to the wrong diagnosis and wrong treatment. Pus from a pericoronal abscess associated with a lower third molar may track along the submucosal plane and discharge as a parulis over the roots of the teeth towards the front of the mouth (a "migratory abscess"). Another example is decay of the tooth root hidden from view below the gumline, which gives the appearance of a sound tooth unless a careful periodontal examination is carried out. Factors indicating infection include movement of fluid in the tissues during palpation (fluctuance), swollen lymph nodes in the neck, and fever with an oral temperature of more than 37.7 °C.

Investigations

Any tooth identified, in either the history of pain or the clinical examination, as a possible source of toothache may undergo further testing for vitality of the dental pulp, infection, fractures, or periodontitis. These tests may include: Pulp sensitivity tests, usually carried out with a cotton wool pledget sprayed with ethyl chloride to serve as a cold stimulus, or with an electric pulp tester. The air spray from a three-in-one syringe may also be used to demonstrate areas of dentin hypersensitivity, and heat tests can be applied with hot gutta-percha. A healthy tooth will feel the cold, but the pain will be mild and will disappear once the stimulus is removed. The accuracy of these tests has been reported as 86% for cold testing, 81% for electric pulp testing, and 71% for heat testing. Because of this limited accuracy, a second symptom or a second positive test should be present before a diagnosis is made. Radiographs are used to find dental caries and bone loss laterally or at the apex.
Assessment of biting on individual teeth (which sometimes helps to localize the problem) or on separate cusps (which may help to detect cracked cusp syndrome). Less commonly used tests include trans-illumination (to detect congestion of the maxillary sinus or to highlight a crack in a tooth), dyes (to help visualize a crack), a test cavity, selective anaesthesia, and laser Doppler flowmetry. Establishing a diagnosis of non-dental toothache begins with careful questioning about the site, nature, aggravating and relieving factors, and referral of the pain, followed by ruling out any dental causes. There are no specific treatments for non-dental pain (each treatment is directed at the cause of the pain, rather than the toothache itself), but a dentist can help identify potential sources of the pain and direct the patient to appropriate care. The most critical non-dental source is the radiation of angina pectoris into the lower teeth, with the potential need for urgent cardiac care.

Differential diagnoses

When it becomes extremely painful and decayed, the tooth may be known as a "hot tooth".

Prevention

Since most toothache results from plaque-related diseases, such as tooth decay and periodontal disease, the majority of cases could be prevented by avoidance of a cariogenic diet and maintenance of good oral hygiene: that is, reducing the number of times refined sugars are consumed per day, brushing the teeth twice a day with fluoride toothpaste, and interdental cleaning. Regular visits to a dentist also increase the likelihood that problems are detected early and averted before toothache occurs. Dental trauma could also be significantly reduced by routine use of mouthguards in contact sports.

Management

There are many causes of toothache, and its diagnosis is a specialist topic, meaning that attendance at a dentist is usually required.
Since many cases of toothache are inflammatory in nature, over-the-counter non-steroidal anti-inflammatory drugs (NSAIDs) may help (unless contraindicated, such as with a peptic ulcer). Generally, NSAIDs are as effective as aspirin alone or in combination with codeine. However, simple analgesics may have little effect on some causes of toothache, and severe pain can drive individuals to exceed the maximum dose. For example, when acetaminophen (paracetamol) is taken for toothache, accidental overdose is more likely than when it is taken for other reasons. Another risk in persons with toothache is a painful chemical burn of the oral mucosa caused by holding a caustic substance, such as aspirin tablets or toothache remedies containing eugenol (such as clove oil), against the gum. Although the logic of placing a tablet against the painful tooth is understandable, an aspirin tablet needs to be swallowed to have any pain-killing effect. Caustic toothache remedies require careful application to the tooth only, without excessive contact with the soft tissues of the mouth. For the dentist, the goal of treatment is generally to relieve the pain and, wherever possible, to preserve or restore function. The treatment depends on the cause of the toothache; frequently, a clinical decision regarding the current state and long-term prognosis of the affected tooth, as well as the individual's wishes and ability to cope with dental treatment, will influence the treatment choice. Often, administration of an intra-oral local anesthetic such as lidocaine with epinephrine is indicated in order to carry out pain-free treatment. Treatment may range from simple advice, to removal of dental decay with a dental drill and subsequent placement of a filling, to root canal treatment, tooth extraction, or debridement.
Pulpitis and its sequelae

In pulpitis, an important distinction in regard to treatment is whether the inflammation is reversible or irreversible. Reversible pulpitis is treated by removing or correcting the causative factor. Usually, the decay is removed and a sedative dressing is placed to encourage the pulp to return to a state of health, either as a base underneath a permanent filling or as a temporary filling left in place while the tooth is observed to see whether the pulpitis resolves. Irreversible pulpitis and its sequelae, pulp necrosis and apical periodontitis, require treatment with root canal therapy or tooth extraction, as the pulp acts as a nidus of infection that will lead to chronic infection if not removed. Generally, there is no difference in outcomes between root canal treatment completed in one appointment and in multiple appointments. The field of regenerative endodontics is now developing ways to clean the pulp chamber and regenerate the soft and hard tissues to either regrow or simulate pulp structure. This has proved especially helpful in children, in whom the tooth root has not yet finished developing and root canal treatments have lower success rates. Whether pulpitis is reversible or irreversible is a distinct question from whether the tooth is restorable or unrestorable; for example, a tooth may have only reversible pulpitis but be structurally weakened by decay or trauma to the point that it is impossible to restore in the long term.

Dental abscesses

A general principle concerning dental abscesses is ubi pus, ibi evacua ("where there is pus, drain it"), which applies to any case where there is a collection of pus in the tissues (such as a periodontal abscess, pericoronal abscess, or apical abscess). The pus within the abscess is under pressure, and the surrounding tissues are deformed and stretched to accommodate the swelling. This leads to a sensation of throbbing (often in time with the pulse) and constant pain.
Pus may be evacuated via the tooth by drilling into the pulp chamber (an endodontic access cavity); such treatment is sometimes termed open drainage. Drainage can also occur via the tooth socket once the causative tooth is extracted. If neither of those measures succeeds, or they are impossible, incision and drainage may be required, in which a small incision is made in the soft tissues directly over the abscess at its most dependent point. A surgical instrument such as a pair of tweezers is gently inserted into the incision and opened, while the abscess is massaged to encourage the pus to drain out. Usually, the reduction in pain when the pus drains is immediate and marked, as the built-up pressure is relieved. If the pus drains into the mouth, there is usually a bad or offensive taste.

Antibiotics

Antibiotics tend to be extensively used for emergency dental problems. As samples for microbiologic culture and sensitivity are hardly ever taken in general dental practice, broad-spectrum antibiotics such as amoxicillin are typically used for a short course of about three to seven days. Antibiotics are seen as a "quick fix" both by dentists, who generally have only a very short time to manage dental emergencies, and by patients, who tend to want to avoid treatments (such as tooth extraction) that are perceived negatively. However, antibiotics typically only temporarily suppress an infection, and the need for definitive treatment is merely postponed for an unpredictable length of time. An estimated 10% of all antibiotic prescriptions are made by dentists, a significant contributor to antibiotic resistance. They are often used inappropriately, in conditions for which they are ineffective or in which their risks outweigh the benefits, such as irreversible pulpitis, apical abscess, dry socket, or mild pericoronitis. In reality, antibiotics are rarely needed, and they should be used restrictively in dentistry.
Local measures such as incision and drainage, and removal of the cause of the infection (such as a necrotic tooth pulp), have a greater therapeutic benefit and are much more important. If abscess drainage has been achieved, antibiotics are not usually necessary. Antibiotics tend to be used when local measures cannot be carried out immediately; in this role, they suppress the infection until local measures can be carried out. Severe trismus may occur when the muscles of mastication are involved in an odontogenic infection, making any surgical treatment impossible. Immunocompromised individuals are less able to fight off infections, and antibiotics are usually given. Evidence of systemic involvement (such as a fever higher than 38.5 °C, cervical lymphadenopathy, or malaise) also indicates antibiotic therapy, as do rapidly spreading infections, cellulitis, or severe pericoronitis. Drooling and difficulty swallowing are signs that the airway may be threatened, and may precede difficulty in breathing. Ludwig's angina and cavernous sinus thrombosis are rare but serious complications of odontogenic infections. Severe infections tend to be managed in hospital.

Prognosis

Most dental pain can be treated with routine dentistry. In rare cases, toothache can be a symptom of a life-threatening condition, such as a deep neck infection (compression of the airway by a spreading odontogenic infection) or something more remote, like a heart attack. Dental caries, if left untreated, follows a predictable natural history as it nears the pulp of the tooth. First it causes reversible pulpitis, which transitions to irreversible pulpitis, then to necrosis, then to necrosis with periapical periodontitis and, finally, to necrosis with periapical abscess. Reversible pulpitis can be stopped by removal of the decay and placement of a sedative dressing over any part of the cavity that is near the pulp chamber.
Irreversible pulpitis and pulp necrosis are treated with either root canal therapy or extraction. Infection of the periapical tissue will generally resolve with treatment of the pulp, unless it has expanded to cellulitis or a radicular cyst. The success rate of restorative treatment and sedative dressings in reversible pulpitis depends on the extent of the disease, as well as several technical factors, such as the sedative agent used and whether a rubber dam was used. The success rate of root canal treatment likewise depends on the degree of disease (root canal therapy for irreversible pulpitis has a generally higher success rate than for necrosis with periapical abscess) and many other technical factors.

Epidemiology

In the United States, an estimated 12% of people reported having had a toothache at some point in the six months before questioning. Individuals aged 18–34 reported much higher rates of toothache than those aged 75 or over. In a survey of Australian schoolchildren, 12% had experienced toothache before the age of five, and 32% by the age of 12. Dental trauma is extremely common and tends to occur more often in children than in adults. Toothache may occur at any age, in any gender, and in any geographic region. Diagnosing and relieving toothache is considered one of the main responsibilities of dentists. Irreversible pulpitis is thought to be the most common reason that people seek emergency dental treatment. Since dental caries associated with pulpitis is the most common cause, toothache is more common in populations at higher risk of dental caries. The prevalence of caries in a population depends on factors such as diet (refined sugars), socioeconomic status, and exposure to fluoride (such as areas without water fluoridation).

History, society and culture

The first known mention of tooth decay and toothache occurs on a Sumerian clay tablet now referred to as the "Legend of the worm".
It was written in cuneiform, recovered from the Euphrates valley, and dates from around 5000 BC. The belief that tooth decay and dental pain are caused by tooth worms is found in ancient India, Egypt, Japan, and China, and persisted until the Age of Enlightenment. Although toothache is an ancient problem, it is thought that ancient people suffered less dental decay due to the lack of refined sugars in their diet. On the other hand, diets were frequently coarser, leading to more tooth wear; for example, it has been hypothesized that ancient Egyptians suffered extensive tooth wear because desert sand blown on the wind mixed with the dough of their bread. The ancient Egyptians also wore amulets to prevent toothache. The Ebers papyrus (1500 BC) details a recipe to treat "gnawing of the blood in the tooth", which included fruit of the gebu plant, onion, cake, and dough, to be chewed for four days. Archigenes of Apamea described the use of a mouthwash made by boiling gallnuts and hallicacabum in vinegar, and a mixture of roasted earthworms, spikenard ointment, and crushed spider eggs. Pliny advised toothache sufferers to ask a frog to take away the pain by moonlight. Claudius' physician Scribonius Largus recommended "fumigations made with the seeds of the hyoscyamus scattered on burning charcoal ... followed by rinsings of the mouth with hot water, in this way ... small worms are expelled." In Christianity, Saint Apollonia is the patron saint of toothache and other dental problems. She was an early Christian martyr who was persecuted for her beliefs in Alexandria during the Imperial Roman age. A mob struck her repeatedly in the face until all her teeth were smashed. She was threatened with being burned alive unless she renounced Christianity, but she instead chose to throw herself onto the fire. Supposedly, toothache sufferers who invoke her name will find relief.
In the 16th century, the priest-physician Andrew Boorde described a "deworming" technique for the teeth: "And if it [toothache] do come by worms, make a candle of wax with Henbane seeds and light it and let the perfume of the candle enter into the tooth and gape over a dish of cold water and then you may take the worms out of the water and kill them on your nail." Albucasis (Abu al-Qasim Khalaf ibn al-Abbas Al-Zahrawi) used cautery for toothache, inserting a red-hot needle into the pulp of the tooth. The medieval surgeon Guy de Chauliac used a mixture of camphor, sulfur, myrrh, and asafetida to fill teeth and cure toothworm and toothache. The French surgeon Ambroise Paré recommended: "Toothache is, of all others, the most atrocious pain that can torment a man without being followed by death. Erosion (i.e. dental decay) is the effect of an acute and acrid humour. To combat this, one must have recourse to cauterization ... by means of cauterization ... one burns the nerve, thus rendering it incapable of again feeling or causing pain." In the Elizabethan era, toothache was an ailment associated with lovers, as in Massinger and Fletcher's play The False One. Toothache also appears in a number of William Shakespeare's plays, such as Othello and Cymbeline. In Much Ado About Nothing, Act III scene 2, when asked by his companions why he is feeling sad, a character replies that he has toothache, so as not to admit the truth that he is in love. There is reference to "toothworm" as the cause of toothache and to tooth extraction as a cure ("draw it"). In Act V, scene 1, another character remarks: "For there was never yet philosopher / That could endure the toothache patiently." In modern parlance, this translates to the observation that philosophers are still human and feel pain, even though they claim to have transcended human suffering and misfortune. In effect, the character is rebuking his friend for trying to make him feel better with philosophical platitudes.
The Scottish poet Robert Burns wrote "Address to the Toothache" in 1786, inspired after suffering from it himself. The poem elaborates on the severity of toothache, describing it as the "hell o' a' diseases" (hell of all diseases). A number of plants and trees include "toothache" in their common names. Prickly ash (Zanthoxylum americanum) is sometimes termed "toothache tree" and its bark "toothache bark", whilst Ctenium americanum is sometimes termed "toothache grass", and Acmella oleracea is called the "toothache plant". Pellitory (Anacyclus pyrethrum) was traditionally used to relieve toothache. In Kathmandu, Nepal, there is a shrine to Vaishya Dev, the Newar god of toothache. The shrine consists of part of an old tree to which sufferers of toothache nail a rupee coin in order to ask the god to relieve their pain. The lump of wood is called the "toothache tree" and is said to have been cut from the legendary tree Bangemudha. On this street, many traditional tooth pullers still work, and many of the city's dentists have placed advertisements next to the tree. The phrase "toothache in the bones" is sometimes used to describe the pain of certain types of diabetic neuropathy.
Horseshoe crab
Horseshoe crabs are arthropods of the family Limulidae and the only surviving xiphosurans. Despite their name, they are not true crabs or even crustaceans; they are chelicerates, more closely related to arachnids such as spiders, ticks, and scorpions. The body of a horseshoe crab is divided into three main parts: the cephalothorax, abdomen, and telson. The largest of these, the cephalothorax, houses most of the animal's eyes, limbs, and internal organs. It also gives the animal its name, as its shape somewhat resembles that of a horseshoe. Horseshoe crabs have been described as "living fossils", having changed little since they first appeared in the Triassic. Only four species of horseshoe crab are extant today. Most are marine, though the mangrove horseshoe crab is often found in brackish water, and certain extinct species transitioned to living in freshwater. Horseshoe crabs primarily live on the bottom, but they can swim if needed. Today their distribution is limited: they are found only along the east coasts of North America and the coasts of South, Southeast, and East Asia. Horseshoe crabs are often caught for their blood, the source of Limulus amebocyte lysate, a reagent used to detect bacterial endotoxins. Additionally, the animals are used as fishing bait in the United States and eaten as a delicacy in some parts of Asia. In recent years, horseshoe crabs have experienced a population decline, mainly due to coastal habitat destruction and overharvesting. To ensure their continued existence, many areas have enacted harvesting regulations and established captive breeding programs.

Phylogeny and evolution

The fossil record of xiphosurans extends back to the Late Ordovician, around 445 million years ago. Modern horseshoe crabs first appeared approximately 250 million years ago, during the Early Triassic. Because they have seen little morphological change since then, extant (surviving) forms have been described as "living fossils".
Horseshoe crabs resemble crustaceans but belong to a separate subphylum of the arthropods, Chelicerata. Horseshoe crabs are closely related to the extinct eurypterids (sea scorpions), which include some of the largest arthropods to have ever existed, and the two may be sister groups. The difficult-to-classify chasmataspidids are also thought to be closely related to horseshoe crabs. The radiation of horseshoe crabs resulted in 22 known species, of which only 4 remain. The Atlantic species is sister to the three Asian species, the latter of which are likely the result of two divergences relatively close in time. The last common ancestor of the four extant species is estimated to have lived about 135 million years ago, in the Cretaceous. Limulidae is the only extant family of the order Xiphosura, and contains all four living species of horseshoe crabs:

Carcinoscorpius rotundicauda, the mangrove horseshoe crab, found in South and Southeast Asia
Limulus polyphemus, the Atlantic or American horseshoe crab, found along the Atlantic coast of the United States and the southeast Gulf of Mexico
Tachypleus gigas, the Indo-Pacific, Indonesian, Indian, or southern horseshoe crab, found in South and Southeast Asia
Tachypleus tridentatus, the Chinese, Japanese, or tri-spine horseshoe crab, found in Southeast and East Asia

Genera

After Bicknell et al. 2021 and Lamsdell et al. 2020

Uncertain placement:
†Albalimulus? Bicknell & Pates, 2019, Ballagan Formation, Scotland, Early Carboniferous (Tournaisian) (considered Xiphosura incertae sedis by Lamsdell, 2020)
†Casterolimulus Holland, Erickson & O'Brien, 1975, Late Cretaceous (Maastrichtian), Fox Hills Formation, North Dakota, USA (inconsistently placed in this family)
†Heterolimulus gadeai Vía & Villalta, 1966, Alcover Limestone Formation, Spain, Middle Triassic (Ladinian)
†Limulitella?
Størmer, 1952, Middle-Upper Triassic, France, Germany, Tunisia, Russia
†Sloveniolimulus Bicknell et al., 2019, Strelovec Formation, Slovenia, Middle Triassic (Anisian)
†Tarracolimulus Romero & Vía Boada, 1977, Alcover Limestone Formation, Spain, Middle Triassic (Ladinian)
†Victalimulus Riek & Gill, 1971, Lower Cretaceous (Aptian), Korumburra Group, NSW, Australia
†Yunnanolimulus Zhang et al., 2009, Middle Triassic (Anisian), Guanling Formation, Yunnan, China
†Mesolimulus, Middle Triassic-Late Cretaceous, England, Spain, Siberia, Germany, Morocco
†Ostenolimulus Lamsdell et al., 2021, Early Jurassic (Sinemurian), Moltrasio Limestone, Italy
†Volanalimulus Lamsdell, 2020, Early Triassic, Madagascar

Subfamily Limulinae Leach, 1819:
†Crenatolimulus Feldmann et al., 2011, Upper Jurassic (upper Tithonian) Kcynia Formation, Poland; Lower Cretaceous (Albian) Glen Rose Formation, Texas, USA
Limulus O. F. Müller, 1785, Pierre Shale, United States, Late Cretaceous (Maastrichtian); Atlantic North America, Recent

Subfamily Tachypleinae Pocock, 1902:
Carcinoscorpius Pocock, 1902, Asia, Recent
Tachypleus Leach, 1819, Upper Cretaceous (Cenomanian) Haqel and Hjoula Konservat-Lagerstätten, Lebanon; Upper Eocene Domsen Sands, Germany; Asia, Recent

Phylogeny

The horseshoe crab's position within Chelicerata is complicated. However, most morphological analyses have placed them outside the Arachnida. This assumption was challenged when a genetics-based phylogeny found horseshoe crabs to be the sister group to the ricinuleids, thereby making them arachnids. In response, a more recent paper has again placed horseshoe crabs outside the arachnids; this study used new and more complete sequencing data while also sampling a larger number of taxa. Below is a cladogram showing the internal relationships of Limulidae (modern horseshoe crabs) based on morphology. It contains both extant and extinct members.
Whole genome duplication

The common ancestor of arachnids and xiphosurans (the group that includes horseshoe crabs) underwent a whole-genome duplication (WGD) event. This was followed by at least two, possibly three, WGDs in a common ancestor of the living horseshoe crabs. This gives them unusually large genomes for invertebrates (the genomes of C. rotundicauda and T. tridentatus are approximately 1.72 Gb each). Evidence for the duplication events includes similarity in structure between chromosomes (synteny) and clustering of homeobox genes. Over time, many of the duplicated genes have changed through neofunctionalization or subfunctionalization, meaning their functions differ from the originals.

Evolution of sexual size dimorphism

Several hypotheses have been offered for the size difference between male and female horseshoe crabs. This phenomenon, known as sexual size dimorphism, results in females having a larger average size than males. The trend is likely due to a combination of two things. First, females take a year longer to mature and undergo an additional molt, giving them a larger average body size. Second, larger female horseshoe crabs can house more eggs within their bodies, letting them pass on more genetic material than smaller females during each mating cycle and making larger females more prevalent.

Anatomy and physiology

General body plan

Like all arthropods, horseshoe crabs have segmented bodies with jointed limbs, covered by a protective cuticle made of chitin. Their heads are composed of several segments, which fuse during embryonic development. Horseshoe crabs are chelicerates, meaning their bodies are composed of two main parts (tagmata): the cephalothorax and the opisthosoma. The first tagma, the cephalothorax or prosoma, is a fusion of the head and thorax.
This tagma is also covered by a large, semicircular carapace that acts like a shield around the animal's body. It is shaped like the hoof of a horse, giving the animal its common name. In addition to the two main tagmata, the horseshoe crab also possesses a long, tail-like section known as the telson. In total, horseshoe crabs have six pairs of appendages on the cephalothorax. The first pair are the chelicerae, which give chelicerates their name; in horseshoe crabs, these look like tiny pincers in front of the mouth. Behind the chelicerae are the pedipalps, which are primarily used as legs. In the final molt of males, the ends of the pedipalps are modified into specialized grasping claws used in mating. Following the pedipalps are three pairs of walking legs and a pair of pusher legs for moving through soft sediment. Each pusher leg is biramous, or divided into two separate branches. The front branch bears a flat, leaf-like end called the flabellum. The back branch is far longer and looks similar to a walking leg; however, rather than ending in just a claw, it has four leaf-like ends arranged like the petals of a flower. The final segment of the cephalothorax was originally part of the abdomen but fused during embryonic development. On it are two flap-like appendages known as chilaria. If severed from the body, lost legs or the telson may slowly regenerate, and cracks in the body shell can heal. The opisthosoma or abdomen of a horseshoe crab is composed of several fused segments. Similar to a trilobite, the abdomen is made up of three lobes: a medial lobe in the middle and a pleural lobe on either side. Attached to the perimeter of each pleural lobe is a flat, serrated structure known as the flange. The flanges on either side are connected by the telson embayment, which itself is attached to the medial lobe. Along the lines where these lobes meet are six sets of indentations known as apodemes.
Each of these serves as a muscle attachment point for the animal's twelve movable spines. On the underside of the abdomen are several biramous limbs; the outer branches are flat and broad, while the inner ones are narrower. Closest to the front is a plate-like structure made of two fused appendages. This is the genital operculum, where horseshoe crabs keep their reproductive organs. Following the operculum are five pairs of book gills. While mainly used for breathing, the book gills can also be used for swimming. At the end of a horseshoe crab's abdomen is a long, tail-like spine known as the telson. It is highly mobile and serves a variety of functions.

Nervous system

Eyes

Horseshoe crabs have a variety of eyes that provide them with useful visual information. The most obvious of these are two large compound eyes on top of the carapace. This feature is unusual, as all other living chelicerates have lost compound eyes over the course of their evolution. In adult horseshoe crabs, each compound eye comprises around 1,000 individual units known as ommatidia. Each ommatidium is made up of a ring of retinal and pigment cells surrounding a secondary visual cell known as the eccentric cell, which gets its name from the way it behaves: the eccentric cell is coupled with the dendrites of the normal retinal cells, so that when a normal cell depolarizes in the presence of light, the eccentric cell does too. A horseshoe crab's compound eyes are less complex and organized than those of most other arthropods. The ommatidia are arranged messily, in what has been called an "imperfect hexagonal array", and have a highly variable number of photoreceptors (between 4 and 20) in their retinas. Although each ommatidium typically has one eccentric cell, there are sometimes two, and occasionally more. All the eye's photoreceptors, both rods and cones, have a single visual pigment with a peak absorption of around 525 nanometers.
This differs from insects and decapod crustaceans, whose photoreceptors are sensitive to different parts of the light spectrum. Horseshoe crabs have relatively poor vision and, to compensate, have the largest rods and cones of any known animal, about 100 times the size of humans'. Furthermore, their eyes are a million times more sensitive to light at night than during the day. At the front of the animal, along the cardiac ridge, is a pair of eyes known as median ocelli. Their retinas are even less organized than those of the compound eyes, having between 5 and 11 photoreceptors paired with one or two secondary visual cells called arhabdomeric cells. Arhabdomeric cells are equivalent to eccentric cells, functioning identically. The median ocelli are unique in having two distinct visual pigments. While the first functions similarly to the pigment in the compound eyes, the second has a peak absorption of around 360 nanometers, allowing the animal to see ultraviolet light. Other, more rudimentary eyes in horseshoe crabs include the endoparietal ocelli, the two lateral ocelli, two ventral ocelli, and a cluster of photoreceptors on the abdomen and telson. The endoparietal, lateral, and ventral ocelli are very similar to the median ocelli, except that, like the compound eyes, they only see visible light with a peak absorption of around 525 nanometers. The endoparietal eye further differs in being a fusion of two separate ocelli. This eye is found not far behind the median eyes and sits directly on the cardiac ridge. The two ventral ocelli are located on the underside of the cephalothorax near the mouth and likely help to orient the animal when walking or swimming. The lateral eyes can be found directly behind the compound eyes and become functional just before a horseshoe crab larva hatches. The telson's photoreceptors are unique in that they are spaced throughout the structure rather than located in a fixed spot. 
Together with the UV-sensing median ocelli, these photoreceptors have been found to influence the animal's circadian rhythm. Circulation and respiration Like all arthropods, horseshoe crabs have an open circulatory system. This means that instead of using a system of closed-off veins and arteries, gases are transported through a cavity called the hemocoel. The hemocoel contains hemolymph, a fluid that fills all parts of the cavity and serves as the animal's blood. Rather than using iron-based hemoglobin, horseshoe crabs transport oxygen with a copper-based protein called hemocyanin, giving their blood a bright blue color. The blood also contains two types of cells: amebocytes, which are utilized in clotting, and cyanocytes, which create hemocyanin. Horseshoe crabs pump blood with a long, tubular heart located in the middle of their body. Like the hearts of vertebrates, the hearts of these animals have two separate states: a state of contraction known as systole, and a state of relaxation known as diastole. At the beginning of systole, blood leaves the heart through a large artery known as the aorta and through numerous arteries parallel to the heart. Next, the arteries dump blood into large cavities of the hemocoel surrounding the animal's tissues. Larger cavities lead to smaller cavities, allowing the hemocoel to oxygenate all the animal's tissues. During diastole, blood flows from the hemocoel to a cavity known as the pericardial sinus. From there, blood re-enters the heart and the cycle begins again. Horseshoe crabs breathe through modified swimming appendages beneath their abdomen known as book gills. While they appear smooth on the outside, the insides of these book gills are lined with several thin "pages" called lamellae. Each lamella is hollow and contains an extension of the hemocoel, allowing gases to diffuse between a horseshoe crab's blood and the external environment. 
Roughly 80–200 lamellae are present in each gill, with all ten gills together giving the animal a total breathing surface area of about two square meters. When underwater, the lamellae are routinely aerated by rhythmic movement of the book gills. These movements create a current that enters through two gaps between the cephalothorax and abdomen and exits on either side of the telson. Feeding, digestion, and excretion Horseshoe crabs first break up their food using bristles known as gnathobases located at the coxa, or base, of their walking limbs. Gnathobases on the right and left legs form a cavity known as the food groove that begins near the pusher legs and extends to the animal's mouth. The end of the groove is closed off by the animal's chilaria. To break up any food, each pair of coxae moves in the opposite direction parallel to the ones in front of and behind it. This motion happens while feeding and walking, pushing food towards the mouth. Horseshoe crabs catch soft prey with claws on their second to fifth legs and place it in the food groove to be ground up. For harder prey, horseshoe crabs use a pair of stout, cuspid gnathobases (informally known as "nutcrackers") on the back of their sixth legs. After the food is sufficiently torn up, it is moved by the chelicerae into the mouth for further digestion. Horseshoe crabs are some of the only living chelicerates with guts that can process solid food. Their digestive system is J-shaped, lined with a cuticle, and can be divided into three main sections: the foregut, midgut, and hindgut. The foregut is contained in the animal's cephalothorax and comprises the esophagus, crop, and gizzard. The esophagus moves food from the mouth to the crop, where it is stored before entering the gizzard. The gizzard is a muscular, toothed organ that serves to pulverize the food from the crop and regurgitate any indigestible particles. 
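As a rough back-of-the-envelope check on the respiratory figures above (80–200 lamellae per gill, ten gills, and a total breathing surface of about two square meters), the implied surface area of a single lamella can be estimated. The mid-range lamella count used here is an illustrative assumption, not a measured value:

```python
# Estimate the gas-exchange area per lamella, assuming the mid-range
# of the quoted 80-200 lamellae per gill (illustrative only).
lamellae_per_gill = (80 + 200) / 2   # assumed mid-range count
gills = 10                           # five pairs of book gills
total_area_m2 = 2.0                  # quoted total breathing surface

total_lamellae = lamellae_per_gill * gills
per_lamella_cm2 = total_area_m2 * 1e4 / total_lamellae  # m^2 -> cm^2

print(f"{total_lamellae:.0f} lamellae, ~{per_lamella_cm2:.1f} cm^2 each")
```

Under these assumptions, each lamella contributes on the order of ten square centimeters of exchange surface, which illustrates how a compact set of gills can reach the quoted two-square-meter total.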
The foregut terminates in the pyloric valve and sphincter, a muscular door of sorts that separates it from the midgut. The midgut is composed of a short stomach and a long intestinal tube. Connected to the stomach are a pair of large, sac-like digestive ceca known as hepatopancreases. These ceca fill most of the cephalothoracic and abdominal hemocoel and are where most digestion and nutrient absorption takes place. Before and following digestion, the midgut lining (epithelium) secretes a peritrophic membrane made of chitin and mucoproteins that surrounds the food and later the feces. Horseshoe crabs excrete waste through both their book gills and hindgut. Similar to many aquatic animals, horseshoe crabs have an ammonotelic metabolism and eliminate ammonia and other small toxins through diffusion at their gills. After being processed in the midgut, waste is passed into a muscular tube known as the hindgut or rectum and then excreted through a sphincter known as the anus. Externally, this opening is located on the underside of the animal, right below its telson. Distribution and habitat In the modern day, horseshoe crabs have a relatively limited distribution. The three Asian species mainly occur in South and Southeast Asia along the Bay of Bengal and the coasts of Indonesia. A notable exception is the tri-spine horseshoe crab, whose range extends northward to the coasts of China, Taiwan, and southern Japan. The American species lives from the coast of Nova Scotia to the northern Gulf of Mexico, with another population residing around the Yucatán Peninsula. Extant horseshoe crabs generally live in salt water, though one species, the mangrove horseshoe crab (Carcinoscorpius rotundicauda), is often found in more brackish environments. Past adaptation to freshwater According to a phylogeny from 2015, now-extinct xiphosurans transitioned to freshwater at least five times throughout their history. 
This same transition happened twice in the horseshoe crabs Victalimulus and Limulitella, both of which inhabited environments such as swamps and rivers. Behavior and life history Diet Horseshoe crabs primarily eat worms and mollusks living on the ocean floor. They may also feed on crustaceans and even small fish. Foraging usually takes place at night. Locomotion Horseshoe crabs live a primarily benthic lifestyle, preferring to stay on the bottom. However, they are also known to swim. This behavior is widespread in young individuals and those traveling to the shore to breed. Horseshoe crabs swim upside-down with their bodies pointed downwards at an angle. They use their telson as a rudder, turning toward the side to which it is moved. To swim, the animal's retracted legs move to the front of its cephalothorax, extend, and stroke towards the back. This motion happens in unison with the genital operculum and the first three pairs of book gills. While the front appendages reset, the back two pairs of book gills perform a smaller stroke. Horseshoe crabs have a variety of ways to right themselves when flipped over. The most common method involves the animal arching its opisthosoma towards the carapace and balancing its telson on the substrate. The animal then moves the telson while beating its legs and gills. This causes the animal to tilt and eventually flip over. Furthermore, horseshoe crabs can right themselves while swimming. This method involves the animal swimming to the bottom, rolling on its side, and touching the bottom with its pusher legs while still in the water column. It has been found that harvesting blood from horseshoe crabs drastically reduces their daily activity, decreasing their overall movement. Growth and development Baby horseshoe crabs begin their lives as a "trilobite larva", a name given due to their resemblance to a trilobite. Upon hatching, larvae typically measure around long. Their telson is small, and they lack three pairs of book gills. 
In all other respects, the larvae appear like minuscule adults. Baby horseshoe crabs can swim and burrow in sediment after emerging from their egg. As the larvae molt into juveniles, their telson gets longer and they gain their missing book gills. Juveniles can attain a carapace width of around in their first year. With each molt, the juvenile grows about 33% larger. This process continues until the animal reaches its adult size. When mature, female horseshoe crabs are typically 20–30% larger than males. The smallest species is the mangrove horseshoe crab (C. rotundicauda) and the largest is the tri-spine horseshoe crab (T. tridentatus). On average, males of C. rotundicauda are about long, including a telson that is about , and a carapace about wide. Some southern populations (in the Yucatán Peninsula) of L. polyphemus are somewhat smaller, but otherwise, this species is larger. In the largest species, T. tridentatus, females can reach as much as long, including their telson, and up to in weight. This is only about longer than the largest females of L. polyphemus and T. gigas, but roughly twice the weight. Reproduction During the breeding season (spring and summer in the northeastern US, year-round in warmer locations), horseshoe crabs migrate to shallow coastal waters. Nesting typically happens at high tides around full or new moons. When nesting, they spawn on beaches and in salt marshes. When mating, the smaller male clings to the back, or opisthosoma, of the larger female using specialized pedipalps. This typically leaves scars, allowing younger females to be easily identified. Meanwhile, the female digs a hole in the sediment and lays between 2,000 and 30,000 large eggs. Unusually for arthropods, fertilization occurs externally. In most species, both the attached male and additional "satellite" males take part in fertilization. Satellite males surround the main pair and may have some success fertilizing eggs. In L. 
polyphemus, the eggs take about two weeks to hatch, with shorebirds eating many of them in the process. Natural breeding of horseshoe crabs in captivity has proven to be difficult. Some evidence indicates that mating takes place only in the presence of sand or mud in which horseshoe crab eggs have previously hatched. However, it is not known with certainty what the animals sense in the sand, how they sense it, or why they only mate in its presence. In contrast, artificial insemination and induced spawning have been performed since the 1980s. Additionally, eggs and juveniles collected from the wild can easily be raised to adulthood in a captive environment. Relationship with humans Consumption Though they have little meat, horseshoe crabs are valued as a delicacy in some parts of East and Southeast Asia. The meat is white, has a rubbery texture similar to that of lobster, and possesses a slightly salty aftertaste. Horseshoe crab can be eaten both raw and cooked, but must be properly prepared to prevent food poisoning. Furthermore, only certain species can be eaten. There have been numerous reports of poisonings after consuming mangrove horseshoe crabs (Carcinoscorpius rotundicauda), as their meat contains tetrodotoxin. While horseshoe crab meat is commonly prepared by grilling or stewing, it can also be pickled in vinegar or stir-fried with vegetables. Many recipes involve the use of various spices, herbs, and chilies to give the dish more flavor. In addition to their meat, horseshoe crabs are valued for their eggs. Much like the meat, only the eggs of specific species can be eaten. The eggs of mangrove horseshoe crabs contain tetrodotoxin and will result in food poisoning if consumed. Use in fisheries In the United States, horseshoe crabs are used as bait to fish for eels, whelk, and conch. Nearly 1 million crabs are harvested yearly for bait in the United States, dwarfing the biomedical mortality. 
However, fishing with horseshoe crabs was banned indefinitely in New Jersey in 2008, with a moratorium on harvesting to protect the red knot, a shorebird that eats the crabs' eggs. A ban on catching female crabs was put in place in Delaware, and a permanent moratorium is in effect in South Carolina. Use in medicine The blood of a horseshoe crab contains cells known as amebocytes. These play a similar role to the white blood cells of vertebrates in defending the organism against pathogens. Amebocytes from the blood of Limulus polyphemus are used to make Limulus amebocyte lysate (LAL), which is used for the detection of bacterial endotoxins in medical applications. There is a high demand for the blood, the harvest of which involves collecting the animals, bleeding them, and then releasing them back into the sea. Most of the animals survive the process; mortality is correlated with both the amount of blood extracted from an individual animal and the stress experienced during handling and transportation. Estimates of mortality rates following blood harvesting vary from 3–15% to 10–30%. Approximately 500,000 Limulus are harvested annually for this purpose. Bleeding may also prevent female horseshoe crabs from being able to spawn or decrease the number of eggs they can lay. According to the biomedical industry, up to 30% of an individual's blood is removed. NPR disputes this claim, reporting that the process "can deplete them of more than half their volume of blue blood". The horseshoe crabs spend between one and three days away from the ocean before being returned. As long as their gills stay moist, they can survive on land for four days. Some scientists are skeptical that certain companies return their horseshoe crabs to the ocean at all, instead suspecting them of selling the horseshoe crabs as fishing bait. The harvesting of horseshoe crab blood in the pharmaceutical industry is in decline. 
In 1986, Kyushu University researchers discovered that the same test could be achieved by using isolated Limulus clotting factor C, an enzyme found in LAL, as by using LAL itself. Jeak Ling Ding, a National University of Singapore researcher, patented a process for manufacturing a recombinant form of the enzyme (rFC); on 8 May 2003, synthetic rFC made via her patented process became available for the first time. Industry at first took little interest in the new product, however, as it was patent-encumbered, not yet approved by regulators, and sold by a single manufacturer, Lonza Group. In 2013, however, Hyglos GmbH also began manufacturing its own rFC product. This, combined with the acceptance of rFC by European regulators, the comparable cost of LAL and rFC, and support from Eli Lilly and Company, which committed to using rFC in lieu of LAL, is projected to all but end the practice of blood harvesting from horseshoe crabs. Vaccine research and development during the COVID-19 pandemic added an additional "strain on the American horseshoe crab." In December 2019, the US Senate released a report encouraging the Food and Drug Administration to "establish processes for evaluating alternative pyrogenicity tests and report back [to the Senate] on steps taken to increase their use"; PETA backed the report. In June 2020, it was reported that U.S. Pharmacopeia had declined to give rFC equal standing with horseshoe crab blood. Without approval as an industry-standard testing material, U.S. companies will have to withstand the scrutiny of showing that rFC is safe and effective for their desired uses, which may deter use of the horseshoe crab blood substitute. Conservation status Development along shorelines is dangerous to horseshoe crab spawning, limiting available space and degrading habitat. Bulkheads can block access to intertidal spawning regions as well. 
The population of Indo-Pacific horseshoe crabs (Tachypleus gigas) in Malaysia and Indonesia has decreased dramatically since 2010. This is primarily due to overharvesting, as horseshoe crabs are considered a delicacy in countries like Thailand. The individuals most likely to be targeted are gravid females, as they can be sold for both their meat and eggs. This method of harvesting has led to an unbalanced sex ratio in the wild, which also contributes to the area's declining population. Because of habitat destruction for shoreline development, use in fishing, plastic pollution, status as a culinary delicacy, and use in research and medicine, horseshoe crabs face both endangerment and local extinction. One species, the tri-spine horseshoe crab (Tachypleus tridentatus), has already been declared locally extinct in Taiwan. With a greater than 90% decrease in juvenile T. tridentatus, Hong Kong is suspected to be the next area to declare tri-spine horseshoe crabs locally extinct. This species is listed as endangered on the IUCN Red List, specifically because of overexploitation and the loss of critical habitat. To preserve and ensure a continuous supply of horseshoe crabs, a breeding center was built in Johor, Malaysia, where animals are bred and released back into the ocean in the thousands once every two years. The animals are estimated to take around 12 years to reach a size suitable for consumption. A low horseshoe crab population in Delaware Bay is hypothesized to endanger the future of the red knot. Red knots, long-distance migratory shorebirds, feed on the protein-rich eggs during their stopovers on the beaches of New Jersey and Delaware. An effort is ongoing to develop adaptive-management plans to regulate horseshoe crab harvests in the bay in a way that protects migrating shorebirds. 
In 2023, the US Fish and Wildlife Service halted the harvesting of horseshoe crabs in the Cape Romain National Wildlife Refuge, South Carolina, from March 15 to July 15 to aid their reproduction. This decision was influenced by the importance of horseshoe crab eggs as a food source for migratory birds, the ongoing use of horseshoe crabs for bait, and the use of their blood in medical products. The ban supports the conservation goals of the refuge, which spans 66,000 acres (26,700 hectares) of marshes, beaches, and islands near Charleston. Declining horseshoe crab populations also endanger the population stability of other species in shared ecosystems, such as the bird species on the East Coast of the United States that feed upon their eggs.
https://en.wikipedia.org/wiki/Xiphosura
Xiphosura
Xiphosura (a name referring to the group's sword-like telson) is an order of arthropods related to arachnids. They are more commonly known as horseshoe crabs (a name applied more specifically to the only extant family, Limulidae). They first appeared in the Hirnantian (Late Ordovician). Currently, there are only four living species. Xiphosura contains one suborder, Xiphosurida, and several stem-genera. The group has hardly changed in appearance in hundreds of millions of years; the modern horseshoe crabs look almost identical to prehistoric genera and are considered to be living fossils. The most notable difference between ancient and modern forms is that the abdominal segments of present species are fused into a single unit in adults. Xiphosura were historically placed in the class Merostomata, although this term was intended to also encompass the eurypterids, and thus denoted what is now thought to be an unnatural (paraphyletic) group (although this grouping has been recovered in some recent cladistic analyses). Although the name Merostomata is still seen in textbooks, without reference to the Eurypterida, some have urged that this usage be discouraged. The Merostomata label originally did not include Eurypterida; they were added later as understanding of the extinct group evolved. Now Eurypterida is classified within Sclerophorata together with the arachnids, and Merostomata is therefore a synonym of Xiphosura. Several recent phylogenomic studies place Xiphosura within Arachnida, often as the sister group of Ricinulei; included among them are taxonomically comprehensive analyses of both morphology and genomes, which have recovered Merostomata as a derived clade of arachnids. Description Modern xiphosurans reach up to in adult length, but the Paleozoic species were often far smaller, some as small as long. Their bodies are divided into an anterior prosoma and a posterior opisthosoma, or abdomen. 
The upper surface of the prosoma is covered by a semicircular carapace, while the underside bears five pairs of walking legs and a pair of pincer-like chelicerae. The mouth is located on the underside of the center of the prosoma, between the bases of the walking legs, and lies behind a lip-like structure called the labrum. The exoskeleton consists of a tough cuticle but does not contain any crystalline biominerals. Like scorpions, xiphosurans have a hyaline exocuticular layer that exhibits UV fluorescence. Xiphosurans have up to four eyes, located in the carapace. Two compound eyes sit on the sides of the prosoma, with one or two median ocelli towards the front. The compound eyes are simpler in structure than those of other arthropods, with the individual ommatidia not being arranged in a compact pattern. They can probably detect movement, but are unlikely to be able to form a true image. In front of the ocelli is an additional organ that probably functions as a chemoreceptor. The first four pairs of legs end in pincers and have a series of spines, called the gnathobase, on the inner surface. The spines are used to masticate the food, tearing it up before passing it to the mouth. The fifth and final pair of legs, however, has no pincers or spines, instead having structures for cleaning the gills and pushing mud out of the way while burrowing. Behind the walking legs is a sixth set of appendages, the chilaria, which are greatly reduced in size and covered in hairs and spines. These are thought to be vestiges of the limbs of an absorbed first opisthosomal segment. The opisthosoma is divided into a forward mesosoma, with flattened appendages, and a metasoma at the rear, which has no appendages. In modern forms, the whole of the opisthosoma is fused into a single unsegmented structure. The underside of the opisthosoma carries the genital openings and five pairs of flap-like gills. 
The opisthosoma terminates in a long caudal spine, commonly referred to as a telson (though this same term is also used for a different structure in crustaceans). The spine is highly mobile, and is used to push the animal upright if it is accidentally turned over. Internal anatomy The mouth opens into a sclerotised oesophagus, which leads to a crop and gizzard. After grinding up its food in the gizzard, the animal regurgitates any inedible portions, and passes the remainder to the true stomach. The stomach secretes digestive enzymes, and is attached to an intestine and two large caeca that extend through much of the body, and absorb the nutrients from the food. The intestine terminates in a sclerotised rectum, which opens just in front of the base of the caudal spine. Xiphosurans have well-developed circulatory systems, with numerous arteries that send blood from the long tubular heart to the body tissues, and then to two longitudinal sinuses next to the gills. After being oxygenated, the blood flows into the body cavity, and back to the heart. The blood contains haemocyanin, a blue copper-based pigment performing the same function as haemoglobin in vertebrates, and also has blood cells that aid in clotting. The excretory system consists of two pairs of coxal glands connected to a bladder that opens near the base of the last pair of walking legs. The brain is relatively large, and, as in many arthropods, surrounds the oesophagus. In both sexes, the single gonad lies next to the intestine and opens on the underside of the opisthosoma. Reproduction Xiphosurans move to shallow water to mate. The male climbs onto the back of the female, gripping her with his first pair of walking legs. The female digs out a depression in the sand, and lays from 200 to 300 eggs, which the male covers with sperm. The pair then separates, and the female buries the eggs. The egg is about across. 
Inside the egg, the embryo goes through four moults before it hatches into a larva, often called a 'trilobite larva' due to its superficial resemblance to a trilobite. At this stage it has no telson yet, and the larva is lecithotrophic (non-feeding) and planktonic, subsisting on the maternal yolk before settling to the bottom to moult, after which the telson first appears. Through a series of successive moults, the larva develops additional gills, increases the length of its caudal spine, and gradually assumes the adult form. Modern xiphosurans reach sexual maturity after about three years of growth. Evolutionary history The oldest known stem-xiphosuran, Lunataspis, is known from the Late Ordovician of Canada, around 445 million years ago. No xiphosurans are known from the following Silurian. Xiphosurida first appears during the late Devonian. A major radiation of freshwater xiphosurids, the Belinuridae, is known from the Carboniferous, with the oldest representatives of the modern family Limulidae also possibly appearing during this time, though they only appear in abundance during the Triassic. Another major radiation of freshwater xiphosurans, the Austrolimulidae, is known from the Permian and Triassic. As a group, xiphosurans have never shown much diversity in terms of species. Fewer than 50 fossil species are known from the Carboniferous period, when they were at their most diverse. The last common ancestor of modern limulids has been suggested to date to the Jurassic-Cretaceous boundary based on molecular clock dating, though, depending on the phylogeny, the fossil record may suggest a split as old as the Triassic. 
Classification Xiphosuran classification:

Order Xiphosura Latreille, 1802
  †Maldybulakia Tesakov & Alekseev, 1998 (Devonian)
  †Willwerathia Størmer, 1969 (Devonian)
  †Kasibelinuridae Pickett, 1993 (Middle Devonian to Late Devonian)
  Suborder Xiphosurida
    †Infraorder Belinurina
      †Belinuridae Zittel & Eastman, 1913 (Middle Devonian to Upper Carboniferous)
    Infraorder Limulina
      †Bellinuroopsis Chernyshev, 1933 (Carboniferous)
      †Rolfeiidae Selden & Siveter, 1987 (Early Carboniferous to Early Permian)
      Superfamily †Paleolimuloidea Anderson & Selden, 1997
        †Paleolimulidae Raymond, 1944 (Carboniferous to Permian)
      Superfamily Limuloidea
        †Valloisella Racheboeuf, 1992 (Carboniferous)
        †Austrolimulidae Riek, 1955 (Early Permian to Early Jurassic)
        Limulidae Zittel, 1885 (Triassic to Recent)
          Limulinae Zittel, 1885 (Late Jurassic to Present)
          Tachypleinae Pocock, 1902 (Late Cretaceous to Recent)

Taxa removed from Xiphosura Two groups were originally included in the Xiphosura, but have since been assigned to separate classes:

Aglaspida Walcott, 1911 (Cambrian to Ordovician)
Chasmataspidida Caster & Brooks, 1956 (Lower Ordovician)

Cladogram Cladogram after Lamsdell 2020.
https://en.wikipedia.org/wiki/Turbot
Turbot
The turbot (Scophthalmus maximus) is a relatively large species of flatfish in the family Scophthalmidae. It is a demersal fish native to marine or brackish waters of the Northeast Atlantic, the Baltic Sea, and the Mediterranean Sea. It is an important food fish. Turbot in the Black Sea were often included in this species, but are now generally regarded as a separate species, the Black Sea turbot or kalkan (S. maeoticus). True turbot are not found in the Northwest Atlantic; the "turbot" of that region, which was involved in the so-called "Turbot War" between Canada and Spain, is the Greenland halibut or Greenland turbot (Reinhardtius hippoglossoides). Etymology The word comes from Old French, possibly a derivative of a Latin word meaning 'spinning top', a possible reference to the fish's shape. Another possible origin is an Old Swedish compound of words meaning 'thorn' and 'stump, butt, flatfish', which may also be a reference to its shape (compare native English halibut). An early reference to the turbot can be found in a satirical poem ("The Emperor's Fish") by Juvenal, a Roman poet of the late 1st and early 2nd centuries AD, suggesting this fish was a delicacy in the Roman Empire. Description The turbot is a large left-eyed flatfish found primarily close to shore in sandy shallow waters throughout the Mediterranean, the Baltic Sea, the Black Sea, and the North Atlantic. The European turbot has an asymmetric disk-shaped body, and has been known to grow up to long and in weight. Fisheries Turbot is highly prized as a food fish for its delicate flavour, and is also known as brat, breet, britt, or butt. It is a valuable commercial species, acquired through aquaculture and trawling. Turbot are farmed in Bulgaria, Canada, France, Spain, Portugal, Romania, Turkey, Chile, Norway, and China. Turbot has a bright white flesh that retains this appearance when cooked. 
Like all flatfish, turbot yields four fillets with meatier topside portions that may be baked, poached, steamed, or pan-fried.
https://en.wikipedia.org/wiki/Scutum
Scutum
The scutum (plural: scuta) was a type of shield used among Italic peoples in antiquity, most notably by the army of ancient Rome starting about the fourth century BC. The Romans adopted it when they switched from the military formation of the hoplite phalanx of the Greeks to the formation with maniples. In the former, the soldiers carried a round shield, which the Romans called a clipeus. In the latter, they used the scutum, which was larger. Originally, it was oblong and convex, but by the first century BC, it had developed into the rectangular, semi-cylindrical shield that is popularly associated with the scutum in modern times. This was not the only kind the Romans used; Roman shields were of varying types depending on the role of the soldier who carried them. Oval, circular, and rectangular shapes were used throughout Roman history. History The scutum is first depicted by the Este culture in the 8th century BC; its use subsequently spread to the Italians, Illyrians, and Celts. In the early days of ancient Rome (from the late regal period to the first part of the early republican period), Roman soldiers carried the clipeus, a round shield smaller than the scutum, like those used in the Greek hoplite phalanx. The hoplites were heavy infantrymen who originally bore bronze shields and helmets. The phalanx was a compact, rectangular mass military formation. The soldiers lined up in very tight ranks in a formation that was eight lines deep. The phalanx advanced in unison, which encouraged cohesion among the troops. It formed a shield wall and a mass of spears pointing towards the enemy. Its compactness provided a thrusting force that had a great impact on the enemy and made frontal assaults against it very difficult. However, it worked only if the soldiers kept the formation tight and had the discipline needed to maintain its compactness in the thick of battle. It was a rigid form of fighting and its maneuverability was limited. 
The small shields provided less protection. However, their smaller size afforded more mobility. Their round shape enabled the soldiers to interlock them to hold the line together. Sometime in the early fourth century BC, the Romans changed their military tactics from the hoplite phalanx to the manipular formation, which was much more flexible. This involved a change in military equipment. The scutum replaced the clipeus. Some ancient writers thought that the Romans had adopted the maniples and the scutum when they fought against the Samnites in the first or second Samnite War (343–341 BC, 327–304 BC). However, Livy did not mention the scutum being a Samnite shield and wrote that the oblong shield and the manipular formation were introduced in the early fourth century BC, before the conflicts between the Romans and the Samnites. Plutarch mentioned the use of the long shield in a battle that took place in 366 BC. Couissin notes that archaeological evidence shows the scutum was in general use among Italic peoples long before the Samnite Wars and argues that it was not obtained from the Samnites. In some parts of Italy the scutum had been used since prehistoric times. Polybius gave a description of the early second-century BC scutum. Roman rectangular scuta of later eras were smaller than Republican oval scuta, varying in height from approximately 3 to 3.5 Roman feet (covering the shoulder to the top of the knee) and in width from approximately 2 to 2.7 Roman feet. The oval scutum is depicted on the Altar of Domitius Ahenobarbus in Rome and the Aemilius Paullus monument at Delphi, and an actual example has been found at Kasr el-Harit in Egypt. Gradually the scutum evolved into the rectangular (or sub-rectangular) type of the early Roman Empire. By the end of the 3rd century the rectangular scutum seems to have disappeared. 
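As a rough sanity check, the Roman-foot figures above can be converted to metric. The conversion factor used here (1 pes, a Roman foot, roughly 0.296 m) is an assumed standard approximation, not a value from this article:

```python
# Convert the quoted scutum dimensions from Roman feet (pes) to metres.
# The conversion 1 pes ~= 0.296 m is an assumed standard approximation.
PES_IN_M = 0.296

def roman_feet_to_m(feet: float) -> float:
    return feet * PES_IN_M

height_m = (roman_feet_to_m(3.0), roman_feet_to_m(3.5))  # about 0.89 to 1.04 m
width_m = (roman_feet_to_m(2.0), roman_feet_to_m(2.7))   # about 0.59 to 0.80 m
```

This puts the later rectangular scutum at roughly a metre tall, consistent with a shield covering shoulder to knee.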
Fourth century archaeological finds (especially from the fortress of Dura-Europos) indicate the subsequent use of oval or round shields which were not semi-cylindrical but were either dished (bowl-shaped) or flat. Roman artwork from the end of the 3rd century until the end of Antiquity shows soldiers wielding oval or round shields. The word "scutum" survived the Fall of the Western Empire and remained in the military vocabulary of the Byzantine Empire. Even in the 11th century, the Byzantines called their armoured soldiers skutatoi (Grk. σκυτατοί), and several modern Romance languages use derivatives of the word. Structure The scutum was a large rectangular, curved shield made from three sheets of wood glued together and covered with canvas and leather, usually with a spindle-shaped boss along the vertical length of the shield. The best surviving example, from Dura-Europos in Syria, was high, across, and deep (due to its semicylindrical nature). It is made from strips of wood that are 30 to 80 millimetres (1.2 to 3.1 in) wide and 1.5 to 2 millimetres (0.059 to 0.079 in) thick. These are glued together in three layers, so that the total thickness of the wood is 4.5 to 6 millimetres (0.18 to 0.24 in). It was likely well made and extremely sturdy. Advantages and disadvantages The scutum was light enough to be held in one hand, and its large height and width covered the entire wielder, making him very unlikely to be hit by missile fire or in hand-to-hand combat. The metal boss, or umbo, in the centre of the scutum also made it an auxiliary punching weapon. Its composite construction meant that early versions of the scutum could fail from a heavy cutting or piercing blow, a weakness exposed in the Roman campaigns against Carthage and Dacia, where the falcata and falx could easily penetrate and rip through it. The effects of these weapons prompted design changes that made the scutum more resilient, such as thicker planks and metal edges. 
The aspis, which it replaced, provided less protective coverage than the scutum but was much more durable. Combat uses According to Polybius, the scutum gave Roman soldiers an edge over their Carthaginian enemies during the Punic Wars: "Their arms also give the men both protection and confidence, which they owed to the size of the shield." The Roman writer Suetonius recorded anecdotes of the heroic centurion Cassius Scaeva and legionary Gaius Acilius who fought under Caesar in the Battle of Dyrrachium and the battle of Massilia, respectively: The Roman writer Cassius Dio in his Roman History described Roman against Roman in the Battle of Philippi: "For a long time there was pushing of shield against shield and thrusting with the sword, as they were at first cautiously looking for a chance to wound others without being wounded themselves." The shape of the scutum allowed packed formations of legionaries to overlap their shields to provide an effective barrier against projectiles. The most novel (and specialised, for it afforded negligible protection against other attacks) use was the testudo (Latin for "tortoise"), which added legionaries holding shields from above to protect against descending projectiles (such as arrows, spears, or objects thrown by defenders on walls). Dio gives an account of a testudo put to good use by Marc Antony's men while on campaign in Armenia: However, the testudo was not invincible, as Dio also gives an account of a Roman shield array being defeated by Parthian knights and horse archers at the Battle of Carrhae: Special uses Cassius Dio describes scuta being used to aid an ambush: Dio also notes the use of the scutum as a tool of psychological warfare during the capture of Syracuse: In 27 BC, the emperor Augustus was awarded a golden shield by the senate for his part in ending the civil war and restoring the republic, according to the Res Gestae Divi Augusti. 
The shield, the Res Gestae says, was hung outside the Curia Julia, serving as a symbol of the princeps' "valour, clemency, justice and piety". The 5th century writer Vegetius added that scuta helped in identification: Other uses of the word The name Scutum has been adopted as one of the 88 modern constellations, and by UK luxury clothing maker Aquascutum, which became famous in the 19th century for its waterproof menswear; the name in Latin means "water shield". In zoology, the term scute or scutum is used for a flat and hardened part of the anatomy of an animal, such as the shell of a turtle.
Technology
Armour
null
1094274
https://en.wikipedia.org/wiki/Coelurosauria
Coelurosauria
Coelurosauria (from Greek, meaning "hollow-tailed lizards") is the clade containing all theropod dinosaurs more closely related to birds than to carnosaurs. Coelurosauria is a subgroup of theropod dinosaurs that includes compsognathids, tyrannosaurs, ornithomimosaurs, and maniraptorans; Maniraptora includes birds, the only known dinosaur group alive today. Most feathered dinosaurs discovered so far have been coelurosaurs. Philip J. Currie considered it likely that all coelurosaurs were feathered. However, several skin impressions found for some members of this group show pebbly, scaly skin, indicating that feathers did not completely replace scales in all taxa. In the past, Coelurosauria was used to refer to all small theropods, but this classification has since been amended. Anatomy Body plan The study of anatomical traits in coelurosaurs indicates that the last common ancestor had evolved the ability to eat and digest plant matter, adapting to an omnivorous diet, an ability that could be a major contributor to the clade's success. Some later groups retained this omnivory, while others specialized in various directions, becoming insectivorous (Alvarezsauridae), herbivorous (Therizinosauridae) or carnivorous (Tyrannosauroidea and Dromaeosauridae). The group includes some of the largest (Tyrannosaurus) and smallest (Microraptor, Parvicursor) carnivorous dinosaurs ever discovered. Characteristics that distinguish coelurosaurs include: a sacrum (series of vertebrae that attach to the hips) longer than in other dinosaurs; a tail stiffened towards the tip; a bowed ulna (lower arm bone); and a tibia (lower leg bone) that is longer than the femur (upper leg bone). Integument Fossil evidence shows that the skin of even the most primitive coelurosaurs was covered primarily in feathers. Fossil traces of feathers, though rare, have been found in members of most major coelurosaurian lineages. 
Most coelurosaurs also retained scales and scutes on some portion of their bodies, particularly the feet, though some primitive coelurosaurian species are known to have had scales on the upper legs and portions of the tail as well. These include tyrannosauroids, Juravenator, and Scansoriopteryx. Fossils of at least some of these animals (Scansoriopteryx and possibly Juravenator) also preserve feathers elsewhere on the body. Though once thought to be a feature exclusive to coelurosaurs, feathers or feather-like structures are also known in some ornithischian dinosaurs (like Tianyulong and Kulindadromeus), and in pterosaurs. Though it is unknown whether these are related to true feathers, recent analysis has suggested that the feather-like integument found in ornithischians may have evolved independently of coelurosaurs but this was estimated by assuming that primitive pterosaurs had scales. In 2018, two anurognathid specimens were found to have integumentary structures similar to protofeathers. Based on phylogenetic analysis, protofeathers would have had a common origin with avemetatarsalians. Nervous system and senses Although rare, complete casts of theropod endocrania are known from fossils. Theropod endocrania can also be reconstructed from preserved braincases without damaging valuable specimens by using a computed tomography scan and 3D reconstruction software. These finds are of evolutionary significance because they help document the emergence of the neurology of modern birds from that of earlier reptiles. An increase in the proportion of the brain occupied by the cerebrum seems to have occurred with the advent of the Coelurosauria and "continued throughout the evolution of maniraptorans and early birds." Fossil evidence and age A few fossil traces tentatively associated with the Coelurosauria date as far back as the late Triassic. What has been found between then and the earliest Middle Jurassic is fragmentary. 
The oldest known unambiguous members of Coelurosauria are the proceratosaurid tyrannosauroids Proceratosaurus and Kileskus from the late Middle Jurassic. Many nearly complete fossil coelurosaurians are known from the Late Jurassic. Archaeopteryx (incl. Wellnhoferia) is known from Bavaria at 155-150 Ma. Ornitholestes, the troodontid Hesperornithoides, Coelurus fragilis and Tanycolagreus topwilsoni are all known from the Morrison Formation in Wyoming at about 150 Ma. Epidendrosaurus and Pedopenna are known from the Daohugou Beds in China at about 165-163 Ma. The wide range of fossils in the late Jurassic and morphological evidence shows that coelurosaurian differentiation was virtually complete before the end of the Jurassic. In the early Cretaceous, a superb range of coelurosaurian fossils (including avians) are known from the Yixian Formation in Liaoning. All known theropod dinosaurs from the Yixian Formation are coelurosaurs. Many of the coelurosaurian lineages survived to the end of the Cretaceous period (about 66 Ma) and fossils of some lineages, such as the Tyrannosauroidea, are best known from the late Cretaceous. A majority of coelurosaur groups became extinct in the Cretaceous–Paleogene extinction event, including the Tyrannosauroidea, Ornithomimosauria, Oviraptorosauria, Deinonychosauria, Enantiornithes, and Hesperornithes. Only the Neornithes, otherwise known as modern birds, survived, and continued to diversify after the extinction of the other dinosaurs into the numerous forms found today. There is consensus among paleontologists that birds are descended from coelurosaurs. Under modern cladistical definitions, birds are considered the only living lineage of coelurosaurs. Birds are classified by most paleontologists as belonging to the subgroup Maniraptora. A portion of a tail belonging to a juvenile coelurosaur was found in 2015, inside of a piece of amber. 
Classification The phylogeny and taxonomy of Coelurosauria has been subject to intensive research and revision. For many years, Coelurosauria was a 'dumping ground' for all small theropods. In the 1960s several distinctive lineages of coelurosaurs were recognized, and a number of new infraorders were erected, including the Ornithomimosauria, Deinonychosauria, and Oviraptorosauria. During the 1980s and 1990s, paleontologists began to give Coelurosauria a formal definition, usually as all animals closer to birds than to Allosaurus, or equivalent specifiers. Under this modern definition, many small theropods are not classified as coelurosaurs at all and some large theropods, such as the tyrannosaurids, were actually more advanced than allosaurs and therefore were reclassified as giant coelurosaurs. Even more drastically, the segnosaurs, once not even regarded as theropods, have turned out to be non-carnivorous coelurosaurs related to Therizinosaurus. Senter (2007) listed 59 different published phylogenies since 1984. Those since 2005 have followed almost the same pattern, and differ significantly from many older phylogenies. In 1994, a study by paleontologist Thomas Holtz found a close relationship between the Ornithomimosauria and Troodontidae, and named this group Bullatosauria. Holtz rejected this hypothesis in 1999, and most paleontologists now consider troodontids to be much more closely related to either birds or Dromaeosauridae than they are to ornithomimosaurs, causing the Bullatosauria to be abandoned. The name referred to the inflated (bulbous) sphenoid both groups shared. Holtz defined the group as the clade containing the most recent common ancestor of Troodon and Ornithomimus and all its descendants. The concept is now considered redundant, and the clade Bullatosauria is now viewed as synonymous with Maniraptoriformes. In 2002, Gregory S. 
Paul named an apomorphy-based clade Avepectora, defined to include all theropods with a bird-like arrangement of the pectoral bones, where the angled shoulder girdle (coracoids) come in contact with the breastbone (sternum). According to Paul, ornithomimosaurs are the most basal members of this group. In 2010, Paul used Avepectora for a smaller clade, excluding ornithomimosaurs, compsognathids and alvarezsauroids. Within Coelurosauria exists a slightly less inclusive clade named Tyrannoraptora. This clade was defined by Sereno (1999) as "Tyrannosaurus rex, Passer domesticus (the house sparrow), their last common ancestor, and all of its descendants". As tyrannosauroids are considered to be the most basal large group within Coelurosauria, this means that the common ancestor of tyrannosauroids and birds was an even more basal coelurosaurian. As a result, almost all coelurosaurians are also tyrannoraptorans, with the only exceptions being particularly basal species such as Zuolong salleei or Sciurumimus albersdoerferi. Several recently-named clades have been proposed to define the structure of Coelurosauria crownward of basal groups such as tyrannosauroids and compsognathids. Maniraptoromorpha, defined by Andrea Cau in 2018, includes all coelurosaurians more closely related to birds than to tyrannosauroids. Cau stated that the synapomorphies of the clade included "Keel or carinae in the postaxial cervical centra, absence of hyposphene-hypantra in caudal vertebrae (reversal to the plesiomorphic theropodan condition), a prominent dorsomedial process on the semilunate carpal, a convex ventral margin of the pubic foot, a subrectangular distal end of tibia and a sulcus along the posterior margin of the proximal end of fibula." 
Another proposed clade is Neocoelurosauria, erected by Hendrickx, Mateus, Araújo and Choiniere (2019), who define it as "the clade Compsognathidae + Maniraptoriformes"; it can be more or less inclusive than Maniraptoromorpha depending on the topology. The last and most exclusive of these proposed subclades is Maniraptoriformes, a clade which may have been united by the presence of pennaceous feathers and wings. This clade contains ornithomimosaurs and maniraptorans. The group was named by Thomas Holtz, who defined it as "the most recent common ancestor of Ornithomimus and birds, and all descendants of that common ancestor." One of the possible synapomorphies of this clade is the presence of feathers homologous to those of birds, based on study of a specimen of Shuvuuia. The following family tree illustrates a synthesis of the relationships of the major coelurosaurian groups based on various studies conducted in the 2010s.
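The two kinds of phylogenetic definition used throughout this section can be made concrete with a toy example: node-based clades ("the most recent common ancestor of A and B, and all its descendants", as for Maniraptoriformes) versus branch-based clades ("everything closer to A than to B", as for Coelurosauria itself). The tree topology and helper functions below are a simplified illustration, not taken from the source:

```python
# Toy phylogeny as child -> parent pointers (topology simplified for illustration).
PARENT = {
    "Allosaurus": "Carnosauria",
    "Carnosauria": "Theropoda",
    "Coelurosauria": "Theropoda",
    "Tyrannosaurus": "Coelurosauria",
    "Maniraptoriformes": "Coelurosauria",
    "Ornithomimus": "Maniraptoriformes",
    "Maniraptora": "Maniraptoriformes",
    "Troodon": "Maniraptora",
    "Passer": "Maniraptora",
}
ALL_TAXA = set(PARENT) | set(PARENT.values())

def lineage(taxon):
    """Path from a taxon up to the root, inclusive."""
    path = [taxon]
    while path[-1] in PARENT:
        path.append(PARENT[path[-1]])
    return path

def mrca(a, b):
    """Most recent common ancestor of two taxa."""
    ancestors_a = set(lineage(a))
    return next(n for n in lineage(b) if n in ancestors_a)

def descendants(node):
    return {t for t in ALL_TAXA if node in lineage(t)}

def node_based(a, b):
    """Node-based clade: the MRCA of a and b, and all its descendants."""
    return descendants(mrca(a, b))

def branch_based(internal, external):
    """Branch-based clade: everything closer to `internal` than to `external`."""
    split = mrca(internal, external)
    lin = lineage(internal)
    return descendants(lin[lin.index(split) - 1])

coelurosauria = branch_based("Passer", "Allosaurus")      # includes Tyrannosaurus
maniraptoriformes = node_based("Ornithomimus", "Passer")  # excludes Tyrannosaurus
```

This mirrors how a branch-based Coelurosauria sweeps in tyrannosauroids (they branch off after the split from carnosaurs), while the node-based Maniraptoriformes excludes them.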
Biology and health sciences
Theropods
Animals
1094348
https://en.wikipedia.org/wiki/Index%20finger
Index finger
The index finger (also referred to as forefinger, first finger, second finger, pointer finger, trigger finger, digitus secundus, digitus II, and many other terms) is the second digit of a human hand. It is located between the thumb and the middle finger. It is usually the most dextrous and sensitive digit of the hand, though not the longest. It is shorter than the middle finger, and may be shorter or longer than the ring finger (see digit ratio). Anatomy "Index finger" literally means "pointing finger", from the same Latin source as indicate; its anatomical names are "index finger" and "second digit". The index finger has three phalanges. It does not contain any muscles, but is controlled by muscles in the hand by attachments of tendons to the bones. Uses A lone index finger held vertically is often used to represent the number 1 (but finger counting differs across cultures), or when held up or moved side to side (finger-wagging), it can be an admonitory gesture. With the hand held palm out and the thumb and middle fingers touching, it represents the letter d in the American Sign Language alphabet. Pointing Pointing with the pointer finger may be used to indicate or identify an item, person, place or object. Around age one, babies begin pointing to communicate relatively complex thoughts, including interest, desire, and information. Pointing in human babies can demonstrate the theory of mind, or ability to understand what other people are thinking. This gesture may form one basis for the development of human language. Non-human primates, lacking the ability to formulate ideas about what others are thinking, use pointing in much less complex ways. However, corvids, dogs and elephants do understand finger pointing. In some cultures, particularly the Malays and Javanese in Southeast Asia, pointing using the index finger is considered rude, hence the thumb is used instead. 
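The digit ratio mentioned earlier in this article is simply the length of the index finger divided by the length of the ring finger. A minimal sketch, using made-up example measurements rather than real data:

```python
# 2D:4D digit ratio: index-finger length over ring-finger length.
# The lengths used here are hypothetical example measurements, not data.
def digit_ratio(index_mm: float, ring_mm: float) -> float:
    return index_mm / ring_mm

r = digit_ratio(72.0, 75.0)  # ratio below 1: index finger shorter than ring finger
```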
Index finger in Islam In Islam, raising the index finger signifies the Tawhīd (تَوْحِيد), which denotes the indivisible oneness of God. It is used to express the unity of God ("there is no god but God"). In Arabic, the index or fore finger is called musabbiḥa (مُسَبِّحة), mostly used with the definite article: al-musabbiḥa (الْمُسَبِّحة). Sometimes as-sabbāḥa (السَّبّاحة) is also used. The Arabic verb سَبَّحَ, which shares the same root as the Arabic word for index finger, means to praise or glorify God by saying "Subḥāna Allāh" (سُبْحانَ الله). Index finger in archaeoastronomy Before the advent of the compass and GPS, early humans used the index finger to point out the direction of objects, guided by the stars at night. Gestures in art The index finger pointing up is a sign of teaching authority. This is shown in the depiction of Plato in the School of Athens by Raphael. As a modern artistic convention, the index finger pointing at the viewer is in the form of a command or summons. Two famous examples of this are recruiting posters used during World War I by the United Kingdom and the United States.
Biology and health sciences
Human anatomy
Health
1094925
https://en.wikipedia.org/wiki/Minibus
Minibus
A minibus, microbus, or minicoach is a passenger-carrying motor vehicle that is designed to carry more people than a multi-purpose vehicle or minivan, but fewer people than a full-size bus. In the United Kingdom, the word "minibus" is used to describe any full-sized passenger-carrying van or panel truck. Minibuses have a seating capacity of between 12 and 30; larger minibuses may be called midibuses. Minibuses are typically front-engine step-in vehicles, although low-floor minibuses are particularly common in Japan. History It is unknown when the first minibus was developed. Ford Model T vehicles, for example, were modified for passenger transport by early bus companies and entrepreneurs, and Ford produced a version during the 1920s to carry up to twelve people. In the Soviet Union, the production of minibuses began in the mid-1950s; among the first mass-produced minibuses were the RAF-10, UAZ-451B, and Start. From September 1961, the RAF-977D "Latvia" minibus was mass-produced. Regional variants There are many different forms of public transportation service around the world that are provided using vehicles that can be considered minibuses: Angkot in Indonesia; Bas Mini in Malaysia; Chiva bus in Colombia and Ecuador; Colectivo in southern South America; Community bus (Japanese コミュニティバス komiunitibasu) in Japan, including minibuses and midibuses; Dala dala in Tanzania; Dollar vans (a.k.a. jitneys) in the United States; in Turkey; Modern Jeepney in the Philippines; Maeul-bus (Korean 마을버스) in South Korea; Marshrutka in Eastern Europe; Matatu in Kenya; Minibus taxi in South Africa and Ethiopia (see also Taxi wars in South Africa); Pesero, minibuses operating as regular buses in Mexico, especially in Mexico City; Public light buses in Hong Kong. 
Sherut in Israel; Songthaew in Thailand and Laos; Tap tap in Haiti; Tro tro in Ghana; Weyala in Ethiopia. Driving licence Some countries may require an additional class of driving licence beyond a normal private car licence, and some may require a full commercial driving licence. The need for such a licence may depend on: vehicle weight or size; seating capacity; driver age; intended usage; or additional training (such as the Minibus Driver Awareness Scheme in the UK). In the UK, the holder of an ordinary car driving licence obtained before January 1997 may, once aged at least 21, drive a minibus with a capacity of up to 16 passengers. Where the ordinary car driving licence was obtained after December 1996, the holder must take a separate test to drive a vehicle with a capacity of more than 8 passengers. However, there is an exemption for certain volunteer drivers, where the vehicle does not exceed 3500 kg GVW (or 4250 kg GVW if the vehicle is designed to be wheelchair accessible). A driving licence issued in Ontario, Canada, equivalent to a UK class B or class B-auto driving licence (in the case of Ontario, a class G licence), allows its holder to drive vehicles with: 11 tonnes maximum authorized mass, including trailers with 4.6 tonnes MAM (6 tonnes MAM in certain cases); and a passenger seating capacity of 9 or fewer. Anyone wanting to drive a vehicle in Ontario, with the same MAM limits as for class G vehicles, with fewer than 25 but at least 10 passenger seats, must obtain a bus licence. This will allow, for example, its holder to drive 12- and 15-passenger vans that Transport Canada defines as large passenger vans.
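The UK entitlement rules summarised above can be sketched as a small decision function. This is a simplification for illustration only (the function name and parameters are invented, and it is not licensing guidance):

```python
# Sketch of the UK minibus entitlement rules described above (simplified).
def may_drive_minibus_uk(licence_year: int, age: int, passenger_seats: int,
                         volunteer: bool = False, gvw_kg: float = 0,
                         wheelchair_accessible: bool = False) -> bool:
    if passenger_seats <= 8:
        return True            # an ordinary car licence already covers this
    if passenger_seats > 16:
        return False           # beyond minibus capacity; a bus licence is needed
    if licence_year < 1997:
        return age >= 21       # pre-1997 licences keep the 16-passenger entitlement
    # Post-1996 licences need a separate test, except certain volunteer drivers:
    gvw_limit = 4250 if wheelchair_accessible else 3500
    return volunteer and gvw_kg <= gvw_limit
```

For example, a driver licensed in 1995 and aged 30 may drive a 16-seat minibus, while one licensed in 1998 may not without a further test unless the volunteer exemption applies.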
Technology
Motorized road transport
null
1094982
https://en.wikipedia.org/wiki/Orion%27s%20Belt
Orion's Belt
Orion's Belt is an asterism in the constellation of Orion. Other names include the Belt of Orion, the Three Kings, and the Three Sisters. The belt consists of three bright and easily identifiable collinear star systems – Alnitak, Alnilam, and Mintaka – nearly equally spaced in a line, spanning an angular size of about 2.3°. Owing to the high surface temperatures of their constituent stars, the intense light emitted is blue-white in color. In spite of their point-like appearance, only Alnilam is a single star; Alnitak is a triple star system, and Mintaka a sextuple. All three owe their luminosity to the presence of one or more blue supergiants. The brightest as viewed from Earth is Alnilam, with an apparent magnitude of 1.69, followed by Alnitak at 1.74 and Mintaka at 2.25. The ten stars of the three systems have a combined luminosity approximately 970,000 times that of the Sun. Orion's Belt appears widely in historical literature and in various cultures, under many different names. It has played a central role in celestial navigation in the Northern Hemisphere since prehistoric times. It is considered to be among the clearest asterisms in the winter sky, although it is not visible during summer, when the Sun is too visually close. The discredited archaeological Orion correlation theory postulated a connection between the positions of the Giza pyramids and those of the belt, with the linkage shown to be spurious when placed within the proper historical context. Belt features The names of the three stars that comprise the belt derive from Arabic. All three were once known by a single Arabic name meaning "string of pearls"; Knobel suggested that its various spellings arose from mistakes in transliteration or copying errors. Alnitak Alnitak (ζ Orionis) is a triple star system at the eastern end of Orion's belt, 1,260 light-years from the Earth. Alnitak B is a 4th-magnitude B-type star which orbits Alnitak A every 1,500 years. 
The primary (Alnitak A) is itself a close binary, comprising Alnitak Aa (a blue supergiant of spectral type O9.7 Ibe and an apparent magnitude of 2.0) and Alnitak Ab (a blue subgiant of spectral type B1IV and an apparent magnitude of about 4). Alnitak Aa is estimated to be up to 28 times as massive as the Sun and to have a diameter 20 times greater. It is the brightest star of class O in the night sky. Alnilam Alnilam (ε Orionis) is a single B0 supergiant, approximately 2,000 light-years away from Earth, with magnitude 1.69. It is the 29th-brightest star in the sky and the fourth-brightest in Orion. It is 375,000 times more luminous than the Sun. Its spectrum serves as one of the stable anchor points by which other stars are classified. Mintaka Mintaka (δ Orionis) is a six-star system at the western end of the Belt, and the star system closest to the celestial equator. It is the nearest massive multiple stellar system, composed of three spectroscopic components. The most luminous individual star is an O9.5 II blue giant. Together, the system has a combined luminosity of roughly 250,000 times that of the Sun. Mintaka is 1,200 light-years distant, with a visual magnitude of 2.25. The innermost binary has a period of 5.732 days and a semi-major axis of approximately 32 million kilometers (0.22 AU), with the two massive stars eclipsing each other twice per orbit as viewed from Earth, producing regular minor dips in brightness.
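Two of the figures above invite quick back-of-envelope checks: apparent magnitudes are logarithmic, so the belt's combined brightness is found by summing fluxes rather than magnitudes, and Kepler's third law applied to the quoted inner-binary orbit of Mintaka gives a rough total mass for the eclipsing pair. The sketch below uses only values quoted in the text; the results are illustrative order-of-magnitude checks:

```python
import math

# Combined apparent magnitude of the belt (Alnilam 1.69, Alnitak 1.74, Mintaka 2.25):
# fluxes add, magnitudes do not, so convert each via flux = 10 ** (-0.4 * m).
mags = [1.69, 1.74, 2.25]
total_flux = sum(10 ** (-0.4 * m) for m in mags)
combined_mag = -2.5 * math.log10(total_flux)   # ~0.67, brighter than any one member

# Kepler's third law on Mintaka's inner binary (a = 0.22 AU, P = 5.732 days):
# M_total [solar masses] = a**3 / P**2, with a in AU and P in years.
period_years = 5.732 / 365.25
total_mass = 0.22 ** 3 / period_years ** 2     # ~43 solar masses for the pair
```

A combined mass of a few tens of solar masses is consistent with the pair being two massive O/B stars.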
Physical sciences
Asterism
Astronomy
1095283
https://en.wikipedia.org/wiki/Blood%20agent
Blood agent
A blood agent is a toxic chemical agent that affects the body by being absorbed into the blood. Blood agents are fast-acting, potentially lethal poisons that typically manifest at room temperature as volatile colorless gases with a faint odor. They are either cyanide- or arsenic-based. Exposure Blood agents work through inhalation or ingestion. As chemical weapons, blood agents are typically disseminated as aerosols and take effect through inhalation. Due to their volatility, they are more toxic in confined areas than in open areas. Cyanide compounds occur in small amounts in the natural environment and in cigarette smoke. They are also used in several industrial processes and as pesticides. Cyanides are released when synthetic fabrics or polyurethane burn, and may thus contribute to fire-related deaths. Arsine gas, formed when arsenic encounters an acid, is used as a pesticide and in the semiconductor industry; most exposures to it occur accidentally in the workplace. Symptoms The symptoms of blood agent poisoning depend on concentration and duration. Cyanide-based blood agents irritate the eyes and the respiratory tract, while arsine is nonirritating. Hydrogen cyanide has a faint, bitter, almond odor that only about half of all people can smell. Arsine has a very faint garlic odor detectable only at greater than fatal concentrations. Exposure to small amounts of cyanide has no effect. Higher concentrations cause dizziness, weakness and nausea, which cease with the exposure, but long-time exposure can cause mild symptoms followed by permanent brain damage and muscle paralysis. Moderate exposure causes stronger and longer-lasting symptoms, including headache, that can be followed by convulsions and coma. Stronger or longer exposure will also lead to convulsions and coma. Very strong exposure causes severe toxic effects within seconds, and rapid death. 
The blood of people killed by blood agents is bright red, because the agents inhibit the use of the oxygen in it by the body's cells. Cyanide poisoning can be detected by the presence of thiocyanate or cyanide in the blood, a smell of bitter almonds, or respiratory tract inflammations and congestions in the case of cyanogen chloride poisoning. There is no specific test for arsine poisoning, but it may leave a garlic smell on the victim's breath. Effects At sufficient concentrations, blood agents can quickly saturate the blood and cause death in a matter of minutes or seconds. They cause powerful gasping for breath, violent convulsions and a painful death that can take several minutes. The immediate cause of death is usually respiratory failure. Blood agents work at the cellular level by preventing the exchange of oxygen and carbon dioxide between the blood and the body's cells. This causes the cells to suffocate from lack of oxygen. Cyanide-based agents do so by interrupting the electron transport chain in the inner membranes of mitochondria. Arsine damages the red blood cells which deliver oxygen throughout the body. Detection and countermeasures Chemical detection methods, in the form of kits or testing strips, exist for hydrogen cyanide. Ordinary clothing provides some protection, but proper protective clothing and masks are recommended. Mask filters containing only charcoal are ineffective, and effective filters are quickly saturated. Due to their high volatility, cyanide agents generally need no decontamination. In enclosed areas, fire extinguishers spraying sodium carbonate can decontaminate hydrogen cyanide, but the resulting metal salts remain poisonous on contact. Liquid hydrogen cyanide can be flushed with water. Cyanide poisoning can be treated with antidotes. List of blood agents The information in the following table, which lists blood agents of military significance, is taken from Ledgard. The values given are on a scale from 1 to 10. 
Sodium cyanide and potassium cyanide, colorless crystalline compounds similar in appearance to sugar, also act as blood agents. Carbon monoxide could technically be called a blood agent because it binds with oxygen-carrying hemoglobin in the blood (see carbon monoxide poisoning), but its high volatility makes it impractical as a chemical warfare agent. One of the earliest proposed chemical weapons, cacodyl oxide, or Cadet's fuming liquid, also displays properties of a blood agent (as well as those of a malodorant). It was proposed as a chemical weapon in the British Empire during the Crimean War, along with the significantly more potent blood agent, cacodyl cyanide. Use The most significant practical application of blood agents was the use of hydrogen cyanide (Zyklon B) in gas chambers by Nazi Germany to commit the mass murder of Jews and others in the course of the Holocaust. This resulted in the largest death toll as a result of the use of chemical agents to date.
Technology
Weapon of mass destruction
null
1095698
https://en.wikipedia.org/wiki/Mill%20%28grinding%29
Mill (grinding)
A mill is a device, often a structure, machine or kitchen appliance, that breaks solid materials into smaller pieces by grinding, crushing, or cutting. Such comminution is an important unit operation in many processes. There are many different types of mills and many types of materials processed in them. Historically, mills were powered by hand (e.g., via a hand crank), by working animal (e.g., horse mill), by wind (windmill) or by water (watermill). In the modern era, they are usually powered by electricity. The grinding of solid materials occurs through mechanical forces that break up the structure by overcoming the interior bonding forces. After grinding, the state of the solid is changed: the grain size, the grain size distribution and the grain shape. Milling also refers to the process of breaking down, separating, sizing, or classifying aggregate material (e.g. mining ore), for instance rock crushing or grinding to produce uniform aggregate size for construction purposes, or separation of rock, soil or aggregate material for the purposes of structural fill or land reclamation activities. Aggregate milling processes are also used to remove or separate contamination or moisture from aggregate or soil and to produce "dry fills" prior to transport or structural filling. Grinding may serve the following purposes in engineering: increasing the surface area of a solid, manufacturing a solid with a desired grain size, and pulping of resources. Grinding laws In spite of a great number of studies in the field of fracture schemes, no formula is known which connects the technical grinding work with grinding results. 
Mining engineers Peter von Rittinger, Friedrich Kick and Fred Chester Bond independently produced equations relating the grinding work needed to the grain size produced. A fourth engineer, R. T. Hukki, suggested that these three equations might each describe a narrow range of grain sizes, and proposed uniting them along a single curve describing what has come to be known as the Hukki relationship. In stirred mills, the Hukki relationship does not apply; instead, experimentation has to be performed to determine any relationship. To evaluate the grinding results, the grain size distribution of the source material (1) and of the ground material (2) is needed. The grinding degree is the ratio of sizes taken from the two grain size distributions. There are several definitions for this characteristic value: Grinding degree referring to grain size d80 Instead of d80, d50 or another grain diameter can also be used. Grinding degree referring to specific surface The specific surface area referring to volume Sv and the specific surface area referring to mass Sm can be determined experimentally. Pretended grinding degree The discharge die gap a of the grinding machine is used for the ground solid matter in this formula. Grinding machines In materials processing, a grinder is a machine for producing fine particle size reduction through attrition and compressive forces at the grain size level.
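As a sketch in standard comminution notation (the symbols E for specific grinding energy, x for particle size, and C for a material constant are conventional shorthand, not taken from the text above), the three laws can be seen as special cases of one differential form, which is the basis of the Hukki relationship:

```latex
% Common differential form (Hukki / Walker): energy needed per unit size reduction
\frac{\mathrm{d}E}{\mathrm{d}x} = -\,C\,x^{-n}

% Kick's law (n = 1), applicable to coarse crushing:
E = C \ln\frac{x_1}{x_2}

% Rittinger's law (n = 2), applicable to fine grinding (new surface area):
E = C\left(\frac{1}{x_2} - \frac{1}{x_1}\right)

% Bond's law (n = 3/2), usually written with the work index W_i and the
% 80%-passing sizes of feed (F_{80}) and product (P_{80}) in micrometres:
E = 10\,W_i\left(\frac{1}{\sqrt{P_{80}}} - \frac{1}{\sqrt{F_{80}}}\right)

% Grinding degree referring to grain size d80, as the ratio of the
% 80%-passing sizes of source material (1) and ground material (2):
Z = \frac{d_{80,1}}{d_{80,2}}
```

Hukki's observation was that each of these exponents n fits experimental data only over a limited size range, so the three laws plot as segments of one continuous curve of energy versus product size.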
Technology
Industrial machinery
null
4361469
https://en.wikipedia.org/wiki/Passive%20margin
Passive margin
A passive margin is the transition between oceanic and continental lithosphere that is not an active plate margin. A passive margin forms by sedimentation above an ancient rift, now marked by transitional lithosphere. Continental rifting forms new ocean basins. Eventually the continental rift forms a mid-ocean ridge and the locus of extension moves away from the continent-ocean boundary. The transition between the continental and oceanic lithosphere that was originally formed by rifting is known as a passive margin. Global distribution Passive margins are found at every ocean and continent boundary that is not marked by a strike-slip fault or a subduction zone. Passive margins define the region around the Arctic Ocean, Atlantic Ocean, and western Indian Ocean, and define the entire coasts of Africa, Australia, Greenland, and the Indian Subcontinent. They are also found on the east coast of North America and South America, in Western Europe and most of Antarctica. Northeast Asia also contains some passive margins. Key components Active vs. passive margins The distinction between active and passive margins refers to whether a crustal boundary between oceanic lithosphere and continental lithosphere is a plate boundary. Active margins are found on the edge of a continent where subduction occurs. These are often marked by uplift and volcanic mountain belts on the continental plate. Less often there is a strike-slip fault, as defines the southern coastline of West Africa. Most of the eastern Indian Ocean and nearly all of the Pacific Ocean margin are examples of active margins. While a weld between oceanic and continental lithosphere is called a passive margin, it is not an inactive margin. Active subsidence, sedimentation, growth faulting, pore fluid formation and migration are all active processes on passive margins. Passive margins are only passive in that they are not active plate boundaries. 
Morphology Passive margins consist of both an onshore coastal plain and an offshore continental shelf-slope-rise triad. Coastal plains are often dominated by fluvial processes, while the continental shelf is dominated by deltaic and longshore current processes. The great rivers (the Amazon, Orinoco, Congo, Nile, Ganges, Yellow, Yangtze, and Mackenzie) drain across passive margins. Extensive estuaries are common on mature passive margins. Although there are many kinds of passive margins, the morphologies of most are remarkably similar. Typically they consist of a continental shelf, continental slope, continental rise, and abyssal plain. The morphological expression of these features is largely defined by the underlying transitional crust and the sedimentation above it. Passive margins defined by a large fluvial sediment budget and those dominated by coral and other biogenous processes generally have a similar morphology. In addition, the shelf break seems to mark the maximum Neogene lowstand, defined by the glacial maxima. The outer continental shelf and slope may be cut by great submarine canyons, which mark the offshore continuation of rivers. At high latitudes and during glaciations, the nearshore morphology of passive margins may reflect glacial processes, such as the fjords of Greenland and Norway. Cross-section The main features of passive margins lie beneath these surface features. Beneath passive margins, the transition between the continental and oceanic crust is a broad zone known as transitional crust. The subsided continental crust is marked by normal faults that dip seaward. The faulted crust transitions into oceanic crust and may be deeply buried due to thermal subsidence and the mass of sediment that collects above it. The lithosphere beneath passive margins is known as transitional lithosphere; it thins as it transitions seaward to oceanic lithosphere. 
Different kinds of transitional crust form depending on how fast rifting occurred and how hot the underlying mantle was at the time of rifting. Volcanic passive margins represent one endmember transitional crust type; the other endmember (amagmatic) type is the rifted passive margin. Volcanic passive margins are also marked by numerous dykes and igneous intrusions within the subsided continental crust. Many dykes form perpendicular to the seaward-dipping lava flows and sills. Igneous intrusions within the crust cause lava flows along the top of the subsided continental crust and form seaward-dipping reflectors. Subsidence mechanisms Passive margins are characterized by thick accumulations of sediments. Space for these sediments is called accommodation and is due to subsidence, especially of the transitional crust. Subsidence is ultimately caused by the gravitational equilibrium that is established between the crustal tracts, known as isostasy. Isostasy controls the uplift of the rift flank and the subsequent subsidence of the evolving passive margin, and is mostly reflected by changes in heat flow. Heat flow at passive margins changes significantly over their lifespan: high at the beginning and decreasing with age. In the initial stage, the continental crust and lithosphere are stretched and thinned due to plate movement (plate tectonics) and associated igneous activity. The very thin lithosphere beneath the rift allows the upwelling mantle to melt by decompression. Lithospheric thinning also allows the asthenosphere to rise closer to the surface, heating the overlying lithosphere by conduction and by advection of heat through intrusive dykes. Heating reduces the density of the lithosphere and elevates the lower crust and lithosphere. In addition, mantle plumes may heat the lithosphere and cause prodigious igneous activity. 
Once a mid-oceanic ridge forms and seafloor spreading begins, the original site of rifting is separated into conjugate passive margins (for example, the eastern US and NW African margins were parts of the same rift in early Mesozoic time and are now conjugate margins), which migrate away from the zone of mantle upwelling and heating, and cooling begins. The mantle lithosphere below the thinned and faulted continent-ocean transition cools, thickens, increases in density and thus begins to subside. The accumulation of sediments above the subsiding transitional crust and lithosphere further depresses the transitional crust. Classification Four different perspectives are needed to classify passive margins: map-view formation geometry (rifted, sheared, and transtensional); nature of the transitional crust (volcanic and non-volcanic); whether the transitional crust represents a continuous change from normal continental to normal oceanic crust or includes isolated rifts and stranded continental blocks (simple and complex); and sedimentation (carbonate-dominated, clastic-dominated, or sediment-starved). The first describes the relationship between rift orientation and plate motion, the second the nature of the transitional crust, the third its continuity, and the fourth post-rift sedimentation. All four perspectives need to be considered in describing a passive margin. In fact, passive margins are extremely long and vary along their length in rift geometry, nature of transitional crust, and sediment supply; it is therefore more appropriate to subdivide individual passive margins into segments on this basis and apply the fourfold classification to each segment. Geometry of passive margins Rifted margin This is the typical way that passive margins form, as separated continental tracts move perpendicular to the coastline. This is how the Central Atlantic opened, beginning in Jurassic time. Faulting tends to be listric: normal faults that flatten with depth. 
Sheared margin Sheared margins form where continental breakup was associated with strike-slip faulting. A good example of this type of margin is found on the south-facing coast of West Africa. Sheared margins are highly complex and tend to be rather narrow. They also differ from rifted passive margins in structural style and thermal evolution during continental breakup. As the seafloor spreading axis moves along the margin, thermal uplift produces a ridge. This ridge traps sediments, thus allowing thick sequences to accumulate. These types of passive margins are less volcanic. Transtensional margin This type of passive margin develops where rifting is oblique to the coastline, as is now occurring in the Gulf of California. Nature of transitional crust Transitional crust, separating true oceanic and continental crusts, is the foundation of any passive margin. It forms during the rifting stage and has two endmembers: volcanic and non-volcanic. This classification scheme only applies to rifted and transtensional margins; the transitional crust of sheared margins is very poorly known. Non-volcanic rifted margin Non-volcanic margins form when extension is accompanied by little mantle melting and volcanism. Non-volcanic transitional crust consists of stretched and thinned continental crust. Non-volcanic margins are typically characterized by continentward-dipping seismic reflectors (rotated crustal blocks and associated sediments) and low P wave velocities (<7.0 km/s) in the lower part of the transitional crust. Volcanic rifted margin Volcanic margins form part of large igneous provinces, which are characterized by massive emplacements of mafic extrusive and intrusive rocks over very short time periods. Volcanic margins form when rifting is accompanied by significant mantle melting, with volcanism occurring before and/or during continental breakup. 
The transitional crust of volcanic margins is composed of basaltic igneous rocks, including lava flows, sills, dykes, and gabbro. Volcanic margins are usually distinguished from non-volcanic (or magma-poor) margins (e.g. the Iberian margin, the Newfoundland margin), which do not contain large amounts of extrusive and/or intrusive rocks and may exhibit crustal features such as unroofed, serpentinized mantle. Volcanic margins differ from magma-poor margins in a number of ways: a transitional crust composed of basaltic igneous rocks, including lava flows, sills, dykes, and gabbros; a huge volume of basalt flows, typically expressed as seaward-dipping reflector sequences (SDRS) rotated during the early stages of crustal accretion (the breakup stage); the presence of numerous sill, dyke, and vent complexes intruding into the adjacent basin; the lack of significant passive-margin subsidence during and after breakup; and the presence of a lower crust with anomalously high seismic P wave velocities (Vp = 7.1–7.8 km/s), referred to as lower crustal bodies (LCBs) in the geologic literature. The high velocities (Vp > 7 km/s) and large thicknesses of the LCBs support the case for plume-fed accretion (mafic thickening) underplating the crust during continental breakup. LCBs are located along the continent-ocean transition but can sometimes extend beneath the continental part of the rifted margin (as observed in the mid-Norwegian margin, for example). In the continental domain, there is still open discussion on their real nature, chronology, and geodynamic and petroleum implications. Examples of volcanic margins include the Yemen margin, the East Australian margin, the West Indian margin, the Hatton-Rockall margin, the U.S. East Coast, the mid-Norwegian margin, the Brazilian margins, the Namibian margin, the East Greenland margin, and the West Greenland margin. Examples of non-volcanic margins include the Newfoundland margin, the Iberian margin, and the margins of the Labrador Sea (Labrador and Southwest Greenland). Heterogeneity of transitional crust Simple transitional crust Passive margins of this type show a simple progression through the transitional crust, from normal continental to normal oceanic crust. The passive margin offshore Texas is a good example. Complex transitional crust This type of transitional crust is characterized by abandoned rifts and continental blocks, such as the Blake Plateau, Grand Banks, or Bahama Islands offshore eastern Florida. Sedimentation A fourth way to classify passive margins is according to the nature of sedimentation on the mature passive margin. Sedimentation continues throughout the life of a passive margin. It changes rapidly and progressively during the initial stages of passive margin formation, because rifting begins on land and becomes marine as the rift opens and a true passive margin is established. Consequently, the sedimentation history of a passive margin begins with fluvial, lacustrine, or other subaerial deposits and evolves with time depending on how the rifting occurred and on how, when, and by what type of sediment the margin was supplied. Constructional Constructional margins are the "classic" mode of passive margin sedimentation. Normal sedimentation results from the transport and deposition of sand, silt, and clay by rivers via deltas and the redistribution of these sediments by longshore currents. The nature of sediments can change remarkably along a passive margin, due to interactions between carbonate sediment production, clastic input from rivers, and alongshore transport. Where clastic sediment inputs are small, biogenic sedimentation can dominate, especially nearshore. 
The Gulf of Mexico passive margin along the southern United States is an excellent example of this, with muddy and sandy coastal environments downcurrent (west) from the Mississippi River Delta and beaches of carbonate sand to the east. The thick layers of sediment gradually thin with increasing distance offshore, depending on the subsidence of the passive margin and the efficacy of offshore transport mechanisms such as turbidity currents and submarine channels. Development of the shelf edge and its migration through time is critical to the development of a passive margin. The location of the shelf edge break reflects a complex interaction between sedimentation, sea level, and the presence of sediment dams. Coral reefs serve as bulwarks that allow sediment to accumulate between them and the shore, cutting off sediment supply to deeper water. Another type of sediment dam results from the presence of salt domes, as are common along the Texas and Louisiana passive margin. Starved Sediment-starved margins produce narrow continental shelves and passive margins. This is especially common in arid regions, where there is little transport of sediment by rivers or redistribution by longshore currents. The Red Sea is a good example of a sediment-starved passive margin. Formation There are three main stages in the formation of passive margins: In the first stage, a continental rift is established due to stretching and thinning of the crust and lithosphere by plate movement. This is the beginning of continental crust subsidence. Drainage is generally away from the rift at this stage. The second stage leads to the formation of an oceanic basin, similar to the modern Red Sea. The subsiding continental crust undergoes normal faulting as transitional marine conditions are established. Areas with restricted sea water circulation coupled with an arid climate create evaporite deposits. Crust and lithosphere stretching and thinning are still taking place in this stage. 
Volcanic passive margins also acquire igneous intrusions and dykes during this stage. The last stage of formation occurs only when crustal stretching ceases and the transitional crust and lithosphere subside as a result of cooling and thickening (thermal subsidence). Drainage starts flowing towards the passive margin, causing sediment to accumulate over it. Economic significance Passive margins are important exploration targets for petroleum. Mann et al. (2001) classified 592 giant oil fields into six basin and tectonic-setting categories, and noted that continental passive margins account for 31% of giants. Continental rifts (which are likely to evolve into passive margins with time) contain another 30% of the world's giant oil fields. Basins associated with collision zones and subduction zones are where most of the remaining giant oil fields are found. Passive margins are petroleum storehouses because they are associated with favorable conditions for the accumulation and maturation of organic matter. Early continental rifting conditions led to the development of anoxic basins, large sediment and organic flux, and the preservation of organic matter that led to oil and gas deposits; crude oil forms from these deposits. These are the localities in which petroleum resources are most profitable and productive. Productive fields are found in passive margins around the globe, including the Gulf of Mexico, western Scandinavia, and Western Australia. Law of the Sea International discussions about who controls the resources of passive margins are the focus of law of the sea negotiations. Continental shelves are important parts of national exclusive economic zones, important for seafloor mineral deposits (including oil and gas) and fisheries.
Physical sciences
Tectonics
Earth science
4363193
https://en.wikipedia.org/wiki/Samanea%20saman
Samanea saman
Samanea saman is a species of flowering tree in the pea family, Fabaceae (now in the mimosoid clade), native to Central and South America. It is often placed in the genus Samanea, which other authors subsume entirely within Albizia. Its range extends from Mexico south to Peru and Brazil, but it has been widely introduced to South and Southeast Asia, as well as the Pacific Islands, including Hawaii. It is a well-known tree, rivaled perhaps only by lebbeck and pink siris among its genus. It is well represented in many languages and has numerous local names in its native range; common English names include saman, rain tree and monkeypod (see also below). In Cambodia it is colloquially known as the Chankiri Tree. Description Tree Saman is a wide-canopied tree with a large symmetrical umbrella-shaped crown. It usually reaches a height of and a diameter of . Its branches have velvety and hairy bark. Large branches of the tree tend to break off, particularly during rainstorms. This can be hazardous, as the tree is very commonly used for avenue plantation. A rain tree leaf is pinnate, made of 6–16 leaflets; each leaflet is shaped like a diamond, long and wide, with a dull top surface and finely hairy beneath. The tree sheds its leaves for a while during dry periods. Its crown is big and can provide shade, but allows rain to fall through onto the ground beneath it. The leaves fold in rainy weather and in the evenings, hence the names rain tree and five o'clock tree. Flowers and seeds The tree has pinkish flowers with white and red stamens, set on heads of around 12–25 flowers each. These heads may number in the thousands, covering the whole tree. The seed pods of the tree are curved and leathery; they contain sticky, edible flesh covering the flat, oval seeds. 
Names In English it is usually known as rain tree or saman. It is also known as "monkey pod", "giant thibet", "inga saman", "cow tamarind", East Indian walnut, "soar", or "suar". In English-speaking regions of the Caribbean, it is known as coco tamarind in Grenada; French tamarind in Guyana; and samaan tree in Trinidad. In Philippine English, it is confusingly known simply as "acacia", due to its resemblance to native Acacia species. The original name, saman (known in many languages and used for the specific epithet), derives from zamang, meaning "Mimosoideae tree" in some Cariban languages of northern Venezuela. This name is also where its genus name Samanea comes from. The origin of the name "rain tree" is unknown. It has been variously attributed to the local names ki hujan or pokok hujan ("rain tree") in Indonesia and Malaysia, because its leaves fold during rainy days (allowing rain to fall through the tree); to the relative abundance of grass under the tree in comparison to surrounding areas; to the steady drizzle of honeydew-like discharge from cicadas feeding on the leaves; to the occasional shower of sugary secretions from the nectaries on the leaf petioles; and to the shedding of stamens during heavy flowering. In the Caribbean, it is sometimes known as marsave. It is also known as algarrobo in Cuba; guannegoul(e) in Haiti; and goango or guango in Jamaica. In French-speaking islands, it is known as gouannegoul or saman. In Latin America, it is variously known as samán, cenízaro, cenicero, genízaro, carreto, carreto negro, delmonte, dormilón, guannegoul, algarrobo del país, algarrobo, campano, carabeli, couji, lara, urero, or zarza in Spanish; and chorona in Portuguese. In the Pacific Islands, it is known as filinganga in the Northern Marianas; trongkon-mames in Guam; gumorni spanis in Yap; kasia kula or mohemohe in Tonga; marmar in New Guinea; ohai in Hawaii; tamalini or tamaligi in Samoa; and vaivai ni vavalangi, vaivai moce or sirsa in Fiji. 
The former comes from vaivai "watery" (in allusion to the tree's "rain") + vavalagi "foreign". In some parts of Vanua Levu, Fiji the word vaivai is used to describe the lebbeck, because of the sound the seedpods make, and the word mocemoce (sleepy, or sleeping) is used for A. saman due to the 'sleepiness' of its leaves. In Southeast Asia, it is known as akasya or palo de China in the Philippines; meh or trembesi in Indonesia; pukul lima ("five o'clock tree") in Malaysia and Singapore; ampil barang ("Western tamarind") in Cambodia; ก้ามปู (kampu), ฉำฉา (chamcha), จามจุรีแดง (chamchuri daeng), จามจุรี (chamchuri) in Thai; ကုက္ကို (kokko) in Myanmar; and còng, muồng tím, or cây mưa ("rain tree") in Vietnam. In South Asia, it is known as shiriisha in Sanskrit; শিরীষ (shirish) in Bengali; shirish in Gujarati; सीरस (vilaiti siris) in Hindi; bagaya mara in Kannada; ചക്കരക്കായ്‌ മരം (chakkarakkay maram) in Malayalam; विलायती शिरीश in Marathi; මාර (māra) in Sinhalese; தூங்குமூஞ்சி மரம் (thoongu moonji maram, "sleepy faced tree") in Tamil; and నిద్ర గన్నేరు (nidra ganneru) in Telugu. In Madagascar, it is also known as bonara (mbaza), kily vazaha, madiromany, mampihe, or mampohehy. In European regions where the tree does not usually grow, its names are usually direct translations of "rain tree". These include arbre à (la) pluie (France), árbol de lluvia (Spain); and Regenbaum (Germany). Use The edible fruit pulp can be made into a beverage that tastes like lemons; the pulp is also an additive to gasoline. Its wood is used for carving and making furniture. The "Samanea saman" tree is one of several types of host plants that allows lac insects (Kerria lacca) infestation. The resultant copious sap/insect discharge caused by this insect is a harden material that is subsequently collected and processed into lac/shellac and used in making lacquerware and wood finishes. Raintrees around the world In Cambodia It is unclear when and how Chankiri was introduced to Cambodia. 
It is possible the tree was introduced from Brazil by the French in the 1920s, together with the rubber tree (Hevea brasiliensis), during the rubber industry's global boom in the early 1900s. It is also possible the tree came from neighboring countries in the region, where the plant had been introduced earlier by Western colonial explorers. Since its introduction to Cambodia, Samanea saman has been known locally as chankiri (ចន្ទគិរី). It has been widely planted across the country as an ornamental, thanks to its tall height and expansive branches that can shade large areas. The fruit is eaten, and in famine times the young leaves are eaten in salads. Chankiri is the official Khmer name for the plant because the flowers from this tree resemble the beautiful long-haired tail of the animal known in English as the yak. French tamarind is another colloquial name for it in Cambodia. Chankiri Trees in the Killing Fields Multiple chankiri trees can be found in the Killing Fields, an execution site used by the Khmer Rouge during the Cambodian genocide, though the trees were planted at the field long before. Children and infants whose parents were accused of crimes against the regime were smashed against trees, in the hope that the children "wouldn't grow up and take revenge for their parents' deaths". It is a coincidence that the chankiri at the Killing Fields is among the trees against which the Khmer Rouge executioners beat young children; there are no specific local associations between the chankiri tree and the Khmer Rouge. In Venezuela When Alexander von Humboldt travelled in the Americas from 1799 to 1804, he encountered a giant saman tree near Maracay, Venezuela. 
He measured the circumference of the parasol-shaped crown at 576 ft (about 180.8 m); its diameter was around 190 ft (about 59.6 m), on a trunk 9 ft (about 2.8 m) in diameter and reaching just 60 ft (nearly 19 m) in height. Humboldt mentioned the tree was reported to have changed little since the Spanish colonization of Venezuela; he estimated it to be as old as the famous Canary Islands dragon tree (Dracaena draco) of Icod de los Vinos on Tenerife. The tree, called Samán de Güere (transcribed Zamang del Guayre by von Humboldt), still stands today and is a Venezuelan national treasure. Just like the dragon tree on Tenerife, the age of the saman in Venezuela is rather indeterminate. As von Humboldt's report makes clear, according to local tradition it would be older than 500 years today, which is rather outstanding by the genus' standards. It is certain, however, that the tree is well over 200 years old, but it is one exceptional individual; even the learned von Humboldt could not believe it was actually the same species as the saman trees he knew from the greenhouses at Schönbrunn Castle. A famous specimen called the "Brahmaputra Rain Tree", located at Guwahati on the banks of the Brahmaputra River in Assam, India, has the thickest trunk of any saman, approximately diameter at breast height (DBH). The pollen is around 119 microns in size and is a polyad of 24 to 32 grains. Carbon sequestration Carbon sequestration is the capture and long-term removal of carbon dioxide from the atmosphere. According to research conducted at the School of Forestry of the Bogor Agricultural Institute, Indonesia, a mature tree with a crown diameter measuring absorbed of CO2 annually. The trees have been planted in the cities of Kudus and Demak and will also be planted along the shoulder of the road from Semarang to Losari. Gallery
Biology and health sciences
Fabales
Plants
3205366
https://en.wikipedia.org/wiki/1-Propanol
1-Propanol
1-Propanol (also propan-1-ol, propanol, n-propyl alcohol) is a primary alcohol with the formula and sometimes represented as PrOH or n-PrOH. It is a colourless liquid and an isomer of 2-propanol. 1-Propanol is used as a solvent in the pharmaceutical industry, mainly for resins and cellulose esters, and, sometimes, as a disinfecting agent. History The compound was discovered by Gustave Chancel in 1853 by fractional distillation of fusel oil. He measured its boiling point at 96°C, correctly identified its empirical formula, studied some of its chemical properties and gave it two names: propionic alcohol and hydrate of trityl. After several unsuccessful attempts, it was synthesized independently, and by two different routes, by Eduard Linnemann and Carl Schorlemmer in 1868. Occurrence Fusel alcohols like 1-propanol are grain fermentation byproducts, and therefore trace amounts of 1-propanol are present in many alcoholic beverages. Chemical properties 1-Propanol shows the normal reactions of a primary alcohol. Thus it can be converted to alkyl halides; for example, red phosphorus and iodine produce n-propyl iodide in 80% yield, while with catalytic gives n-propyl chloride. Reaction with acetic acid in the presence of an acid catalyst under Fischer esterification conditions gives propyl acetate, while refluxing propanol overnight with formic acid alone can produce propyl formate in 65% yield. Oxidation of 1-propanol with and gives a 36% yield of propionaldehyde, so for this type of reaction higher-yielding methods using PCC or the Swern oxidation are recommended. Oxidation with chromic acid yields propionic acid. Preparation 1-Propanol is manufactured by catalytic hydrogenation of propionaldehyde. Propionaldehyde is itself produced via the oxo process, by hydroformylation of ethylene using carbon monoxide and hydrogen in the presence of a catalyst such as cobalt octacarbonyl or a rhodium complex. 
A traditional laboratory preparation of 1-propanol involves treating n-propyl iodide with moist silver oxide. Safety 1-Propanol is thought to be similar to ethanol in its effects on the human body, but 2 to 4 times more potent according to a study conducted on rabbits. Many toxicology studies find oral acute LD50 values ranging from 1.9 g/kg to 6.5 g/kg (compared to 7.06 g/kg for ethanol). It is metabolized into propionic acid. Effects include alcoholic intoxication and high anion gap metabolic acidosis. As of 2011, one case of lethal poisoning was reported following oral ingestion of 500 mL of 1-propanol. Due to a lack of long-term data, the carcinogenicity of 1-propanol in humans is unknown. 1-Propanol as fuel 1-Propanol has a high octane number and is suitable for use as engine fuel. However, propanol is too expensive to use as a motor fuel. The research octane number (RON) of propanol is 118, and the anti-knock index (AKI) is 108.
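As a quick sanity check on these figures (an illustrative aside, not from the source): in North America the anti-knock index posted at fuel pumps is defined as the average of the research and motor octane numbers, (R + M)/2, so a RON of 118 combined with an AKI of 108 implies a MON of about 98.

```python
def aki(ron: float, mon: float) -> float:
    """Anti-knock index: the (R + M)/2 average posted on fuel pumps."""
    return (ron + mon) / 2

def implied_mon(ron: float, aki_value: float) -> float:
    """Back out the motor octane number from a quoted RON and AKI."""
    return 2 * aki_value - ron

print(implied_mon(118, 108))  # 98.0
print(aki(118, 98))           # 108.0
```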
Physical sciences
Alcohols
Chemistry
3206060
https://en.wikipedia.org/wiki/Syntax%20%28programming%20languages%29
Syntax (programming languages)
In computer science, the syntax of a computer language is the set of rules that defines the combinations of symbols that are considered to be correctly structured statements or expressions in that language. This applies both to programming languages, where the document represents source code, and to markup languages, where the document represents data. The syntax of a language defines its surface form. Text-based computer languages are based on sequences of characters, while visual programming languages are based on the spatial layout and connections between symbols (which may be textual or graphical). Documents that are syntactically invalid are said to have a syntax error. When designing the syntax of a language, a designer might start by writing down examples of both legal and illegal strings, before trying to figure out the general rules from these examples. Syntax therefore refers to the form of the code, and is contrasted with semantics – the meaning. In processing computer languages, semantic processing generally comes after syntactic processing; however, in some cases, semantic processing is necessary for complete syntactic analysis, and these are done together or concurrently. In a compiler, the syntactic analysis comprises the frontend, while the semantic analysis comprises the backend (and middle end, if this phase is distinguished). Levels of syntax Computer language syntax is generally distinguished into three levels: Words – the lexical level, determining how characters form tokens; Phrases – the grammar level, narrowly speaking, determining how tokens form phrases; Context – determining what objects or variable names refer to, whether types are valid, etc. Distinguishing in this way yields modularity, allowing each level to be described and processed separately and often independently. First, a lexer turns the linear sequence of characters into a linear sequence of tokens; this is known as "lexical analysis" or "lexing". 
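The characters-to-tokens step can be watched directly using Python's standard-library tokenize module (an illustrative aside; tokenize and io are real CPython modules):

```python
import io
import tokenize

# Lexical analysis with Python's own tokenizer: characters -> tokens.
src = "total = price * 2"
tokens = [(tokenize.tok_name[t.type], t.string)
          for t in tokenize.generate_tokens(io.StringIO(src).readline)
          # Drop the bookkeeping tokens; keep only the words of the program.
          if t.type not in (tokenize.NEWLINE, tokenize.ENDMARKER)]
print(tokens)
# [('NAME', 'total'), ('OP', '='), ('NAME', 'price'), ('OP', '*'), ('NUMBER', '2')]
```

Each token pairs a kind (drawn from the lexical grammar) with the matched text; the parser then consumes this sequence rather than raw characters.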
Second, the parser turns the linear sequence of tokens into a hierarchical syntax tree; this is known as "parsing" narrowly speaking. This ensures that the sequence of tokens conforms to the formal grammar of the programming language. The parsing stage itself can be divided into two parts: the parse tree, or "concrete syntax tree", which is determined by the grammar, but is generally far too detailed for practical use, and the abstract syntax tree (AST), which simplifies this into a usable form. The AST and contextual analysis steps can be considered a form of semantic analysis, as they are adding meaning and interpretation to the syntax, or alternatively as informal, manual implementations of syntactical rules that would be difficult or awkward to describe or implement formally. Third, the contextual analysis resolves names and checks types. This modularity is sometimes possible, but in many real-world languages an earlier step depends on a later step – for example, the lexer hack in C exists because tokenization depends on context. Even in these cases, syntactical analysis is often seen as approximating this ideal model. The levels generally correspond to levels in the Chomsky hierarchy. Words are in a regular language, specified in the lexical grammar, which is a Type-3 grammar, generally given as regular expressions. Phrases are in a context-free language (CFL), generally a deterministic context-free language (DCFL), specified in a phrase structure grammar, which is a Type-2 grammar, generally given as production rules in Backus–Naur form (BNF). Phrase grammars are often specified in much more constrained grammars than full context-free grammars, in order to make them easier to parse; while the LR parser can parse any DCFL in linear time, the simple LALR parser and even simpler LL parser are more efficient, but can only parse grammars whose production rules are constrained. 
In principle, contextual structure can be described by a context-sensitive grammar, and automatically analyzed by means such as attribute grammars, though, in general, this step is done manually, via name resolution rules and type checking, and implemented via a symbol table which stores names and types for each scope. Tools have been written that automatically generate a lexer from a lexical specification written in regular expressions and a parser from the phrase grammar written in BNF: this allows one to use declarative programming, rather than requiring procedural or functional programming. A notable example is the lex-yacc pair. These automatically produce a concrete syntax tree; the parser writer must then manually write code describing how this is converted to an abstract syntax tree. Contextual analysis is also generally implemented manually. Despite the existence of these automatic tools, parsing is often implemented manually, for various reasons – perhaps the phrase structure is not context-free, or an alternative implementation improves performance or error-reporting, or allows the grammar to be changed more easily. Parsers are often written in functional languages, such as Haskell, or in scripting languages, such as Python or Perl, or in C or C++. Examples of errors As an example, (add 1 1) is a syntactically valid Lisp program (assuming the 'add' function exists; otherwise name resolution fails), adding 1 and 1. However, the following are invalid:
(_ 1 1) lexical error: '_' is not valid
(add 1 1 parsing error: missing closing ')'
The lexer is unable to identify the first error – all it knows is that, after producing the token LEFT_PAREN '(', the remainder of the program is invalid, since no word rule begins with '_'. The second error is detected at the parsing stage: The parser has identified the "list" production rule due to the '(' token (as the only match), and thus can give an error message; in general it may be ambiguous. 
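A toy lexer and parser for this Lisp-like syntax makes the two failure modes concrete (a minimal sketch in Python; the token names and helper functions are mine, not from any real Lisp implementation):

```python
import re

# Lexical grammar as one alternation of named token rules.
TOKEN = re.compile(
    r"(?P<LPAREN>\()|(?P<RPAREN>\))|(?P<NUMBER>\d+)"
    r"|(?P<WORD>[A-Za-z][A-Za-z0-9]*)|(?P<WS>\s+)")

def lex(src):
    tokens, pos = [], 0
    while pos < len(src):
        m = TOKEN.match(src, pos)
        if not m:  # no word rule begins with this character
            raise SyntaxError(f"lexical error: {src[pos]!r} is not valid")
        if m.lastgroup != "WS":
            tokens.append((m.lastgroup, m.group()))
        pos = m.end()
    return tokens

def parse(tokens, i=0):
    """Parse one expression at tokens[i]; return (tree, index just past it)."""
    kind, text = tokens[i]
    if kind in ("NUMBER", "WORD"):
        return text, i + 1
    if kind == "LPAREN":  # the "list" production rule
        items, i = [], i + 1
        while i < len(tokens) and tokens[i][0] != "RPAREN":
            node, i = parse(tokens, i)
            items.append(node)
        if i == len(tokens):
            raise SyntaxError("parsing error: missing closing ')'")
        return items, i + 1
    raise SyntaxError(f"parsing error: unexpected {text!r}")

print(parse(lex("(add 1 1)"))[0])   # ['add', '1', '1']
for bad in ["(_ 1 1)", "(add 1 1"]:
    try:
        parse(lex(bad))
    except SyntaxError as e:
        print(bad, "->", e)
```

Running it, "(_ 1 1)" fails inside lex (a lexical error), while "(add 1 1" tokenizes fine and only fails in parse, mirroring the two stages described above.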
Type errors and undeclared variable errors are sometimes considered to be syntax errors when they are detected at compile-time (which is usually the case when compiling strongly typed languages), though it is common to classify these kinds of error as semantic errors instead. As an example, the Python code 'a' + 1 contains a type error because it adds a string literal to an integer literal. Type errors of this kind can be detected at compile-time: They can be detected during parsing (phrase analysis) if the compiler uses separate rules that allow "integerLiteral + integerLiteral" but not "stringLiteral + integerLiteral", though it is more likely that the compiler will use a parsing rule that allows all expressions of the form "LiteralOrIdentifier + LiteralOrIdentifier" and then the error will be detected during contextual analysis (when type checking occurs). In some cases this validation is not done by the compiler, and these errors are only detected at runtime. In a dynamically typed language, where type can only be determined at runtime, many type errors can only be detected at runtime. For example, the Python code a + b is syntactically valid at the phrase level, but the correctness of the types of a and b can only be determined at runtime, as variables do not have types in Python, only values do. Whereas there is disagreement about whether a type error detected by the compiler should be called a syntax error (rather than a static semantic error), type errors which can only be detected at program execution time are always regarded as semantic rather than syntax errors. Syntax definition The syntax of textual programming languages is usually defined using a combination of regular expressions (for lexical structure) and Backus–Naur form (a metalanguage for grammatical structure) to inductively specify syntactic categories (nonterminals) and terminal symbols. 
Syntactic categories are defined by rules called productions, which specify the values that belong to a particular syntactic category. Terminal symbols are the concrete characters or strings of characters (for example keywords such as define, if, let, or void) from which syntactically valid programs are constructed. Syntax can be divided into context-free syntax and context-sensitive syntax. Context-free syntax consists of rules that apply regardless of the context surrounding that part of the syntax, whereas context-sensitive syntax consists of rules that depend on the surrounding context. A language can have different equivalent grammars, such as equivalent regular expressions (at the lexical level), or different phrase rules which generate the same language. Using a broader category of grammars, such as LR grammars, can allow shorter or simpler grammars compared with more restricted categories, such as LL grammar, which may require longer grammars with more rules. Different but equivalent phrase grammars yield different parse trees, though the underlying language (set of valid documents) is the same. Example: Lisp S-expressions Below is a simple grammar, defined using the notation of regular expressions and Extended Backus–Naur form. 
It describes the syntax of S-expressions, a data syntax of the programming language Lisp, which defines productions for the syntactic categories expression, atom, number, symbol, and list:

expression = atom | list
atom = number | symbol
number = [+-]?['0'-'9']+
symbol = ['A'-'Z']['A'-'Z''0'-'9'].*
list = '(', expression*, ')'

This grammar specifies the following: an expression is either an atom or a list; an atom is either a number or a symbol; a number is an unbroken sequence of one or more decimal digits, optionally preceded by a plus or minus sign; a symbol is a letter followed by zero or more of any characters (excluding whitespace); and a list is a matched pair of parentheses, with zero or more expressions inside it. Here the decimal digits, upper- and lower-case characters, and parentheses are terminal symbols. The following are examples of well-formed token sequences in this grammar: '12345', '()', '(A B C232 (1))' Complex grammars The grammar needed to specify a programming language can be classified by its position in the Chomsky hierarchy. The phrase grammar of most programming languages can be specified using a Type-2 grammar, i.e., they are context-free grammars, though the overall syntax is context-sensitive (due to variable declarations and nested scopes), hence Type-1. However, there are exceptions, and for some languages the phrase grammar is Type-0 (Turing-complete). In some languages like Perl and Lisp the specification (or implementation) of the language allows constructs that execute during the parsing phase. Furthermore, these languages have constructs that allow the programmer to alter the behavior of the parser. This combination effectively blurs the distinction between parsing and execution, and makes syntax analysis an undecidable problem in these languages, meaning that the parsing phase may not finish. 
For example, in Perl it is possible to execute code during parsing using a BEGIN statement, and Perl function prototypes may alter the syntactic interpretation, and possibly even the syntactic validity of the remaining code. Colloquially this is referred to as "only Perl can parse Perl" (because code must be executed during parsing, and can modify the grammar), or more strongly "even Perl cannot parse Perl" (because it is undecidable). Similarly, Lisp macros introduced by the defmacro syntax also execute during parsing, meaning that a Lisp compiler must have an entire Lisp run-time system present. In contrast, C macros are merely string replacements, and do not require code execution. Syntax versus semantics The syntax of a language describes the form of a valid program, but does not provide any information about the meaning of the program or the results of executing that program. The meaning given to a combination of symbols is handled by semantics (either formal or hard-coded in a reference implementation). Valid syntax must be established before semantics can make meaning out of it. Not all syntactically correct programs are semantically correct. Many syntactically correct programs are nonetheless ill-formed, per the language's rules; and may (depending on the language specification and the soundness of the implementation) result in an error on translation or execution. In some cases, such programs may exhibit undefined behavior. Even when a program is well-defined within a language, it may still have a meaning that is not intended by the person who wrote it. Using natural language as an example, it may not be possible to assign a meaning to a grammatically correct sentence or the sentence may be false: "Colorless green ideas sleep furiously." is grammatically well formed but has no generally accepted meaning. "John is a married bachelor." is grammatically well formed but expresses a meaning that cannot be true. 
The following C language fragment is syntactically correct, but performs an operation that is not semantically defined (because p is a null pointer, the operations p->real and p->im have no meaning): complex *p = NULL; complex abs_p = sqrt(p->real * p->real + p->im * p->im); As a simpler example, int x; printf("%d", x); is syntactically valid, but not semantically defined, as it uses an uninitialized variable. Even though compilers for some programming languages (e.g., Java and C#) would detect uninitialized variable errors of this kind, they should be regarded as semantic errors rather than syntax errors.
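Python offers a close analogue of these run-time semantic failures (a sketch; where C exhibits undefined behavior, Python raises an exception): both a type error and a use of an unbound name pass syntactic analysis and only fail when executed.

```python
# 'a' + 1 compiles -- it is phrase-level valid -- but evaluation fails.
code = compile("'a' + 1", "<demo>", "eval")
try:
    eval(code)
except TypeError as e:
    print("type error at run time:", e)

# Using a name that was never bound also passes syntactic analysis.
prog = compile("print(x)", "<demo>", "exec")
try:
    exec(prog, {})  # x is unbound in this namespace
except NameError as e:
    print("unbound name at run time:", e)
```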
Technology
Programming languages
null
379609
https://en.wikipedia.org/wiki/Vaquita
Vaquita
The vaquita ( ; Phocoena sinus) is a species of porpoise endemic to the northern end of the Gulf of California in Baja California, Mexico. Reaching a maximum body length of (females) or (males), it is the smallest of all living cetaceans. The species is currently on the brink of extinction, and is listed as Critically Endangered by the IUCN Red List; the steep decline in abundance is primarily due to bycatch in gillnets from the illegal totoaba fishery. Taxonomy The vaquita was defined as a species by two zoologists, Kenneth S. Norris and William N. McFarland, in 1958 after studying the morphology of skull specimens found on the beach. It was not until nearly thirty years later, in 1985, that fresh specimens allowed scientists to describe their external appearance fully. The genus Phocoena comprises four species of porpoise, most of which inhabit coastal waters (the spectacled porpoise is more oceanic). The vaquita is most closely related to Burmeister's porpoise (Phocoena spinipinnis) and less so to the spectacled porpoise (Phocoena dioptrica), two species limited to the Southern Hemisphere. Their ancestors are thought to have moved north across the equator more than 2.5 million years ago during a period of cooling in the Pleistocene. Genome sequencing from an individual captured in 2017 indicates that the ancestral vaquitas had already gone through a major population bottleneck in the past, which may explain why the few remaining individuals are still healthy despite the very low population size. "Vaquita" is Spanish for "little cow". Description The smallest living species of cetacean, the vaquita can be easily distinguished from any other species in its range. It has a small body with an unusually tall triangular dorsal fin, a rounded head, and no distinguished beak. The coloration is mostly grey with a darker back and a white ventral field. Prominent black patches surround its lips and eyes. 
Sexual dimorphism is apparent in body size, with mature females being longer than males and having larger heads and wider flippers. Females reach a maximum size of about , while males reach about . Dorsal fin height is greater in males than in females. They are also known to weigh around to . This makes them one of the smallest species in the porpoise family. Distribution and habitat Vaquita habitat is restricted to a small portion of the upper Gulf of California (also called the Sea of Cortez), making this the smallest range of any cetacean species. They live in shallow, turbid waters of less than depth. Diet Vaquitas are generalists, foraging on a variety of demersal fish species, crustaceans, and squids, though benthic fish such as grunts and croakers make up most of the diet. Social behavior Vaquitas are generally seen alone or in pairs, often with a calf, but have been observed in small groups of up to 10 individuals. Little is known about the life history of this species. Life expectancy is estimated at about 20 years and age of sexual maturity is somewhere between 3 and 6 years of age. While an initial analysis of stranded vaquitas estimated a two-year calving interval, recent sightings data suggest that vaquitas can reproduce annually. It is thought that vaquitas have a polygynous mating system in which males compete for females. This competition is evidenced by the presence of sexual dimorphism (females are larger than males), small group sizes, and large testes (accounting for nearly 3% of body mass). Population status Because the vaquita was only fully described in the late 1980s, historical abundance is unknown. Since 1983, all confirmed specimens, records, and sightings of P. sinus were evaluated. There were 45 records of P. sinus that were collected by skeletal remains, photographs, and sightings in 1983. The first comprehensive vaquita survey throughout their range took place in 1997 and estimated a population of 567 individuals. 
By 2007 abundance was estimated to have dropped to 150. Population abundance as of 2018 was estimated at less than 19 individuals. Given the continued rate of bycatch and low reproductive output from a small population, it is estimated that there are fewer than 10 vaquitas alive as of February 2022. In 2023, it was still estimated that there are as few as 10 in the wild. A 2024 survey observed a minimum of 6 to 8 individuals (with a maximum of 9 to 11), the lowest ever count, but this number may just be a result of the small survey area instead of an actual population decline, as vaquitas freely move in and out of the survey region. Reproduction Vaquitas reach sexual maturity from three to six years old. Vaquitas have synchronous reproduction, suggesting that calving span is greater than a year. Their pregnancies last from 10 to 11 months, and vaquita calves are nursed by their mothers for 6–8 months until becoming independent. Vaquitas give birth about every other year to a single calf, usually between the months of February and April. Because of their low reproduction rates, long gestation periods and larger species size, vaquitas are considered a K-selected species. K-selected species are more vulnerable to extinction as they cannot repopulate at the rate of r-selected species. Vaquitas are on the brink of extinction because their numbers are few and they cannot replenish their population fast enough to exceed the number of vaquitas dying off. Threats Fisheries bycatch Anthropogenic effects of a rise in commercial fishing such as accidental bycatch, illegal fishing, and entanglement have been linked to their decline. Shrimp fishing and gillnets create entanglement issues for the vaquita. Open-access fisheries and absent fisheries management, both aspects of illegal fishing, have been correlated with poaching of the vaquita's main prey species. 
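As illustrative arithmetic on these survey figures (a rough sketch; real abundance estimates carry wide confidence intervals, and a constant-rate model is an assumption): the drop from 567 in 1997 to 150 in 2007 implies roughly a 12% annual decline, and from 150 to about 19 by 2018 roughly 17% per year.

```python
def annual_decline(n_start, n_end, years):
    """Constant annual decline rate implied by two abundance estimates."""
    return 1 - (n_end / n_start) ** (1 / years)

print(f"1997-2007: {annual_decline(567, 150, 10):.1%} per year")
print(f"2007-2018: {annual_decline(150, 19, 11):.1%} per year")
```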
The drastic decline in vaquita abundance is the result of fisheries bycatch in commercial and illegal gillnets, including fisheries targeting the now-vulnerable Totoaba, shrimp, and other available fish species. Despite government regulations, including a partial gillnet ban in 2015 and establishment of a permanent gillnet exclusion zone in 2017, illegal totoaba fishing remains prevalent in vaquita habitat, and as a result the population has continued to decline. Fewer than 19 vaquitas remained in the wild in 2018. Large-mesh gillnets used in illegal fishing for totoaba caused an increase in the rate of loss of vaquitas after 2011. In 2021, the Mexican government eliminated a "no tolerance" zone in the Upper Gulf of California and opened it up to fishing. In 2022 the navy began to place concrete blocks with rebar hooks into what is considered the primary area for vaquita, which was designated as a zero tolerance area (ZTA) for fishing in 2020. These are known to destroy gillnets (which can cost tens of thousands of dollars) in shallower waters. Their deployment had an immediate effect on the number of fishing boats in the area. Citing reports of echolocation outside of the ZTA, the navy expanded the area in which these blocks were placed starting in November 2023 with plans to continue placing new blocks until the end of June 2024. There are some concerns that the hooks will create ghost nets, and the navy and Sea Shepherd have removed nets trapped on hooks after being caught. Other threats Given their proximity to the coast, vaquitas are exposed to habitat alteration and pollution from runoff. Pesticides present in the water as a result of runoff from agriculture are a threat as they can be ingested by the vaquitas, causing harm and even death. Exposure to toxic compounds has also had a deleterious effect on vaquitas. 
Bycatch, the incidental catch of non-target species in fishing gear, is the largest threat not only to the survival of the vaquita but to marine mammals around the world. A series of simulations in a 2022 study indicate that the species has a chance to survive and recover if all bycatch is halted, despite the presence of other threats. Fisheries thus remain the biggest threat to the vaquita. Northern fishing fleets have had an indirect positive impact on some marine mammals, because fishing for predators such as sharks reduces predation pressure on those groups. Although shark predation does reduce the vaquita population and is seen as a secondary threat, northern fishing fleets harm this small marine mammal on balance, because the negative influence of incidental catch is greater than the positive influence of predation reduction by shark fisheries. Populations that experience a sudden decline in numbers are often more vulnerable to other threats in the future due to a bottleneck of genetic diversity within the reduced population. The reduced gene pool lowers the rate of adaptation and increases the rate of inbreeding, potentially leading to inbreeding depression. This phenomenon is attributed to the anthropogenic Allee effect, specifically the end of the spectrum where small population size leads to low species fitness because of a lack of genetic diversity and the potential for inbreeding. Because of their small population size, vaquitas are experiencing a negative Allee effect, contributing to even smaller population growth rates and driving them further toward extinction. However, a 2022 study on the genetic diversity of the vaquita suggests that the marine mammal's historically small population makes it unlikely to suffer greatly from inbreeding depression. Attempts to start a population in captivity have proved to be more threatening to the population than helpful. 
A November 2017 effort ended up traumatizing and killing one female vaquita, as well as inflicting unnecessary stress on a juvenile. Still, creating a captive population could be used as a last resort to save the species and to further educate on vaquitas. Conservation Conservation status The vaquita is listed as critically endangered on the IUCN Red List, only one level away from being extinct in the wild. It is considered the most endangered marine mammal in the world. The vaquita has been listed as critically endangered by the IUCN Red List of Threatened Species since 1996. The vaquita is at risk of extinction due to its small population size. It was approximated at one point that there were 150 individuals. In 2019, the UNESCO World Heritage Site where the last vaquitas are located was classified as a World Heritage Site in Danger. The vaquita is also protected under the Endangered Species Act of 1973, the Mexican Official Standard NOM-059 (Norma Oficial Mexicana), and Appendix I of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). For a small population such as the vaquita to recover after a severe decline in population size is very difficult. This difficulty is due in part to the species' reproductive biology, and the many unknowns surrounding the vaquita's key reproductive parameters make its potential for recovery even harder to assess. Conservation efforts The vaquita is found only in the upper Gulf of California, Mexico area. 
The swim bladders of the Totoaba macdonaldi are being sold on the black market by cartels for profit. The Mexican government, international committees, scientists, and conservation groups have recommended and implemented plans to help reduce the rate of bycatch, enforce gillnet bans, and promote population recovery. Protection efforts throughout Mexico have taken place in order to preserve the population. In 2017, the Government of Mexico made it a felony to remove an endangered species. Alongside this, the Government of Mexico also made a public agreement to prohibit gillnet use. Efforts include proactive incentive schemes for fisheries, a system of trade-offs intended to benefit both fishermen and the vaquita. Mexico launched a program in 2008 called PACE-VAQUITA in an effort to enforce the gillnet ban in the Biosphere Reserve, allow fishermen to swap their gillnets for vaquita-safe fishing gear, and provide economic support to fishermen for surrendering fishing permits and pursuing alternative livelihoods. Despite the progress made with legal fishermen, hundreds of poachers continued to fish in the exclusion zone. Poaching continues as the swim bladders of totoaba can sell for anywhere from $20,000 to upwards of $80,000, and they are often referred to as the "cocaine of the sea." A black market for totoaba swim bladders has developed fairly recently in China (including Hong Kong). In 2017, poachers received up to US$20,000 for a kilogram of totoaba swim bladders, with some making as much as $116,000 in one day. 
With continued illegal totoaba fishing, which is largely motivated by sales to the Chinese market where it is used in traditional medicine, and uncontrolled bycatch of vaquitas, the International Committee for the Recovery of the Vaquita (CIRVA) recommended that some vaquitas be removed from the high-density fishing area and be relocated to protected sea pens. This effort, called VaquitaCPR, captured two vaquitas in 2017; one was later released and the other died shortly after capture after both suffered from shock. Local and international conservation groups, including Museo de Ballena and Sea Shepherd Conservation Society, are working with the Mexican Navy to detect fishing in the Refuge Area and remove illegal gillnets. In March 2020, the U.S. National Marine Fisheries Service (NMFS) announced a ban on imported Mexican shrimp and other seafood caught in vaquita habitat in the northern Gulf of California. In response to the dire circumstances facing the vaquita as by-catch of the illegal totoaba trade, in 2017 Earth League International (ELI) commenced an investigation and intelligence gathering operation called Operation Fake Gold, during which the entire illicit totoaba maw (swim bladder) international supply chain, from Mexico to China, has been mapped and researched. Thanks to the confidential data that ELI shared with the Mexican authorities, in November 2020, a series of important arrests were made in Mexico. To date, efforts have been unsuccessful in solving the complex socioeconomic and environmental issues that affect vaquita conservation and the greater Gulf of California ecosystem. Necessary action includes habitat protection, resource management, education, fisheries enforcement, alternative livelihoods for fishermen, and raising awareness of the vaquita and associated issues. Jaramillo-Legorreta, et al. stated in 2007 that captive breeding programs were not a viable option for saving the species from extinction. 
The Secretariat of Environment and Natural Resources (SEMARNAT) announced on February 27, 2021, that it may reduce the protected area for the vaquita in the Sea of Cortés as there are only ten of the porpoises left and it may never recover its historical range. Beginning in July 2022, the Mexican government placed 193 concrete blocks in the Gulf of California no-tolerance zone, intended to allow the detection of nets by acoustic sonar and prevent further entrapment of vaquitas. Creating protected areas is always an option for conservationists, but because the vaquita's range is so small, there would be no use in trying to establish habitat corridors. One option for conservationists could be trying to create buffer zones near the coast in which pesticides harmful to vaquitas are restricted or even unavailable in order to enhance the protection value of the vaquita's range. In May 2023, a wildlife survey expedition discovered that the population had stabilized since it had last been recorded in 2021. In October 2024, Colossal Biosciences announced their non-profit foundation dedicated to conservation of extant species, with one of their first projects being the vaquita. Colossal plans to biobank genetic material to revive the species if it were to become extinct, and to use acoustic sensors and drones to monitor vaquitas in collaboration with CONANP. Consumers Roughly 80% of shrimp caught in the northern end of the Gulf of California, which has a high aquatic mammal bycatch rate, is consumed in the United States. As such, U.S. consumers of this shrimp are likely contributing to the vaquita extinction crisis. The Marine Mammal Protection Act of 1972, which forbids foreign fishers from exporting seafood with high levels of marine mammal bycatch, may allow for better efforts to preserve endangered vaquitas.
Biology and health sciences
Toothed whale
Animals
379788
https://en.wikipedia.org/wiki/Pomelo
Pomelo
The pomelo ( ; Citrus maxima), also known as a shaddock, is the largest citrus fruit. It is an ancestor of several cultivated citrus species, including the bitter orange and the grapefruit. It is a natural, non-hybrid, citrus fruit, native to Southeast Asia. Similar in taste to a sweet grapefruit, the pomelo is commonly eaten and used for festive occasions throughout Southeast and East Asia. As with the grapefruit, phytochemicals in the pomelo have the potential for drug interactions. Description The pomelo tree may be tall, possibly with a crooked trunk thick, and low-hanging, irregular branches. Their leaf petioles are distinctly winged, with alternate, ovate or elliptic shapes long, with a leathery, dull green upper layer, and hairy under-leaf. The flowers – single or in clusters – are fragrant and yellow-white in color. The fruit is large, in diameter, usually weighing . It has a thicker rind than a grapefruit, and is divided into 11 to 18 segments. The flesh tastes like mild grapefruit, with a little of its common bitterness (the grapefruit is a hybrid of the pomelo and the orange). The enveloping membranes around the segments are chewy and bitter, considered inedible, and usually discarded. There are at least sixty varieties. The fruit generally contains a few, relatively large seeds, but some varieties have numerous seeds. The physical and chemical characteristics of pomelo vary widely across South Asia. History Ancestral Citrus species The pomelo is significant botanically as one of the three major wild ancestors of several cultivated hybrid Citrus species, including the bitter orange and the grapefruit; and less directly also of the lemon, the sweet orange, and some types of mandarin. The sweet orange is a naturally occurring hybrid between the pomelo and the mandarin, with the pomelo the larger and firmer of the two. 
The grapefruit was originally presumed to be a naturally occurring hybrid of the pomelo and the mandarin; however, genome analysis shows that it is actually a backcrossed hybrid between a pomelo and a sweet orange, which is why 63% of the grapefruit's genome comes from the pomelo. The bitter orange is a hybrid of wild-type mandarin and pomelo; in turn, the lemon is a hybrid of bitter orange and citron, i.e. cultivated lemons have some pomelo ancestry. In addition, there has been repeated introgression of pomelo genes into both early cultivated hybrid mandarins and later mandarin varieties, these last also involving hybridization with the sweet orange. Pomelo genes are thus included in many types of cultivated Citrus. Etymology According to the Oxford English Dictionary, the etymology of the word 'pomelo' is uncertain. It may be derived from a Dutch name that itself has uncertain etymology, possibly formed from Dutch words meaning 'swollen' or 'pumpkin' combined with a word meaning 'lemon, citrus fruit', influenced by a Portuguese word with the same meaning. An alternative possibility is that the Dutch name derives from a Portuguese word for 'citrus fruit'. The specific name maxima is the feminine form of the Latin for 'biggest'. One theory for the alternative English name 'shaddock' is that it was adopted after the plant's introduction into Barbados by a 'Captain Shaddock' of the East India Company (apparently Philip Chaddock, who visited the island in the late 1640s). From there the name spread to Jamaica in 1696. The fruit is called jambola in varieties of English spoken in South Asia, and jabong in Hawai'i. As food Nutrition Raw pomelo flesh is 89% water, 10% carbohydrates, 1% protein, and contains negligible fat. A 100-gram reference amount is rich in vitamin C (68% of the Daily Value), with no other micronutrients in significant content (table). Culinary The flesh and juice are edible, and the rind is used to make preserves, or may be candied.
In Brazil, the outer part of the rind is used for making a sweet conserve, while the spongy pith of the rind is discarded. In Sri Lanka, it is often eaten as a dessert, sometimes sprinkled with sugar. In large parts of Southeast Asia, where the pomelo is native, it is commonly eaten as a dessert, sprinkled with salt or dipped in a salt mixture, or made into salads. In the Philippines, a pink beverage is made from pomelo and pineapple juice. The fruit may have been introduced to China around 100 BCE. In East Asia, especially in Cantonese cuisine, braised pomelo pith is used to make dishes that are high in fibre and low in fat. Drug interactions The pomelo, while not itself toxic, can cause adverse interactions similar to those caused by the grapefruit with a wide range of prescription drugs. These occur through the inhibition of cytochrome P450-mediated metabolism of prescription drugs, including, for example, some anti-hypertensives, some anticoagulants, some anticancer agents, some anti-infective agents, some statins, and some immunosuppressants. Cultivation The seeds of the pomelo are monoembryonic, producing seedlings with genes from both parents; these are usually similar to the parent tree, and therefore in Asia pomelos are typically grown from seed. Seeds can be stored for 80 days at cool temperatures with moderate relative humidity. Outside Asia, the pomelo is usually grafted onto other citrus rootstocks to produce trees that are identical to the parent; high-quality varieties are propagated by air-layering or by budding onto favored rootstocks.
Biology and health sciences
Citrus fruits
Plants
379803
https://en.wikipedia.org/wiki/Clock%20generator
Clock generator
A clock generator is an electronic oscillator that produces a clock signal for use in synchronizing a circuit's operation. The output clock signal can range from a simple symmetrical square wave to more complex arrangements. The basic parts that all clock generators share are a resonant circuit and an amplifier. The resonant circuit is usually a quartz piezo-electric oscillator, although simpler tank circuits and even RC circuits may be used. The amplifier circuit usually inverts the signal from the oscillator and feeds a portion back into the oscillator to maintain oscillation. The generator may have additional sections to modify the basic signal. The 8088, for example, used a 2/3 duty cycle clock, which required the clock generator to incorporate logic to convert the 50/50 duty cycle that is typical of raw oscillators. Other such optional sections include frequency divider or clock multiplier sections. Programmable clock generators allow the number used in the divider or multiplier to be changed, allowing any of a wide variety of output frequencies to be selected without modifying the hardware. The clock generator in a motherboard is often changed by computer enthusiasts to control the speed of a CPU, FSB, GPU or RAM. Typically the programmable clock generator is set by the BIOS at boot time to the selected value; although some systems have dynamic frequency scaling, which frequently re-programs the clock generator. Timing-signal generators (TSGs) TSGs are clocks that are used throughout service-provider networks, frequently as the building integrated timing supply (BITS) for a central office. Digital switching systems and some transmission systems (e.g., SONET) depend on reliable, high-quality synchronization (or timing) to prevent impairments. To provide this, most service providers use interoffice synchronization distribution networks based on the stratum hierarchy and implement the BITS concept to meet intraoffice synchronization needs.
A TSG is clock equipment that accepts input timing reference signals and generates output timing reference signals. The input reference signals can be either DS1 or composite-clock (CC) signals, and the output signals can also be DS1 or CC signals (or both). A TSG is made up of the six components listed below: An input timing interface that accepts DS1 or CC input signals A timing-generation component that creates the timing signals used by the output timing-distribution component An output timing-distribution component that utilizes the timing signals from the timing-generation component to create multiple DS1 and CC output signals A performance-monitoring (PM) component that monitors the timing characteristics of the input signals An alarm interface that connects to the central-office (CO) alarm-monitoring system An operations interface for local craftsperson use and communications with remote operations systems
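The programmable divider/multiplier arithmetic described in the clock-generator section above can be sketched in a few lines. The function name and the sample reference frequency are illustrative, not taken from any particular chip:

```python
def output_frequency(f_ref_hz: float, multiplier: int, divider: int) -> float:
    """Output frequency of a programmable clock generator: the reference
    oscillator frequency is multiplied (e.g., by a PLL) and then divided."""
    if multiplier <= 0 or divider <= 0:
        raise ValueError("multiplier and divider must be positive")
    return f_ref_hz * multiplier / divider

# A 14.318 MHz reference with a x25 multiplier and /4 divider
# yields an ~89.5 MHz output clock.
print(output_frequency(14_318_000, 25, 4))
```

Raising the multiplier while keeping the divider fixed is essentially what enthusiast overclocking tools do when they reprogram a motherboard clock generator.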
Technology
Functional circuits
null
379828
https://en.wikipedia.org/wiki/Cyanosis
Cyanosis
Cyanosis is the change of body tissue color to a bluish-purple hue, as a result of a decrease in the amount of oxygen bound to the hemoglobin in the red blood cells of the capillary bed. Cyanosis is usually apparent in body tissues covered with thin skin, including the mucous membranes, lips, nail beds, and ear lobes. Some medications, such as those containing amiodarone or silver, may cause a similar discoloration. Furthermore, Mongolian spots, large birthmarks, and the consumption of food products with blue or purple dyes can also result in bluish skin discoloration and may be mistaken for cyanosis. Appropriate physical examination and history taking are crucial to diagnosing cyanosis. Management of cyanosis involves treating the underlying cause, as cyanosis is a symptom rather than a disease. Cyanosis is further classified into central cyanosis and peripheral cyanosis. Pathophysiology The mechanism behind cyanosis differs depending on whether it is central or peripheral. Central cyanosis Central cyanosis occurs due to a decrease in arterial oxygen saturation (SaO2), and begins to show once the concentration of deoxyhemoglobin in the blood reaches ≥ 5.0 g/dL (≥ 3.1 mmol/L, or an oxygen saturation of ≤ 85%). This indicates a cardiopulmonary condition. Causes of central cyanosis are discussed below. Peripheral cyanosis Peripheral cyanosis happens when there is an increased concentration of deoxyhemoglobin on the venous side of the peripheral circulation. In other words, cyanosis depends on the absolute concentration of deoxyhemoglobin. Patients with severe anemia may appear normal despite marked arterial desaturation, because the absolute concentration of deoxyhemoglobin remains low, while patients with increased amounts of red blood cells (e.g., polycythemia vera) can appear cyanotic even at relatively mild desaturation. Causes Central cyanosis Central cyanosis is often due to a circulatory or ventilatory problem that leads to poor blood oxygenation in the lungs.
It develops when arterial oxygen saturation drops below 85%, or below 75% in dark-skinned individuals. Acute cyanosis can be a result of asphyxiation or choking and is one of the definite signs that ventilation is being blocked. Central cyanosis may be due to the following causes: Central nervous system (impairing normal ventilation): Intracranial hemorrhage Drug overdose (e.g., heroin) Generalized tonic–clonic seizure (GTCS) Respiratory system: Pneumonia Bronchiolitis Bronchospasm (e.g., asthma) Pulmonary hypertension Pulmonary embolism Hypoventilation Chronic obstructive pulmonary disease, or COPD (emphysema) Cardiovascular system: Congenital heart disease (e.g., Tetralogy of Fallot, right to left shunts in heart or great vessels) Heart failure Valvular heart disease Myocardial infarction Hemoglobinopathies: Methemoglobinemia Sulfhemoglobinemia Polycythemia Congenital cyanosis (HbM Boston) arises from a mutation in the α-codon which results in a change of primary sequence, His → Tyr. Tyrosine stabilizes the Fe(III) form (methaemoglobin), creating a permanent T-state of Hb. Others: High altitude, cyanosis may develop in ascents to altitudes >2400 m. Hypothermia Frostbite Obstructive sleep apnea Peripheral cyanosis Peripheral cyanosis is the blue tint in fingers or extremities, due to an inadequate or obstructed circulation. The blood reaching the extremities is not oxygen-rich and when viewed through the skin a combination of factors can lead to the appearance of a blue color. All factors contributing to central cyanosis can also cause peripheral symptoms to appear, but peripheral cyanosis can be observed in the absence of heart or lung failures. Small blood vessels may be restricted and can be treated by increasing the normal oxygenation level of the blood.
Peripheral cyanosis may be due to the following causes: All common causes of central cyanosis Reduced cardiac output (e.g., heart failure or hypovolemia) Cold exposure Chronic obstructive pulmonary disease (COPD) Arterial obstruction (e.g., peripheral vascular disease, Raynaud phenomenon) Venous obstruction (e.g., deep vein thrombosis) Differential cyanosis Differential cyanosis is the bluish coloration of the lower but not the upper extremity and the head. This is seen in patients with a patent ductus arteriosus. Patients with a large ductus develop progressive pulmonary vascular disease, and pressure overload of the right ventricle occurs. As soon as pulmonary pressure exceeds aortic pressure, shunt reversal (right-to-left shunt) occurs. The upper extremity remains pink because deoxygenated blood flows through the patent duct and directly into the descending aorta while sparing the brachiocephalic trunk, left common carotid, and left subclavian arteries. Evaluation A detailed history and physical examination (particularly focusing on the cardiopulmonary system) can guide further management and help determine the medical tests to be performed. Tests that can be performed include pulse oximetry, arterial blood gas, complete blood count, methemoglobin level, electrocardiogram, echocardiogram, X-Ray, CT scan, cardiac catheterization, and hemoglobin electrophoresis. In newborns, peripheral cyanosis typically presents in the distal extremities, circumoral, and periorbital areas. Of note, mucous membranes remain pink in peripheral cyanosis as compared to central cyanosis where the mucous membranes are cyanotic. Skin pigmentation and hemoglobin concentration can affect the evaluation of cyanosis. Cyanosis may be more difficult to detect on people with darker skin pigmentation. However, cyanosis can still be diagnosed with careful examination of the typical body areas such as nail beds, tongue, and mucous membranes where the skin is thinner and more vascular. 
As mentioned above, patients with severe anemia may appear normal despite marked arterial desaturation, because the absolute concentration of deoxyhemoglobin remains low. Signs of severe anemia may include pale mucosa (lips, eyelids, and gums), fatigue, lightheadedness, and irregular heartbeats. Management Cyanosis is a symptom, not a disease itself, so management should be focused on treating the underlying cause. In an emergency, management should always begin with securing the airway, breathing, and circulation. In patients with significant respiratory distress, supplemental oxygen (in the form of a nasal cannula or continuous positive airway pressure, depending on severity) should be given immediately. If methemoglobin levels confirm methemoglobinemia, the first-line treatment is to administer methylene blue. History The name cyanosis literally means the blue disease or the blue condition. It is derived from the color cyan, which comes from cyanós (κυανός), the Greek word for blue. Dr. Christen Lundsgaard postulated that cyanosis was first described in 1749 by Jean-Baptiste de Sénac, a French physician who served King Louis XV. De Sénac concluded from an autopsy that cyanosis was caused by a heart defect that led to the mixture of arterial and venous blood circulation. It was not until 1919 that Lundsgaard was able to derive the concentration of deoxyhemoglobin (8 volumes per cent) that could cause cyanosis.
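The dependence of visible cyanosis on the absolute deoxyhemoglobin concentration, described under Pathophysiology, can be illustrated with a simplified model. The function names and the simple product formula are illustrative assumptions, not a clinical tool (clinically, capillary rather than purely arterial values matter):

```python
CYANOSIS_THRESHOLD_G_DL = 5.0  # deoxyhemoglobin level at which central cyanosis appears

def deoxyhemoglobin(total_hb_g_dl: float, sao2: float) -> float:
    """Absolute deoxyhemoglobin concentration (g/dL), given total hemoglobin
    and fractional oxygen saturation (0..1). Simplified illustrative model."""
    return total_hb_g_dl * (1.0 - sao2)

def appears_cyanotic(total_hb_g_dl: float, sao2: float) -> bool:
    return deoxyhemoglobin(total_hb_g_dl, sao2) >= CYANOSIS_THRESHOLD_G_DL

# Severe anemia: even at 60% saturation, 8 g/dL total Hb gives only
# 3.2 g/dL deoxyhemoglobin, below the threshold, so no visible cyanosis.
print(appears_cyanotic(8.0, 0.60))   # False
# Polycythemia: 20 g/dL total Hb reaches the threshold already at 75% saturation.
print(appears_cyanotic(20.0, 0.75))  # True
```

This is why, as the text notes, anemic patients may not look cyanotic despite marked desaturation, while polycythemic patients can look cyanotic at comparatively mild desaturation.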
Biology and health sciences
Symptoms and signs
Health
379845
https://en.wikipedia.org/wiki/Inflection%20point
Inflection point
In differential calculus and differential geometry, an inflection point, point of inflection, flex, or inflection (rarely inflexion) is a point on a smooth plane curve at which the curvature changes sign. In particular, in the case of the graph of a function, it is a point where the function changes from being concave (concave downward) to convex (concave upward), or vice versa. For the graph of a function f of differentiability class C², an inflection point is a point where the second derivative f″ vanishes and changes sign, that is, f″ has opposite signs in the neighborhood of the point (Bronshtein and Semendyayev 2004, p. 231). Categorization of points of inflection Points of inflection can also be categorized according to whether the first derivative f′(x) is zero or nonzero. If f′(x) is zero, the point is a stationary point of inflection. If f′(x) is not zero, the point is a non-stationary point of inflection. A stationary point of inflection is not a local extremum. More generally, in the context of functions of several real variables, a stationary point that is not a local extremum is called a saddle point. An example of a stationary point of inflection is the point (0, 0) on the graph of y = x³. The tangent is the x-axis, which cuts the graph at this point. An example of a non-stationary point of inflection is the point (0, 0) on the graph of y = x³ + ax, for any nonzero a. The tangent at the origin is the line y = ax, which cuts the graph at this point. Functions with discontinuities Some functions change concavity without having points of inflection. Instead, they can change concavity around vertical asymptotes or discontinuities. For example, the function x ↦ 1/x is concave for negative x and convex for positive x, but it has no points of inflection because 0 is not in the domain of the function. Functions with inflection points whose second derivative does not vanish Some continuous functions have an inflection point even though the second derivative is never 0. For example, the cube root function is concave upward when x is negative, and concave downward when x is positive, but has no derivatives of any order at the origin.
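The sign-change characterization above lends itself to a simple numerical test. The following sketch (with hypothetical helper names) approximates the second derivative by central differences and checks whether it changes sign across a candidate point:

```python
def second_derivative(f, x, h=1e-5):
    """Central-difference approximation of f''(x)."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

def is_inflection_point(f, x, eps=1e-3):
    """True if the approximated f'' has opposite signs just left and right of x."""
    return second_derivative(f, x - eps) * second_derivative(f, x + eps) < 0

cube = lambda x: x ** 3               # stationary inflection at 0 (f'(0) = 0)
slanted = lambda x: x ** 3 + 2 * x    # non-stationary inflection at 0 (f'(0) = 2)

print(is_inflection_point(cube, 0.0))     # True
print(is_inflection_point(slanted, 0.0))  # True
print(is_inflection_point(cube, 1.0))     # False: f'' = 6x does not change sign at 1
```

For both example curves f″(x) = 6x, so the sign test fires exactly at the origin, matching the stationary and non-stationary examples in the text.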
Mathematics
Functions: General
null
379868
https://en.wikipedia.org/wiki/Linear%20differential%20equation
Linear differential equation
In mathematics, a linear differential equation is a differential equation that is defined by a linear polynomial in the unknown function and its derivatives, that is an equation of the form a_0(x)y + a_1(x)y' + a_2(x)y'' + ... + a_n(x)y^(n) = b(x), where a_0(x), ..., a_n(x) and b(x) are arbitrary differentiable functions that do not need to be linear, and y', ..., y^(n) are the successive derivatives of an unknown function y of the variable x. Such an equation is an ordinary differential equation (ODE). A linear differential equation may also be a linear partial differential equation (PDE), if the unknown function depends on several variables, and the derivatives that appear in the equation are partial derivatives. Types of solution A linear differential equation or a system of linear equations such that the associated homogeneous equations have constant coefficients may be solved by quadrature, which means that the solutions may be expressed in terms of integrals. This is also true for a linear equation of order one, with non-constant coefficients. An equation of order two or higher with non-constant coefficients cannot, in general, be solved by quadrature. For order two, Kovacic's algorithm allows deciding whether there are solutions in terms of integrals, and computing them if any. The solutions of homogeneous linear differential equations with polynomial coefficients are called holonomic functions. This class of functions is stable under sums, products, differentiation, and integration, and contains many usual functions and special functions such as the exponential function, logarithm, sine, cosine, inverse trigonometric functions, error function, Bessel functions and hypergeometric functions. Their representation by the defining differential equation and initial conditions allows performing algorithmically most operations of calculus on these functions, such as computation of antiderivatives, limits, asymptotic expansion, and numerical evaluation to any precision, with a certified error bound.
Basic terminology The highest order of derivation that appears in a (linear) differential equation is the order of the equation. The term b(x), which does not depend on the unknown function and its derivatives, is sometimes called the constant term of the equation (by analogy with algebraic equations), even when this term is a non-constant function. If the constant term is the zero function, then the differential equation is said to be homogeneous, as it is a homogeneous polynomial in the unknown function and its derivatives. The equation obtained by replacing, in a linear differential equation, the constant term by the zero function is the associated homogeneous equation. A differential equation has constant coefficients if only constant functions appear as coefficients in the associated homogeneous equation. A solution of a differential equation is a function that satisfies the equation. The solutions of a homogeneous linear differential equation form a vector space. In the ordinary case, this vector space has a finite dimension, equal to the order of the equation. All solutions of a linear differential equation are found by adding to a particular solution any solution of the associated homogeneous equation. Linear differential operator A basic differential operator of order i is a mapping that maps any differentiable function to its ith derivative, or, in the case of several variables, to one of its partial derivatives of order i. It is commonly denoted d^i/dx^i in the case of univariate functions, and ∂^(i_1+...+i_n)/∂x_1^(i_1)...∂x_n^(i_n) in the case of functions of n variables. The basic differential operators include the derivative of order 0, which is the identity mapping. A linear differential operator (abbreviated, in this article, as linear operator or, simply, operator) is a linear combination of basic differential operators, with differentiable functions as coefficients.
In the univariate case, a linear operator has thus the form L = a_0(x) + a_1(x)(d/dx) + ... + a_n(x)(d^n/dx^n), where a_0(x), ..., a_n(x) are differentiable functions, and the nonnegative integer n is the order of the operator (if a_n is not the zero function). Let L be a linear differential operator. The application of L to a function f is usually denoted Lf or Lf(x), if one needs to specify the variable (this must not be confused with a multiplication). A linear differential operator is a linear operator, since it maps sums to sums and the product by a scalar to the product by the same scalar. As the sum of two linear operators is a linear operator, as well as the product (on the left) of a linear operator by a differentiable function, the linear differential operators form a vector space over the real numbers or the complex numbers (depending on the nature of the functions that are considered). They form also a free module over the ring of differentiable functions. The language of operators allows a compact writing for differentiable equations: if L is a linear differential operator, then the equation a_0(x)y + a_1(x)y' + ... + a_n(x)y^(n) = b(x) may be rewritten Ly = b(x). There may be several variants to this notation; in particular the variable of differentiation may appear explicitly or not in y and the right-hand side of the equation, such as Ly(x) = b(x) or Ly = b. The kernel of a linear differential operator is its kernel as a linear mapping, that is the vector space of the solutions of the (homogeneous) differential equation Ly = 0. In the case of an ordinary differential operator of order n, Carathéodory's existence theorem implies that, under very mild conditions, the kernel of L is a vector space of dimension n, and that the solutions of the equation Ly(x) = b(x) have the form S_0(x) + c_1 S_1(x) + ... + c_n S_n(x), where c_1, ..., c_n are arbitrary numbers. Typically, the hypotheses of Carathéodory's theorem are satisfied in an interval I, if the functions b, a_0, ..., a_n are continuous in I, and there is a positive real number k such that |a_n(x)| > k for every x in I.
Homogeneous equation with constant coefficients A homogeneous linear differential equation has constant coefficients if it has the form a_0 y + a_1 y' + a_2 y'' + ... + a_n y^(n) = 0, where a_0, ..., a_n are (real or complex) numbers. In other words, it has constant coefficients if it is defined by a linear operator with constant coefficients. The study of these differential equations with constant coefficients dates back to Leonhard Euler, who introduced the exponential function e^x, which is the unique solution of the equation f' = f such that f(0) = 1. It follows that the nth derivative of e^(cx) is c^n e^(cx), and this allows solving homogeneous linear differential equations rather easily. Let a_0 y + a_1 y' + ... + a_n y^(n) = 0 be a homogeneous linear differential equation with constant coefficients (that is, a_0, ..., a_n are real or complex numbers). Searching solutions of this equation that have the form e^(αx) is equivalent to searching the constants α such that a_0 + a_1 α + a_2 α^2 + ... + a_n α^n = 0. Factoring out e^(αx) (which is never zero) shows that α must be a root of the characteristic polynomial a_0 + a_1 t + a_2 t^2 + ... + a_n t^n of the differential equation, which is the left-hand side of the characteristic equation a_0 + a_1 t + ... + a_n t^n = 0. When these roots are all distinct, one has n distinct solutions that are not necessarily real, even if the coefficients of the equation are real. These solutions can be shown to be linearly independent, by considering the Vandermonde determinant of the values of these solutions at x = 0, 1, ..., n − 1. Together they form a basis of the vector space of solutions of the differential equation (that is, the kernel of the differential operator). In the case where the characteristic polynomial has only simple roots, the preceding provides a complete basis of the solutions vector space. In the case of multiple roots, more linearly independent solutions are needed for having a basis. These have the form x^k e^(αx), where k is a nonnegative integer, α is a root of the characteristic polynomial of multiplicity m, and k < m. For proving that these functions are solutions, one may remark that if α is a root of the characteristic polynomial of multiplicity m, the characteristic polynomial may be factored as P(t)(t − α)^m.
Thus, applying the differential operator of the equation is equivalent with applying first m times the operator d/dx − α, and then the operator that has P as characteristic polynomial. By the exponential shift theorem, (d/dx − α)(x^k e^(αx)) = k x^(k−1) e^(αx), and thus one gets zero after k + 1 applications of d/dx − α. As, by the fundamental theorem of algebra, the sum of the multiplicities of the roots of a polynomial equals the degree of the polynomial, the number of above solutions equals the order of the differential equation, and these solutions form a basis of the vector space of the solutions. In the common case where the coefficients of the equation are real, it is generally more convenient to have a basis of the solutions consisting of real-valued functions. Such a basis may be obtained from the preceding basis by remarking that, if a + ib is a root of the characteristic polynomial, then a − ib is also a root, of the same multiplicity. Thus a real basis is obtained by using Euler's formula, and replacing x^k e^((a+ib)x) and x^k e^((a−ib)x) by x^k e^(ax) cos(bx) and x^k e^(ax) sin(bx). Second-order case A homogeneous linear differential equation of the second order may be written y'' + ay' + by = 0, and its characteristic polynomial is t^2 + at + b. If a and b are real, there are three cases for the solutions, depending on the discriminant D = a^2 − 4b. In all three cases, the general solution depends on two arbitrary constants c_1 and c_2. If D > 0, the characteristic polynomial has two distinct real roots α and β, and the general solution is c_1 e^(αx) + c_2 e^(βx). If D = 0, the characteristic polynomial has a double root −a/2, and the general solution is (c_1 + c_2 x) e^(−ax/2). If D < 0, the characteristic polynomial has two complex conjugate roots α ± βi, and the general solution is c_1 e^((α+βi)x) + c_2 e^((α−βi)x), which may be rewritten in real terms, using Euler's formula, as e^(αx)(d_1 cos(βx) + d_2 sin(βx)). Finding the solution satisfying y(0) = d_1 and y'(0) = d_2, one equates the values of the above general solution at 0 and its derivative there to d_1 and d_2, respectively. This results in a linear system of two linear equations in the two unknowns c_1 and c_2. Solving this system gives the solution for a so-called Cauchy problem, in which the values at 0 for the solution of the DEQ and its derivative are specified.
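The three discriminant cases of the second-order equation can be folded into one short routine. This sketch (names are illustrative) works over the complex numbers, so the negative-discriminant case needs no special handling, and it verifies a solution of y″ + y = 0 numerically:

```python
import cmath

def homogeneous_solution(a, b, c1, c2):
    """General solution of y'' + a y' + b y = 0 with constants c1, c2,
    built from the roots of the characteristic polynomial t^2 + a t + b."""
    disc = a * a - 4 * b
    r1 = (-a + cmath.sqrt(disc)) / 2
    r2 = (-a - cmath.sqrt(disc)) / 2
    if disc != 0:  # distinct (possibly complex conjugate) roots
        return lambda x: c1 * cmath.exp(r1 * x) + c2 * cmath.exp(r2 * x)
    # double root: basis e^{rx} and x e^{rx}
    return lambda x: (c1 + c2 * x) * cmath.exp(r1 * x)

# y'' + y = 0 (D = -4 < 0): roots ±i; with c1 = c2 = 0.5 the solution is cos x.
y = homogeneous_solution(0, 1, 0.5, 0.5)
h, x0 = 1e-4, 0.7
ypp = (y(x0 + h) - 2 * y(x0) + y(x0 - h)) / h**2  # numerical y''
print(abs(ypp + y(x0)) < 1e-5)  # residual of y'' + y is ~0 -> True
```

With c1 = c2 = 1/2 the complex exponentials combine, via Euler's formula, into the real solution cos x, matching the real-form rewriting described above.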
Non-homogeneous equation with constant coefficients A non-homogeneous equation of order n with constant coefficients may be written y^(n) + a_1 y^(n−1) + ... + a_(n−1) y' + a_n y = f(x), where a_1, ..., a_n are real or complex numbers, f is a given function of x, and y is the unknown function (for sake of simplicity, "(x)" will be omitted in the following). There are several methods for solving such an equation. The best method depends on the nature of the function f that makes the equation non-homogeneous. If f is a linear combination of exponential and sinusoidal functions, then the exponential response formula may be used. If, more generally, f is a linear combination of functions of the form x^k e^(ax), x^k cos(ax), and x^k sin(ax), where k is a nonnegative integer, and a a constant (which need not be the same in each term), then the method of undetermined coefficients may be used. Still more general, the annihilator method applies when f satisfies a homogeneous linear differential equation, typically, a holonomic function. The most general method is the variation of constants, which is presented here. The general solution of the associated homogeneous equation is y = u_1 y_1 + ... + u_n y_n, where (y_1, ..., y_n) is a basis of the vector space of the solutions and u_1, ..., u_n are arbitrary constants. The method of variation of constants takes its name from the following idea. Instead of considering u_1, ..., u_n as constants, they can be considered as unknown functions that have to be determined for making y a solution of the non-homogeneous equation. For this purpose, one adds the constraints u_1' y_1 + ... + u_n' y_n = 0, u_1' y_1' + ... + u_n' y_n' = 0, ..., u_1' y_1^(n−2) + ... + u_n' y_n^(n−2) = 0, which imply (by product rule and induction) y^(i) = u_1 y_1^(i) + ... + u_n y_n^(i) for i = 1, ..., n − 1, and y^(n) = u_1 y_1^(n) + ... + u_n y_n^(n) + u_1' y_1^(n−1) + ... + u_n' y_n^(n−1). Replacing in the original equation y and its derivatives by these expressions, and using the fact that y_1, ..., y_n are solutions of the original homogeneous equation, one gets u_1' y_1^(n−1) + ... + u_n' y_n^(n−1) = f. This equation and the above ones with 0 as the right-hand side form a system of n linear equations in u_1', ..., u_n' whose coefficients are known functions (f, the y_i, and their derivatives). This system can be solved by any method of linear algebra. The computation of antiderivatives gives u_1, ..., u_n, and then y = u_1 y_1 + ... + u_n y_n.
As antiderivatives are defined up to the addition of a constant, one finds again that the general solution of the non-homogeneous equation is the sum of an arbitrary solution and the general solution of the associated homogeneous equation. First-order equation with variable coefficients The general form of a linear ordinary differential equation of order 1, after dividing out the coefficient of y', is: y' = f(x)y + g(x). If the equation is homogeneous, i.e. g(x) = 0, one may rewrite and integrate: y'/y = f, log y = k + F, where k is an arbitrary constant of integration and F is any antiderivative of f. Thus, the general solution of the homogeneous equation is y = c e^F, where c = e^k is an arbitrary constant. For the general non-homogeneous equation, it is useful to multiply both sides of the equation by the reciprocal e^(−F) of a solution of the homogeneous equation. This gives y' e^(−F) − y f e^(−F) = g e^(−F), and, as −f is the derivative of −F, the product rule allows rewriting the equation as (y e^(−F))' = g e^(−F). Thus, the general solution is y = c e^F + e^F ∫ g e^(−F) dx, where c is a constant of integration and ∫ g e^(−F) dx is any antiderivative of g e^(−F) (changing the antiderivative amounts to changing the constant of integration). Example To solve a concrete equation of this form, one proceeds as above: solve the associated homogeneous equation, multiply the original equation by the reciprocal of one of its solutions, integrate, and apply any initial condition to obtain the particular solution. System of linear differential equations A system of linear differential equations consists of several linear differential equations that involve several unknown functions. In general one restricts the study to systems such that the number of unknown functions equals the number of equations. An arbitrary linear ordinary differential equation and a system of such equations can be converted into a first order system of linear differential equations by adding variables for all but the highest order derivatives. That is, if y, y', ..., y^(k−1) appear in an equation, one may replace them by new unknown functions y_1, ..., y_k that must satisfy the equations y_i' = y_(i+1) for i = 1, ..., k − 1.
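The first-order integrating-factor recipe can be checked numerically on an illustrative equation chosen here for the example, not taken from the text: with f(x) = −2x and g(x) = x in y′ = f(x)y + g(x), an antiderivative is F(x) = −x², and the general solution is y = 1/2 + c·e^(−x²):

```python
import math

# Illustrative solution of y' = -2x*y + x (so f(x) = -2x, g(x) = x):
#   F(x) = -x^2,  y = c*e^{-x^2} + e^{-x^2} * ∫ x e^{x^2} dx = 1/2 + c*e^{-x^2}.
def y(x, c=3.0):
    return 0.5 + c * math.exp(-x * x)

# Verify y' = -2x*y + x numerically at a sample point via central differences.
h, x0 = 1e-6, 1.3
yprime = (y(x0 + h) - y(x0 - h)) / (2 * h)
print(abs(yprime + 2 * x0 * y(x0) - x0) < 1e-6)  # residual ~0 -> True
```

Changing the constant c sweeps out the full solution family, i.e. a particular solution plus the general solution c·e^(−x²) of the associated homogeneous equation.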
A linear system of the first order, which has n unknown functions and n differential equations, may normally be solved for the derivatives of the unknown functions. If it is not the case, this is a differential-algebraic system, and this is a different theory. Therefore, the systems that are considered here have the form y_i'(x) = b_i(x) + a_(i,1)(x) y_1 + ... + a_(i,n)(x) y_n, where the b_i and the a_(i,j) are functions of x. In matrix notation, this system may be written (omitting "(x)") y' = Ay + b. The solving method is similar to that of a single first order linear differential equation, but with complications stemming from noncommutativity of matrix multiplication. Let u' = Au be the homogeneous equation associated to the above matrix equation. Its solutions form a vector space of dimension n, and are therefore the columns of a square matrix of functions U(x), whose determinant is not the zero function. If n = 1, or A is a matrix of constants, or, more generally, if A commutes with its antiderivative B = ∫ A dx, then one may choose U equal to the exponential of B. In fact, in these cases, one has (d/dx) exp(B) = A exp(B). In the general case there is no closed-form solution for the homogeneous equation, and one has to use either a numerical method, or an approximation method such as Magnus expansion. Knowing the matrix U, the general solution of the non-homogeneous equation is y(x) = U(x) y_0 + U(x) ∫ U^(−1)(x) b(x) dx, where the column matrix y_0 is an arbitrary constant of integration. If initial conditions are given as y(x_0) = y_0, the solution that satisfies these initial conditions is y(x) = U(x) U^(−1)(x_0) y_0 + U(x) ∫ from x_0 to x of U^(−1)(t) b(t) dt. Higher order with variable coefficients A linear ordinary equation of order one with variable coefficients may be solved by quadrature, which means that the solutions may be expressed in terms of integrals. This is not the case for order at least two. This is the main result of Picard–Vessiot theory which was initiated by Émile Picard and Ernest Vessiot, and whose recent developments are called differential Galois theory.
The impossibility of solving by quadrature can be compared with the Abel–Ruffini theorem, which states that an algebraic equation of degree at least five cannot, in general, be solved by radicals. This analogy extends to the proof methods and motivates the denomination of differential Galois theory. Similarly to the algebraic case, the theory allows deciding which equations may be solved by quadrature, and if possible solving them. However, for both theories, the necessary computations are extremely difficult, even with the most powerful computers. Nevertheless, the case of order two with rational coefficients has been completely solved by Kovacic's algorithm. Cauchy–Euler equation Cauchy–Euler equations are examples of equations of any order, with variable coefficients, that can be solved explicitly. These are the equations of the form x^n y^(n)(x) + a_(n−1) x^(n−1) y^(n−1)(x) + ... + a_0 y(x) = 0, where a_0, ..., a_(n−1) are constant coefficients. Holonomic functions A holonomic function, also called a D-finite function, is a function that is a solution of a homogeneous linear differential equation with polynomial coefficients. Most functions that are commonly considered in mathematics are holonomic or quotients of holonomic functions. In fact, holonomic functions include polynomials, algebraic functions, logarithm, exponential function, sine, cosine, hyperbolic sine, hyperbolic cosine, inverse trigonometric and inverse hyperbolic functions, and many special functions such as Bessel functions and hypergeometric functions. Holonomic functions have several closure properties; in particular, sums, products, derivatives and integrals of holonomic functions are holonomic. Moreover, these closure properties are effective, in the sense that there are algorithms for computing the differential equation of the result of any of these operations, knowing the differential equations of the input. The usefulness of the concept of holonomic functions results from Zeilberger's theorem, which follows.
A holonomic sequence is a sequence of numbers that may be generated by a recurrence relation with polynomial coefficients. The coefficients of the Taylor series at a point of a holonomic function form a holonomic sequence. Conversely, if the sequence of the coefficients of a power series is holonomic, then the series defines a holonomic function (even if the radius of convergence is zero). There are efficient algorithms for both conversions, that is, for computing the recurrence relation from the differential equation, and vice versa. It follows that, if one represents (in a computer) holonomic functions by their defining differential equations and initial conditions, most calculus operations can be done automatically on these functions, such as differentiation, indefinite and definite integration, fast computation of Taylor series (thanks to the recurrence relation on their coefficients), evaluation to high precision with a certified bound of the approximation error, limits, localization of singularities, asymptotic behavior at infinity and near singularities, proof of identities, etc.
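The conversion from a differential equation to a recurrence on Taylor coefficients can be shown on the simplest holonomic function. This sketch (an added illustration, not from the original text) uses exp(x), which satisfies y' − y = 0; matching powers of x in the series gives the recurrence (n+1)c₍n+1₎ = cₙ:

```python
from fractions import Fraction

# exp(x) is holonomic: it satisfies y' - y = 0. Writing y = sum c_n x^n
# and comparing coefficients gives (n + 1) c_{n+1} = c_n with c_0 = 1,
# so the Taylor coefficients can be generated exactly from the recurrence,
# without computing any derivatives.
def exp_coeffs(n_terms):
    c = [Fraction(1)]
    for n in range(n_terms - 1):
        c.append(c[-1] / (n + 1))
    return c

print(exp_coeffs(6))  # 1, 1, 1/2, 1/6, 1/24, 1/120
```

This is exactly the representation described above: the differential equation plus initial conditions determine the whole sequence, and exact rational arithmetic keeps the coefficients certified.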
https://en.wikipedia.org/wiki/Dall%27s%20porpoise
Dall's porpoise
Dall's porpoise (Phocoenoides dalli) is a species of porpoise endemic to the North Pacific. It is the largest of porpoises and the only member of the genus Phocoenoides. The species is named after American naturalist W. H. Dall. Taxonomy Dall's porpoise is the only member of the genus Phocoenoides. The dalli and truei types were initially described as separate species in 1911, but later studies determined that the available evidence only supported the existence of one species. Currently, these two color morphs are recognized as distinct subspecies, Dall's porpoise (Phocoenoides dalli dalli) and True's porpoise (Phocoenoides dalli truei). Description Dall's porpoises can be easily distinguished from other porpoise and cetacean species within their range. They have a wide, robust body, a comparatively tiny head, and no pronounced beak. Their flippers are positioned at the front of the body, and a triangular dorsal fin sits mid-body. Patterns of coloration are highly variable, but Dall's porpoises are mostly black, have white to grey patches on the flank and belly, and frosting on the dorsal fin and trailing edge of the fluke. They are the largest porpoise species, growing up to in length and weighing . Sexual dimorphism is apparent in body size and shape, with mature males being larger, developing a deeper caudal peduncle, and having a dorsal fin that is significantly angled forward in comparison to a female's. Dall's porpoise calves have a greyish coloration with no frosting on flippers and flukes. Calves measure about 100 cm at birth. Growth rates are similar at first, but at about two years old males begin to grow faster than females. Externally, maturity is measured by length, which is usually attained at 3–5 years. Sizes vary between populations, but on average females reach a maximum size of 210 cm and males grow to about 220 cm, except in the southern Okhotsk Sea, where males can grow as long as 239 cm. 
Two color morphs have been identified: the dalli type and the truei type. The truei type, found only in the western Pacific, has a white belly patch that extends farther forward across the body than that of the dalli type. Distribution and habitat Dall's porpoises are limited to the North Pacific: in the east from California to the Bering Sea and Okhotsk Sea, and in the west down to the Sea of Japan. They have been sighted as far south as Scammon's Lagoon in Baja California when water temperature was unseasonably cold. Dall's porpoises generally prefer cold waters less than . Although mostly an offshore species, they do occur in deeper coastal waters, near submarine canyons or in fjords. Behavior Foraging Dall's porpoises are opportunistic, hunting a variety of surface and mid-water species. Common prey are mesopelagic fish, such as myctophids, and gonatid squid. Stomach content analyses have also found cases of crustacean consumption, including krill and shrimp, but this is abnormal and likely not an important part of their diet. A previous study revealed that tagged Dall's porpoises spent most of their time within 10 m of the surface, but they have been recorded diving to depths of up to 94 m. Social Dall's porpoises live in small, fluid groups of two to ten individuals, but aggregations of hundreds have been reported. They have a polygynous mating system in which males compete for females. During the mating season, a male will select a fertile female and guard her to ensure paternity. While guarding, males may sacrifice opportunities to forage on deep dives. Births usually take place in the summer after a gestation period of 11 to 12 months. Females generally give birth every 3 years, depending on their condition. Life expectancy is about 15 to 20 years, but much about their mortality is unknown. Dall's porpoises are prey to transient killer whales. 
They have, however, been observed in association with resident killer whales, engaging in apparent play behaviors with their calves, and swimming with them. One recognizable Dall's porpoise was observed travelling with the AB pod of resident orca from May through October 1984. Great white sharks are also a known predator, with at least one documented case in the eastern North Pacific Ocean. Movement Dall's porpoises are highly active swimmers. Rapid swimming at the surface creates a characteristic spray called a "rooster tail". They are commonly seen approaching boats to bowride, and they will also ride on the waves formed at the heads of larger swimming whales. Population status Abundance throughout their range is estimated to be over one million, but current population trends are unknown. Surveys along the coasts of California, Oregon, and Washington between 2008 and 2014 estimated a population abundance of 25,800. Alaska's population is estimated to be 83,400. Abundance in coastal British Columbia is nearly 5,000 individuals. Populations in the western North Pacific are divided by both subspecies and migratory patterns. Abundance of the offshore dalli type is about 162,000. It is estimated that there are about 173,000 dalli type that travel between Japan and the southern Okhotsk Sea. The dalli type that migrates to the Okhotsk Sea in the summer is estimated at 111,000. The population of truei-type porpoises migrating between Japan and the central Okhotsk Sea numbers about 178,000. Threats Fisheries bycatch Dall's porpoises are vulnerable to fisheries bycatch. Thousands were killed in commercial driftnet fisheries until the United Nations issued a moratorium in the 1990s. Before the moratorium went into effect, 8,000 Dall's porpoises are estimated to have been bycaught in one year alone (1989–1990). 
Smaller numbers, from several hundred to a few thousand, are estimated to have been bycaught in Japanese salmon fisheries in US waters and in the Bering Sea from 1981 to 1987. Driftnet and trawl fisheries still operate in some areas throughout their range, with particularly high levels of bycatch in Russian waters. Hunting The Dall's porpoise is still harvested for meat in Japan. The number of individuals taken each year increased following the 1980s moratorium on whaling of larger cetacean species. In 1988, more than 45,000 Dall's porpoises were harpooned. In 1990, after international attention was drawn to the issue, the Japanese government introduced a reduction on take. A quota of over 17,000 a year is in effect today (9,000 dalli type in the Japan-southern Okhotsk Sea population; 8,700 from the truei-type population that migrates into the central Okhotsk Sea) making it the largest direct hunt of any cetacean species in the world. The hunt of Dall's porpoises has been criticized by scientific committees which question the sustainability of large quotas on regional populations. Assessments are outdated for these targeted populations, and given the level of annual reported take, there may be regional declines in abundance. Pollution Environmental contaminants, including dichlorodiphenyldichloroethylene (DDE) and polychlorinated biphenyls (PCBs), are another threat to Dall's porpoises. Pollutants accumulate in the blubber layer, and in high concentrations can reduce hormone levels, affect the reproductive system, and result in calf death. Conservation status Dall's porpoise is listed as Least Concern on the IUCN Red List. Levels of both bycatch and commercial hunting are likely underestimates because they account only for reported data; however, there is no evidence for a range-wide decline of the species. 
The species is also listed on Appendix II of the Convention on the Conservation of Migratory Species of Wild Animals (CMS), and, like all other marine mammal species, is protected in the United States under the Marine Mammal Protection Act (MMPA).
https://en.wikipedia.org/wiki/Pontoon%20bridge
Pontoon bridge
A pontoon bridge (or ponton bridge), also known as a floating bridge, uses floats or shallow-draft boats to support a continuous deck for pedestrian and vehicle travel. The buoyancy of the supports limits the maximum load that they can carry. Most pontoon bridges are temporary and used in wartime and civil emergencies. There are permanent pontoon bridges in civilian use that can carry highway traffic. Permanent floating bridges are useful for sheltered water crossings if it is not considered economically feasible to suspend a bridge from anchored piers. Such bridges can require a section that is elevated or can be raised or removed to allow waterborne traffic to pass. Pontoon bridges have been in use since ancient times and have been used to great advantage in many battles throughout history, such as the Battle of Garigliano, the Battle of Oudenarde, the crossing of the Rhine during World War II, the Yom Kippur War, Operation Badr, the Iran–Iraq War's Operation Dawn 8, and most recently, in the 2022 Russian invasion of Ukraine, after crossings over the Dnipro River had been destroyed. Definition A pontoon bridge is a collection of specialized, shallow draft boats or floats, connected together to cross a river or canal, with a track or deck attached on top. The water buoyancy supports the boats, limiting the maximum load to the total and point buoyancy of the pontoons or boats. The supporting boats or floats can be open or closed, temporary or permanent in installation, and made of rubber, metal, wood, or concrete. The decking may be temporary or permanent, and constructed out of wood, modular metal, or asphalt or concrete over a metal frame. Etymology The spelling "ponton" in English dates from at least 1870. The use continued in references found in U.S. patents during the 1890s. It continued to be spelled in that fashion through World War II, when temporary floating bridges were used extensively throughout the European theatre. U.S. 
combat engineers commonly pronounced the word "ponton" rather than "pontoon" and U.S. military manuals spelled it using a single 'o'. The U.S. military differentiated between the bridge itself ("ponton") and the floats used to provide buoyancy ("pontoon"). The original word was derived from Old French ponton, from Latin ponto ("ferryboat"), from pons ("bridge"). Design When designing a pontoon bridge, the civil engineer must take into consideration Archimedes' principle: Each pontoon can support a load equal to the mass of the water that it displaces. This load includes the mass of the bridge and the pontoon itself. If the maximum load of a bridge section is exceeded, one or more pontoons become submerged. Flexible connections have to allow for one section of the bridge to be weighted down more heavily than the other parts. The roadway across the pontoons should be relatively light, so as not to limit the carrying capacity of the pontoons. The connection of the bridge to shore requires the design of approaches that are not too steep, protect the bank from erosion and provide for movements of the bridge during (tidal) changes of the water level. Floating bridges were historically constructed using wood. Pontoons were formed by simply lashing several barrels together, by rafts of timbers, or by using boats. Each bridge section consisted of one or more pontoons, which were maneuvered into position and then anchored underwater or on land. The pontoons were linked together using wooden stringers called balks. The balks were covered by a series of cross planks called chesses to form the road surface, and the chesses were secured with side guard rails. A floating bridge can be built in a series of sections, starting from an anchored point on the shore. Modern pontoon bridges usually use pre-fabricated floating structures. Most pontoon bridges are designed for temporary use, but bridges across water bodies with a constant water level can remain in place much longer. 
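The buoyancy rule described above (each pontoon supports a load equal to the mass of water it displaces, minus the mass of the bridge structure itself) can be sketched for a single box-shaped pontoon. The dimensions and masses below are made up for illustration and do not describe any real bridge:

```python
# Archimedes' principle for one box-shaped pontoon (illustrative numbers only):
# the maximum payload equals the mass of water displaced at the allowed draft,
# minus the mass of the pontoon and its share of the deck.
WATER_DENSITY = 1000.0  # kg/m^3, fresh water

def max_payload(length_m, width_m, depth_m, pontoon_mass_kg, deck_mass_kg,
                freeboard_m=0.15):
    # Keep some freeboard so the pontoon is never fully submerged.
    usable_depth = depth_m - freeboard_m
    displaced_mass = WATER_DENSITY * length_m * width_m * usable_depth
    return displaced_mass - pontoon_mass_kg - deck_mass_kg

# A hypothetical 6 m x 2 m x 1 m pontoon weighing 800 kg, carrying 400 kg
# of decking, can support about 9000 kg before losing its freeboard.
print(round(max_payload(6.0, 2.0, 1.0, 800.0, 400.0)))  # -> 9000
```

This also shows why exceeding the load limit submerges a pontoon: once the required displacement exceeds the pontoon's volume, no additional buoyancy is available.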
Hobart Bridge, a long pontoon bridge built in 1943 in Hobart, Tasmania, was only replaced after 21 years. The fourth Galata Bridge that spans the Golden Horn in Istanbul, Turkey was built in 1912 and operated for 80 years. Provisional and lightweight pontoon bridges are easily damaged. The bridge can be dislodged or inundated when the load limit of the bridge is exceeded. The bridge can be induced to sway or oscillate in a hazardous manner from the swell, from a storm, a flood or a fast moving load. Ice or floating objects (flotsam) can accumulate on the pontoons, increasing the drag from river current and potentially damaging the bridge. See below for floating pontoon failures and disasters. Historic uses Ancient China In ancient China, the Zhou dynasty Chinese text of the Shi Jing (Book of Odes) records that King Wen of Zhou was the first to create a pontoon bridge in the 11th century BC. However, the historian Joseph Needham has pointed out that in all likelihood the temporary pontoon bridge was invented during the 9th or 8th century BC in China, as this part was perhaps a later addition to the book (considering how the book had been edited up until the Han dynasty, 202 BC – 220 AD). Although earlier temporary pontoon bridges had been made in China, the first secure and permanent ones (linked with iron chains) came during the Qin dynasty (221–207 BC). The later Song dynasty (960–1279 AD) Chinese statesman Cao Cheng once wrote of early pontoon bridges in China (spelling of Chinese in Wade-Giles format): During the Eastern Han dynasty (25–220 AD), the Chinese created a very large pontoon bridge that spanned the width of the Yellow River. There was also the rebellion of Gongsun Shu in 33 AD, where a large pontoon bridge with fortified posts was constructed across the Yangtze River, eventually broken through with ramming ships by official Han troops under Commander Cen Peng. 
During the late Eastern Han into the Three Kingdoms period, during the Battle of Chibi in 208 AD, the Prime Minister Cao Cao once linked the majority of his fleet together with iron chains, which proved to be a fatal mistake once he was thwarted with a fire attack by Sun Quan's fleet. The armies of Emperor Taizu of Song had a large pontoon bridge built across the Yangtze River in 974 in order to secure supply lines during the Song dynasty's conquest of the Southern Tang. On October 22, 1420, Ghiyasu'd-Din Naqqah, the official diarist of the embassy sent by the Timurid ruler of Persia, Mirza Shahrukh (r. 1404–1447), to the Ming dynasty of China during the reign of the Yongle Emperor (r. 1402–1424), recorded his sight and travel over a large floating pontoon bridge at Lanzhou (constructed earlier in 1372) as he crossed the Yellow River on this day. He wrote that it was: Greco-Roman era The Greek writer Herodotus in his Histories, records several pontoon bridges. Emperor Caligula built a bridge at Baiae in 37 AD. For Emperor Darius I The Great of Persia (522–485 BC), the Greek Mandrocles of Samos once engineered a pontoon bridge that stretched across the Bosporus, linking Asia to Europe, so that Darius could pursue the fleeing Scythians as well as move his army into position in the Balkans to overwhelm Macedon. Other spectacular pontoon bridges were Xerxes' Pontoon Bridges across the Hellespont by Xerxes I in 480 BC to transport his huge army into Europe: According to John Hale's Lords of the Sea, to celebrate the onset of the Sicilian Expedition (415 - 413 B.C.), the Athenian general, Nicias, paid builders to engineer an extraordinary pontoon bridge composed of gilded and tapestried ships for a festival that drew Athenians and Ionians across the sea to the sanctuary of Apollo on Delos. On the occasion when Nicias was a sponsor, young Athenians paraded across the boats, singing as they walked, to give the armada a spectacular farewell. 
The late Roman writer Vegetius, in his work De Re Militari, wrote: The emperor Caligula is said to have ridden a horse across a pontoon bridge stretching two miles between Baiae and Puteoli while wearing the armour of Alexander the Great to mock a soothsayer who had claimed he had "no more chance of becoming emperor than of riding a horse across the Bay of Baiae". Caligula's construction of the bridge cost a massive sum of money and added to discontent with his rule. Middle Ages During the Middle Ages, pontoons were used alongside regular boats to span rivers during campaigns, or to link communities which lacked resources to build permanent bridges. The Hun army of Attila built a bridge across the Nišava during the siege of Naissus in 442 to bring heavy siege towers within range of the city. Sassanid forces crossed the Euphrates on a quickly built pontoon bridge during the siege of Kallinikos in 542. The Ostrogothic Kingdom constructed a fortified bridge across the Tiber during the siege of Rome in 545 to block Byzantine general Belisarius' relief flotillas to the city. The Avar Khaganate forced Syriac-Roman engineers to construct two pontoon bridges across the Sava during the siege of Sirmium in 580 to completely surround the city with their troops and siege works. Emperor Heraclius crossed the Bosporus on horseback on a large pontoon bridge in 638. The army of the Umayyad Caliphate built a pontoon bridge over the Bosporus in 717 during the siege of Constantinople (717–718). The Carolingian army of Charlemagne constructed a portable pontoon bridge of anchored boats bound together and used it to cross the Danube during campaigns against the Avar Khaganate in the 790s. Charlemagne's army built two fortified pontoon bridges across the Elbe in 789 during a campaign against the Slavic Veleti. 
The German army of Otto the Great employed three pontoon bridges, made from pre-fabricated materials, to rapidly cross the Recknitz river at the Battle on the Raxa in 955 and win decisively against the Slavic Obotrites. Tenth-Century German Ottonian capitularies demanded that royal fiscal estates maintain watertight, river-fordable wagons for purposes of war. The Danish Army of Cnut the Great completed a pontoon bridge across the Helge River during the Battle of Helgeå in 1026. Crusader forces constructed a pontoon bridge across the Orontes to expedite resupply during the siege of Antioch in December 1097. According to the chronicles, the earliest floating bridge across the Dnieper was built in 1115. It was located near Vyshhorod, Kiev. Bohemian troops under the command of Frederick I, Holy Roman Emperor crossed the Adige in 1157 on a pontoon bridge built in advance by the people of Verona on orders of the German Emperor. The French Royal Army of King Philip II of France constructed a pontoon bridge across the Seine to seize Les Andelys from the English at the siege of Château Gaillard in 1203. During the Fifth Crusade, the Crusaders built two pontoon bridges across the Nile at the siege of Damietta (1218–1219), including one supported by 38 boats. On 27 May 1234, Crusader troops crossed the river Ochtum in Germany on a pontoon bridge during the fight against the Stedingers. Imperial Mongol troops constructed a pontoon bridge at the Battle of Mohi in 1241 to outflank the Hungarian army. The French army of King Louis IX of France crossed the Charente on multiple pontoon bridges during the Battle of Taillebourg on 21 July 1242. Louis IX had a pontoon bridge built across the Nile to provide unimpeded access to troops and supplies in early March 1250 during the Seventh Crusade. A Florentine army erected a pontoon bridge across the Arno during the siege of Pisa in 1406. 
The English army of John Talbot, 1st Earl of Shrewsbury crossed the Oise on a pontoon bridge of portable leather vessels in 1441. Ottoman engineers built a pontoon bridge across the Golden Horn during the siege of Constantinople (1453), using over a thousand barrels. The bridge was strong enough to support carts. The Ottoman Army constructed a pontoon bridge during the siege of Rhodes (1480). Venetian pioneers built a floating bridge across the Adige at the Battle of Calliano (1487). Early modern period Before the Battle of Worcester, the final battle of the English Civil War, on 30 August 1651, Oliver Cromwell delayed the start of the battle to give time for two pontoon bridges to be constructed, one over the River Severn and the other over the River Teme, close to their confluence. This allowed Cromwell to move his troops west of the Severn during the action on 3 September 1651 and was crucial to the victory by his New Model Army. The Spanish Army constructed a pontoon bridge at the Battle of Río Bueno in 1654. However, the bridge broke apart, and the engagement ended in a sound defeat of the Spanish by local Mapuche-Huilliche forces. French general Jean Lannes's troops built a pontoon bridge to cross the Po river prior to the Battle of Montebello (1800). Napoleon's Grande Armée made extensive use of pontoon bridges at the battles of Aspern-Essling and Wagram under the supervision of General Henri Gatien Bertrand. General Jean Baptiste Eblé's engineers erected four pontoon bridges in a single night across the Dnieper during the Battle of Smolensk (1812). Working in cold water, Eblé's Dutch engineers constructed a 100-meter-long pontoon bridge during the Battle of Berezina to allow the Grande Armée to escape to safety. During the Peninsular War the British army transported "tin pontoons" that were lightweight and could be quickly turned into a floating bridge. 
Lt Col Charles Pasley of the Royal School of Military Engineering at Chatham, England developed a new form of pontoon which was adopted in 1817 by the British Army. Each pontoon was split into two halves, and the two pointed ends could be connected together in locations with tidal flow. Each half was enclosed, reducing the risk of swamping, and the sections bore multiple lashing points. The "Pasley pontoon" lasted until 1836, when it was replaced by the "Blanshard pontoon", which comprised tin cylinders 3 feet wide and 22 feet long, placed 11 feet apart, making the pontoon very buoyant. The pontoon was tested against the Pasley pontoon on the Medway. An alternative proposed by Charles Pasley comprised two copper canoes, each 2 feet 8 inches wide and 22 feet long and coming in two sections, which were fastened side by side to make a double canoe raft. Copper was used in preference to fast-corroding tin. Lashed at 10-foot centres, these were good for cavalry, infantry and light guns; lashed at 5-foot centres, heavy cannon could cross. The canoes could also be lashed together to form rafts. One cart pulled by two horses carried two half canoes and stores. A comparison of pontoons used by each nation's army shows that almost all were open boats coming in one, two or even three pieces, mainly wood, some with canvas and rubber protection. Belgium used an iron boat; the United States used cylinders split into three. In 1862 the Union forces commanded by Major General Ambrose Burnside were stuck on the wrong side of the Rappahannock River at the Battle of Fredericksburg for lack of the arrival of the pontoon train, resulting in severe losses. The report of this disaster resulted in Britain forming and training a Pontoon Troop of Engineers. During the American Civil War various forms of pontoon bridges were tried and discarded. 
Wooden pontoons and India rubber bag pontoons shaped like a torpedo proved impractical until the development of cotton-canvas covered pontoons, which required more maintenance but were lightweight and easier to work with and transport. From 1864 a lightweight design known as Cumberland Pontoons, a folding boat system, were widely used during the Atlanta Campaign to transport soldiers and artillery across rivers in the South. In 1872 at a military review before Queen Victoria, a pontoon bridge was thrown across the River Thames at Windsor, Berkshire, where the river was wide. The bridge, comprising 15 pontoons held by 14 anchors, was completed in 22 minutes and then used to move five battalions of troops across the river. It was removed in 34 minutes the next day. At Prairie du Chien, Wisconsin, the Pile-Pontoon Railroad Bridge was constructed in 1874 over the Mississippi River to carry a railroad track connecting that city with Marquette, Iowa. Because the river level could vary by as much as 22 feet, the track was laid on an adjustable platform above the pontoons. This unique structure remained in use until the railroad was abandoned in 1961, when it was removed. The British Blanshard Pontoon stayed in British use until the late 1870s, when it was replaced by the "Blood Pontoon". The Blood Pontoon returned to the open boat system, which enabled use as boats when not needed as pontoons. Side carrying handles helped transportation. The new pontoon proved strong enough to support loaded elephants and siege guns as well as military traction engines. Early 20th century The British Blood Pontoon MkII, which took the original and cut it into two halves, was still in use with the British Army in 1924. The First World War saw developments on "trestles" to form the link between a river bank and the pontoon bridge. Some infantry bridges in WW1 used any material available, including petrol cans as flotation devices. 
The Kapok Assault Bridge for infantry was developed for the British Army, using kapok fibre-filled canvas floats and timber foot walks. The United States created its own version. Folding Boat Equipment was developed in 1928 and went through several versions until it was used in WW2 to complement the Bailey Pontoon. It had a continuous canvas hinge and could fold flat for storage and transportation. When assembled it could carry 15 men, and with two boats and some additional toppings it could transport a 3-ton truck. Further upgrades during WW2 resulted in it moving to a Class 9 bridge. World War II Pontoon bridges were used extensively during World War II, mainly in the European Theater of Operations. The United States was the principal user, with Britain next. United States In the United States, combat engineers were responsible for bridge deployment and construction. These were formed principally into Engineer Combat Battalions, which had a wide range of duties beyond bridging, and specialized units, including Light Ponton Bridge Companies, Heavy Ponton Bridge Battalions, and Engineer Treadway Bridge Companies; any of these could be organically attached to infantry units or directly at the divisional, corps, or army level. American engineers built three types of floating bridges: M1938 infantry footbridges, M1938 ponton bridges, and M1940 treadway bridges, with numerous subvariants of each. These were designed to carry troops and vehicles of varying weight, using either an inflatable pneumatic ponton or a solid aluminum-alloy ponton bridge. Both types of bridges were supported by pontons (known today as "pontoons") fitted with a deck built of balk, which were square, hollow aluminum beams. American Light Ponton Bridge Company An Engineer Light Ponton Company consisted of three platoons: two bridge platoons, each equipped with one unit of M3 pneumatic bridge, and a lightly equipped platoon which had one unit of footbridge and equipment for ferrying. 
The bridge platoons were equipped with the M3 pneumatic bridge, which was constructed of heavy inflatable pneumatic floats and could handle up to ; this was suitable for all normal infantry division loads without reinforcement, and greater loads with reinforcement. American Heavy Ponton Bridge Battalion A Heavy Ponton Bridge Battalion was provided with the equipage required to provide stream crossing for heavy military vehicles that could not be supported by a light ponton bridge. The battalion had two lettered companies of two bridge platoons each. Each platoon was equipped with one unit of heavy ponton equipage. The battalion was an organic unit of army and higher echelons. The M1940 could carry up to . The M1 Treadway Bridge could support up to . The roadway, made of steel, could carry up to , while the center section made of thick plywood could carry up to . The wider, heavier tanks used the outside steel treadway while the narrower, lighter jeeps and trucks drove across the bridge with one wheel in the steel treadway and the other on the plywood. American Engineer Treadway Bridge Company An Engineer Treadway Bridge Company consisted of company headquarters and two bridge platoons. It was an organic unit of the armored force, and normally was attached to an Armored Engineer Battalion. Each bridge platoon transported one unit of steel treadway bridge equipage for construction of ferries and bridges in river-crossing operations of the armored division. Stream-crossing equipment included utility powerboats, pneumatic floats, and two units of steel treadway bridge equipment, each of which allowed the engineers to build a floating bridge about in length. Materials and equipment Pneumatic ponton The United States Army Corps of Engineers designed a self-contained bridge transportation and erection system. The Brockway model B666 6x6 truck chassis (also built under license by Corbitt and White) was used to transport both the bridge's steel and rubber components. 
A single Brockway truck could carry material for of bridge, including two pontons, two steel saddles that were attached to the pontons, and four treadway sections. Each treadway was long with high guardrails on either side of the wide track. The truck was mounted with a hydraulic crane that was used to unload the wide steel treadways. A custom-designed twin boom arm was attached to the rear of the truck bed and helped unroll and place the heavy inflatable rubber pontoons upon which the bridge was laid. The wheelbase chassis included a front winch and extra-large air-brake tanks that also served to inflate the rubber pontoons before they were placed in the water. A pneumatic float was made of rubberized fabric separated by bulkheads into 12 airtight compartments and inflated with air. The pneumatic float consisted of an outer perimeter tube, a floor, and a removable center tube. The capacity float was wide, long, deep. Solid ponton Solid aluminum-alloy pontons were used in place of pneumatic floats to support heavier bridges and loads. They were also pressed into service for lighter loads as needed. Treadway A treadway bridge was a multi-section, prefabricated floating steel bridge supported by pontoons carrying two metal tracks (or "tread ways") forming a roadway. Depending on its weight class, the treadway bridge was supported either by heavy inflatable pneumatic pontons or by aluminum-alloy half-pontons. The aluminum half-pontons were long overall, wide at the gunwales, and deep except at the bow, where the gunwale was raised. The gunwales were center-to-center. At freeboard, the half-ponton had a displacement of . The sides and bow of the half-ponton were gradually sloped, permitting two or more to be nested for transporting or storing. A treadway bridge could be built of floating spans or fixed spans. An M2 treadway bridge was designed to carry artillery, heavy duty trucks, and medium tanks up to . 
This could be of any length, and was what was used over major river obstacles such as the Rhine and Moselle. Doctrine stated that it would take hours to place a 362-foot section of M2 treadway during daylight and hours at night. Pergrin says that in practice 50 ft/hour of treadway construction was expected, which is a little slower than the speed specified by doctrine. By 1943, combat engineers faced the need for bridges to bear weights of 35 tons or more. To increase weight-bearing capacity, they used bigger floats to add buoyancy. This overcame the capacity limitation, but the larger floats were more difficult to transport to the crossing site and required more and larger trucks in the divisional and corps trains.

Britain
Donald Bailey invented the Bailey bridge, which was made up of modular, pre-fabricated steel trusses capable of carrying up to over spans of up to . While typically constructed point-to-point over piers, they could be supported by pontoons as well. The Bailey bridge was used for the first time in 1942. The first version put into service was a Bailey Pontoon and Raft with a single-single Bailey bay supported on two pontoons. A key feature of the Bailey Pontoon was the use of a single span from the bank to the bridge level, which eliminated the need for bridge trestles. For lighter vehicle bridges the Folding Boat Equipment could be used, and the Kapok Assault Bridge was available for infantry. Another British wartime invention was an open-sea type of pontoon: the Mulberry harbours, floated across the English Channel to provide harbours for the June 1944 Allied invasion of Normandy. Their components were known by code names; the dock piers were code-named "Whale". These piers were the floating roadways that connected the "Spud" pier heads to the land. These pier heads, or landing wharves, at which ships were unloaded, each consisted of a pontoon with four legs that rested on the sea bed to anchor the pontoon, yet allowed it to float up and down freely with the tide.
"Beetles" were pontoons that supported the "Whale" piers. They were moored in position using wires attached to "Kite" anchors, which were also designed by Allan Beckett. These anchors had a high holding power, as was demonstrated in the D+13 Normandy storm, when the British Mulberry survived most of the storm damage, whereas the American Mulberry, which had only 20% of its Kite anchors deployed, was destroyed.

Modern military uses
Pontoon bridges were extensively used by both armies and civilians throughout the latter half of the 20th century. From the post-war period into the early 1980s the U.S. Army and its NATO and other allies employed three main types of pontoon bridge/raft. The M4 bridge featured a lightweight aluminum balk deck supported by rigid aluminum-hull pontoons. The M4T6 bridge used the same aluminum balk deck as the M4, but supported instead by inflatable rubber pontoons. The Class 60 bridge consisted of a more robust steel girder and grid deck supported by inflatable rubber pontoons. All three pontoon bridge types were cumbersome to transport and deploy, and slow to assemble, encouraging the development of a floating bridge that was easier to transport, deploy and assemble.

Amphibious float bridges
Several alternatives featured a self-propelled amphibious integrated transporter, floating pontoon, and bridge deck section that could be delivered and assembled in the water under its own power, linking as many units as required to bridge a gap or form a raft ferry. An early example was the Engin de Franchissement de l'Avant (EFA), an amphibious forward-crossing apparatus conceived by French General Jean Gillois in 1955. The system consisted of a wheeled amphibious truck equipped with inflatable outboard flotation sponsons and a rotating vehicle bridge deck section. The system was developed by the West German firm Eisenwerke Kaiserslautern (EWK) and entered production by the French-German consortium Pontesa.
The EFA system was first deployed by the French Army in 1965, and subsequently by the West German army, the British Army, and on a very limited basis by the U.S. Army, where it was referred to as Amphibious River Crossing Equipment (ARCE). Production ended in 1973. The EFA was used in combat during the Yom Kippur War of 1973, when the Egyptian Army used the equipment to cross the Suez Canal in their attack on Israeli forces. EWK further developed the EFA system into the M2 "Alligator" amphibious bridging vehicle, equipped with fold-out aluminum flotation pontoons, which was produced from 1967 to 1970 and sold to the West German, British and Singapore militaries. The M2 was followed by the revised M3 version, entering service in 1996 with Germany, Britain, Taiwan and Singapore. The M3 was used in combat by British forces during the Iraq War. More recently, Turkey has developed a similar system in the FNSS Samur wheeled amphibious assault bridge, while the Russian PMM-2 and Chinese GZM003 armoured amphibious assault bridges ride on tracks. A similar amphibious system, the Mobile Floating Assault Bridge-Ferry (MFAB-F), was developed in the U.S. by Chrysler between 1959 and 1962. As with the French EFA, the MFAB-F consisted of an amphibious truck with a rotating bridge deck section, but there were no outboard flotation sponsons. The MFAB-F was first deployed by the U.S. Army in 1964 and later by Belgium. An improved version was produced by FMC from 1970 to 1976. The MFAB-F remained in service into the early 1980s before being replaced by a simpler continuous pontoon or "ribbon bridge" system.

Ribbon float bridges
In the early Cold War period the Soviet Red Army began development of a new kind of continuous pontoon bridge made up of short folding sections or bays that could be transported and deployed rapidly, automatically unfold in the water, and quickly be assembled into a floating bridge of variable length.
Known as the PMP Folding Float Bridge, it was first deployed in 1962 and subsequently adopted by Warsaw Pact countries and other states employing Soviet military equipment. The PMP proved its viability in combat when it was used by Egyptian forces to cross the Suez Canal in 1973. Operation Badr, which opened the Yom Kippur War between Egypt and Israel, involved the erection of at least 10 pontoon bridges to cross the canal. Beginning in 1969 the U.S. Army Mobility Equipment Research and Development Command (MERADCOM) reverse-engineered the Russian PMP design to develop the improved float bridge (IFB), later known as the standard ribbon bridge (SRB). The IFB/SRB was type-classified in 1972 and first deployed in service in 1976. It was very similar to the PMP but was constructed of lightweight aluminum instead of heavier steel. In 1977 West Germany decided to adopt the SRB with some modifications and improvements, and it entered service in 1979 as the Faltschwimmbrücke, or Foldable Floating Bridge (FSB). Work on designing an improved version of the U.S. SRB incorporating features of the German FSB began in the 1990s, with first deployment by the U.S. Army in the early 2000s as the improved ribbon bridge (IRB). In addition to the U.S. and Germany, the IFB/SRB/FSB/IRB has been adopted by the armed forces of Australia, Brazil, Canada, the Netherlands, Portugal, South Korea and Sweden, among others.

Yugoslav wars
During the Yugoslav wars of the 1990s, the Maslenica Bridge was destroyed, and a short pontoon bridge was built by Croatian civilian and military authorities in July 1993 over a narrow sea outlet in the town of Maslenica, after the territory was retaken from Serbian Krajina. Between 1993 and 1995 the pontoon served as one of the two operational land links toward Dalmatia and Croat- and Bosnian Muslim-held areas of Bosnia-Herzegovina that did not go through Serb-held territory. In 1995 the 502nd and 38th Engineer Companies of the U.S.
Army's 130th Engineer Brigade, and the 586th Engineer Company from Fort Benning, Georgia, operating as part of IFOR, assembled a standard ribbon bridge under adverse weather conditions across the Sava River near Županja (between Croatia and Bosnia), with a total length of . It was dismantled in 1996.

Iran–Iraq war
Numerous pontoon bridges were constructed by the Iranians and Iraqis to cross the various rivers and marshes alongside the Iraqi border. Notable instances include one constructed over the Karkheh river to ambush Iraqi armor during Operation Nasr, and another used to cross certain marshes during Operation Dawn 8. Pontoon bridges were prominent in the war because they allowed tanks and transports to cross rivers.

Invasion of Iraq
During the 2003 invasion of Iraq by American and British forces, the United States Army's 299th Multi-Role Bridge Company (USAR) deployed a standard ribbon bridge across the Euphrates river at Objective Peach near Al Musayib on the night of 3 April 2003. The 185-meter bridge was built to support retrograde operations because of the heavy-armor traffic crossing a partially destroyed adjacent highway span. "By dawn on 4 April 2003, the 299th Engineer Company had emplaced a 185-meter long Assault Float Bridge—the first time in history that a bridge of its type was built in combat." That same night, the 299th also constructed a single-story Medium Girder Bridge to patch the damage done to the highway span. The 299th was part of the U.S. Army's 3rd Infantry Division as it crossed the border into Iraq on 20 March 2003.

Syrian civil war
In February 2018, pro-regime fighters used a pontoon bridge to cross the Euphrates river during the Battle of Khasham.

Eastern Ukraine offensive
In May 2022, Ukrainian forces repelled an attempted Russian military crossing of the Donets river, west of Sievierodonetsk in Luhansk Oblast, during the Eastern Ukraine offensive.
At least one Russian battalion tactical group was reportedly destroyed, as well as the pontoon bridge deployed in the crossing.

Permanent pontoon bridges in civilian use
This design is also used for permanent bridges carrying highway traffic, pedestrian traffic and bicycles, with sections for boats to ply the waterway being crossed. Seattle in the United States and Kelowna in British Columbia, Canada, are two places with permanent pontoon bridges: the William R. Bennett Bridge in British Columbia, and three in Seattle: the Lacey V. Murrow Memorial Bridge, the Evergreen Point Floating Bridge, and the Homer M. Hadley Memorial Bridge, the last of which will become the first operational floating railway bridge upon the opening of the final phase of the 2 Line in 2025. There are five pontoon bridges across the Suez Canal. Nordhordland Bridge is a combined cable-stayed and pontoon highway bridge in Norway.

Failures and disasters
The Saint Isaac's Bridge across the Neva River in Saint Petersburg suffered two disasters: one natural, a gale in 1733, and then a fire in 1916. Floating bridges can be vulnerable to inclement weather, especially strong winds. The U.S. state of Washington is home to some of the longest permanent floating bridges in the world, and two of these failed in part due to strong winds. In 1979, the longest floating bridge crossing salt water, the Hood Canal Bridge, was subjected to winds of , gusting up to . Waves of battered the sides of the bridge, and within a few hours the western of the structure had sunk. It has since been rebuilt. In 1990, the 1940 Lacey V. Murrow Memorial Bridge was closed for renovations. Specifically, the sidewalks were being removed to widen the traffic lanes to the standards mandated by the Interstate Highway System. Engineers realized that jackhammers could not be employed to remove the sidewalks without risking compromising the structural integrity of the entire bridge.
As such, a process called hydrodemolition was employed, in which powerful jets of water are used to blast away concrete bit by bit. The water used in this process was temporarily stored in the hollow chambers of the bridge's pontoons to prevent it from contaminating the lake. During a week of rain and strong winds, the watertight doors were not closed, and the pontoons filled with storm water in addition to the water from the hydrodemolition. The inundated bridge broke apart and sank. The bridge was rebuilt in 1993. A lesser failure occurs if the anchors or the connections between pontoon bridge segments fail, which may happen because of overloading, extreme weather or flood: the bridge disintegrates and parts of it start to float away. Many such cases are known. When the Lacey V. Murrow Memorial Bridge sank, it severed the anchor cables of the bridge parallel to it. A powerful tugboat pulled that bridge against the wind during a subsequent storm and prevented further damage.
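The failure mode described above comes down to Archimedes' principle: a pontoon sinks once the weight of its structure plus any floodwater exceeds the weight of water it can displace. The check can be sketched in a few lines; all dimensions and masses below are invented for illustration and are not figures from this article.

```python
# Hedged sketch: Archimedes check for a flooded bridge pontoon.
# Dimensions and masses are illustrative assumptions only.

RHO_WATER = 1000.0  # kg/m^3, fresh water

def pontoon_floats(length_m, width_m, depth_m, hull_mass_kg, flood_water_m3):
    """True if the pontoon still floats with the given volume of floodwater."""
    # Maximum water the hull can displace before going under:
    max_displacement_kg = RHO_WATER * length_m * width_m * depth_m
    total_mass_kg = hull_mass_kg + RHO_WATER * flood_water_m3
    return total_mass_kg < max_displacement_kg

# A dry pontoon floats with plenty of reserve buoyancy...
print(pontoon_floats(110.0, 18.0, 4.5, 4.5e6, 0.0))     # True
# ...but storing hydrodemolition water plus storm water can use up
# that reserve and sink it.
print(pontoon_floats(110.0, 18.0, 4.5, 4.5e6, 5000.0))  # False
```

The margin between total mass and maximum displacement is the reserve buoyancy; the Lacey V. Murrow failure is the case where stored and storm water together consumed it.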
Skeletal muscle
Skeletal muscle (commonly referred to as muscle) is one of the three types of vertebrate muscle tissue, the others being cardiac muscle and smooth muscle. They are part of the voluntary muscular system and typically are attached by tendons to bones of a skeleton. The skeletal muscle cells are much longer than in the other types of muscle tissue, and are also known as muscle fibers. The tissue of a skeletal muscle is striated – having a striped appearance due to the arrangement of the sarcomeres. A skeletal muscle contains multiple fascicles – bundles of muscle fibers. Each individual fiber and each muscle is surrounded by a type of connective tissue layer of fascia. Muscle fibers are formed from the fusion of developmental myoblasts in a process known as myogenesis resulting in long multinucleated cells. In these cells, the nuclei, termed myonuclei, are located along the inside of the cell membrane. Muscle fibers also have multiple mitochondria to meet energy needs. Muscle fibers are in turn composed of myofibrils. The myofibrils are composed of actin and myosin filaments called myofilaments, repeated in units called sarcomeres, which are the basic functional, contractile units of the muscle fiber necessary for muscle contraction. Muscles are predominantly powered by the oxidation of fats and carbohydrates, but anaerobic chemical reactions are also used, particularly by fast twitch fibers. These chemical reactions produce adenosine triphosphate (ATP) molecules that are used to power the movement of the myosin heads. Skeletal muscle comprises about 35% of the body of humans by weight. The functions of skeletal muscle include producing movement, maintaining body posture, controlling body temperature, and stabilizing joints. Skeletal muscle is also an endocrine organ. Under different physiological conditions, subsets of 654 different proteins as well as lipids, amino acids, metabolites and small RNAs are found in the secretome of skeletal muscles. 
Skeletal muscles are substantially composed of multinucleated contractile muscle fibers (myocytes). However, considerable numbers of resident and infiltrating mononuclear cells are also present in skeletal muscles. In terms of volume, myocytes make up the great majority of skeletal muscle. Skeletal muscle myocytes are usually very large, being about 2–3 cm long and 100 μm in diameter. By comparison, the mononuclear cells in muscles are much smaller. Some of the mononuclear cells in muscles are endothelial cells (which are about 50–70 μm long, 10–30 μm wide and 0.1–10 μm thick), macrophages (21 μm in diameter) and neutrophils (12–15 μm in diameter). However, in terms of nuclei present in skeletal muscle, myocyte nuclei may be only half of the nuclei present, while nuclei from resident and infiltrating mononuclear cells make up the other half. Considerable research on skeletal muscle is focused on the muscle fiber cells, the myocytes, as discussed in detail in the first sections below. However, recently, interest has also focused on the different types of mononuclear cells of skeletal muscle, as well as on the endocrine functions of muscle, described subsequently below.

Structure

Gross anatomy
There are more than 600 skeletal muscles in the human body, making up around 40% of body weight in healthy young adults. In Western populations, men have on average around 61% more skeletal muscle than women. Most muscles occur in bilaterally-placed pairs to serve both sides of the body. Muscles are often classed as groups of muscles that work together to carry out an action. In the torso there are several major muscle groups, including the pectoral and abdominal muscles; intrinsic and extrinsic muscles are subdivisions of muscle groups in the hand, foot, tongue, and extraocular muscles of the eye. Muscles are also grouped into compartments, including four groups in the arm and four groups in the leg.
Apart from the contractile part of a muscle consisting of its fibers, a muscle contains a non-contractile part of dense fibrous connective tissue that makes up the tendon at each end. The tendons attach the muscles to bones to give skeletal movement. The length of a muscle includes the tendons. Connective tissue is present in all muscles as deep fascia. Deep fascia specialises within muscles to enclose each muscle fiber as endomysium, each muscle fascicle as perimysium, and each individual muscle as epimysium. Together these layers are called mysia. Deep fascia also separates the groups of muscles into muscle compartments. Two types of sensory receptors found in muscles are muscle spindles and Golgi tendon organs. Muscle spindles are stretch receptors located in the muscle belly. Golgi tendon organs are proprioceptors located at the myotendinous junction that inform of a muscle's tension.

Skeletal muscle cells
Skeletal muscle cells are the individual contractile cells within a muscle, and are often termed muscle fibers. A single muscle such as the biceps in a young adult male contains around 253,000 muscle fibers. Skeletal muscle fibers are multinucleated, with the nuclei often referred to as myonuclei. This multinucleation arises during myogenesis, with the fusion of myoblasts each contributing a nucleus. Fusion depends on muscle-specific fusogen proteins called myomaker and myomerger. The skeletal muscle cell needs many nuclei to produce the large amounts of proteins and enzymes required for its normal functioning. A single muscle fiber can contain from hundreds to thousands of nuclei. A muscle fiber in the human biceps, for example, with a length of 10 cm can have as many as 3,000 nuclei. Unlike in a non-muscle cell, where the nucleus is centrally positioned, the myonucleus is elongated and located close to the sarcolemma.
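The figures above imply a rough spacing between nuclei along the fiber, which is the quantity behind the idea that each nucleus supports the cytoplasm around it. A back-of-envelope calculation from the numbers in the text:

```python
# Numbers from the text: a 10 cm biceps fiber with up to 3,000 nuclei.
fiber_length_um = 10 * 10_000  # 10 cm expressed in micrometres
nuclei = 3000

# Average length of fiber served per myonucleus.
spacing_um = fiber_length_um / nuclei
print(round(spacing_um))  # ~33 um of fiber length per nucleus
```

So even at the maximum nuclear count, each nucleus serves a stretch of fiber hundreds of times longer than a typical cell nucleus is wide.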
The myonuclei are quite uniformly arranged along the fiber, with each nucleus having its own myonuclear domain where it is responsible for supporting the volume of cytoplasm in that particular section of the myofiber. A group of muscle stem cells known as myosatellite cells, or satellite cells, are found between the basement membrane and the sarcolemma of muscle fibers. These cells are normally quiescent but can be activated by exercise or pathology to provide additional myonuclei for muscle growth or repair.

Attachment to tendons
Muscles attach to tendons in a complex interface region known as the musculotendinous junction, also known as the myotendinous junction, an area specialised for the primary transmission of force. At the muscle-tendon interface, force is transmitted from the sarcomeres in the muscle cells to the tendon. Muscles and tendons develop in close association, and after their joining at the myotendinous junction they constitute a dynamic unit for the transmission of force from muscle contraction to the skeletal system.

Arrangement of muscle fibers
Muscle architecture refers to the arrangement of muscle fibers relative to the axis of force generation, which runs from a muscle's origin to its insertion. The usual arrangements are types of parallel, and types of pennate muscle. In parallel muscles, the fascicles run parallel to the axis of force generation, but the fascicles can vary in their relationship to one another and to their tendons. These variations are seen in fusiform, strap, and convergent muscles. A convergent muscle has a triangular or fan shape, as the fibers converge at its insertion and are fanned out broadly at the origin. A less common example of a parallel muscle is a circular muscle such as the orbicularis oculi, in which the fibers are longitudinally arranged but create a circle from origin to insertion. These different architectures can cause variations in the tension that a muscle can create between its tendons.
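These architectural effects can be put in rough numbers. Maximum isometric force is commonly estimated as the physiological cross-sectional area (PCSA, muscle volume divided by fiber length) times a specific tension, with pennate fibers further scaled by the cosine of their pennation angle. The sketch below uses invented dimensions and a commonly cited ballpark specific tension; it is an illustration, not data from this article.

```python
import math

# Ballpark specific tension for mammalian muscle; an assumption.
SPECIFIC_TENSION_N_PER_CM2 = 22.5

def max_force_n(volume_cm3, fiber_length_cm, pennation_deg=0.0):
    """Estimate max isometric force as PCSA x specific tension x cos(pennation)."""
    pcsa_cm2 = volume_cm3 / fiber_length_cm  # physiological cross-sectional area
    return pcsa_cm2 * SPECIFIC_TENSION_N_PER_CM2 * math.cos(math.radians(pennation_deg))

# Same 200 cm^3 of muscle: long parallel fibers vs short fibers at 20 degrees.
parallel = max_force_n(200.0, 20.0)      # PCSA = 10 cm^2, no angle loss
pennate = max_force_n(200.0, 5.0, 20.0)  # PCSA = 40 cm^2, cos(20 deg) ~ 0.94
print(round(parallel), round(pennate))   # fiber packing more than offsets the angle
```

The same numbers also show the trade-off: the pennate fibers, being a quarter of the length, have proportionally less excursion and a lower whole-muscle shortening speed.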
The fibers in pennate muscles run at an angle to the axis of force generation. This pennation angle reduces the effective force of any individual fiber, as it is effectively pulling off-axis. However, because of this angle, more fibers can be packed into the same muscle volume, increasing the physiological cross-sectional area (PCSA). This effect is known as fiber packing, and in terms of force generation, it more than overcomes the efficiency loss of the off-axis orientation. The trade-off comes in overall speed of muscle shortening and in the total excursion. Overall muscle shortening speed is reduced compared to fiber shortening speed, as is the total distance of shortening. All of these effects scale with pennation angle; greater angles lead to greater force due to increased fiber packing and PCSA, but with greater losses in shortening speed and excursion. Types of pennate muscle are unipennate, bipennate, and multipennate. A unipennate muscle has similarly angled fibers that are on one side of a tendon. A bipennate muscle has fibers on two sides of a tendon. Multipennate muscles have fibers that are oriented at multiple angles along the force-generating axis, and this is the most general and common architecture.

Muscle fiber growth
Muscle fibers grow when exercised and shrink when not in use, because exercise stimulates an increase in myofibrils, which increases the overall size of muscle cells. Well-exercised muscles can not only add more size but can also develop more mitochondria, myoglobin, glycogen and a higher density of capillaries. However, muscle cells cannot divide to produce new cells, and as a result there are fewer muscle cells in an adult than in a newborn.

Muscle naming
There are a number of terms used in the naming of muscles, including those relating to size, shape, action, location, orientation, and number of heads.
By size: brevis means short; longus means long; longissimus means longest; magnus means large; major means larger; maximus means largest; minor means smaller, and minimus means smallest; latissimus means widest, and vastus means huge. These terms are often used after the particular muscle, such as gluteus maximus and gluteus minimus.

By relative shape: deltoid means triangular; quadratus means having four sides; rhomboideus means having a rhomboid shape; teres means round or cylindrical, and trapezius means having a trapezoid shape; serratus means saw-toothed; orbicularis means circular; pectinate means comblike; piriformis means pear-shaped; platys means flat, and gracilis means slender. Examples are the pronator teres and the pronator quadratus.

By action: abductor, moving away from the midline; adductor, moving towards the midline; depressor, moving downwards; elevator, moving upwards; flexor, producing movement that decreases an angle; extensor, producing movement that increases an angle or straightens; pronator, moving to face down; supinator, moving to face upwards; internal rotator, rotating towards the body; external rotator, rotating away from the body; sphincter, decreasing a size, and tensor, giving tension to. Fixator muscles serve to fix a joint in a given position by stabilizing the prime mover while other joints are moving.

By number of heads: biceps, two; triceps, three, and quadriceps, four.

By location: named after the main structure nearby, such as the temporal muscle (temporalis) near the temporal bone. Also supra-, above; infra-, below, and sub-, under.

By fascicle orientation: relative to the midline, rectus means parallel to the midline; transverse means perpendicular to the midline, and oblique means diagonal to the midline. Relative to the axis of the generation of force there are types of parallel and types of pennate muscles.

Fiber types
Broadly there are two types of muscle fiber: Type I, which is slow, and Type II, which is fast.
Type II has two divisions, type IIA (oxidative) and type IIX (glycolytic), giving three main fiber types. These fibers have relatively distinct metabolic, contractile, and motor unit properties. The table below differentiates these types of properties. These properties, while partly dependent on the properties of individual fibers, tend to be relevant and measured at the level of the motor unit rather than the individual fiber. Slow oxidative (type I) fibers contract relatively slowly and use aerobic respiration to produce ATP. Fast oxidative (type IIA) fibers have fast contractions and primarily use aerobic respiration, but because they may switch to anaerobic respiration (glycolysis), they can fatigue more quickly than slow oxidative fibers. Fast glycolytic (type IIX) fibers have fast contractions and primarily use anaerobic glycolysis. These FG fibers fatigue more quickly than the others. Most skeletal muscles in a human contain all three types, although in varying proportions.

Fiber color
Traditionally, fibers were categorized depending on their varying color, which is a reflection of myoglobin content. Type I fibers appear red due to their high levels of myoglobin. Red muscle fibers tend to have more mitochondria and greater local capillary density. These fibers are more suited for endurance and are slow to fatigue because they use oxidative metabolism to generate ATP (adenosine triphosphate). Less oxidative type II fibers are white due to relatively low myoglobin and a reliance on glycolytic enzymes.

Twitch speed
Fibers can also be classified on their twitch capabilities into fast and slow twitch. These traits largely, but not completely, overlap the classifications based on color, ATPase, or MHC (myosin heavy chain). Some authors define a fast twitch fiber as one in which the myosin can split ATP very quickly. These mainly include the ATPase type II and MHC type II fibers.
However, fast twitch fibers also demonstrate a higher capability for electrochemical transmission of action potentials and a rapid level of calcium release and uptake by the sarcoplasmic reticulum. The fast twitch fibers rely on a well-developed, anaerobic, short-term, glycolytic system for energy transfer and can contract and develop tension at 2–3 times the rate of slow twitch fibers. Fast twitch muscles are much better at generating short bursts of strength or speed than slow muscles, and so fatigue more quickly. The slow twitch fibers generate energy for ATP re-synthesis by means of a long-term system of aerobic energy transfer. These mainly include the ATPase type I and MHC type I fibers. They tend to have a low activity level of ATPase and a slower speed of contraction, with a less well developed glycolytic capacity. Fibers that become slow-twitch develop greater numbers of mitochondria and capillaries, making them better for prolonged work.

Type distribution
Individual muscles tend to be a mixture of various fiber types, but their proportions vary depending on the actions of that muscle. For instance, in humans, the quadriceps muscles contain ~52% type I fibers, while the soleus is ~80% type I. The orbicularis oculi muscle of the eye is only ~15% type I. Motor units within the muscle, however, have minimal variation between the fibers of that unit. It is this fact that makes the size principle of motor unit recruitment viable. The total number of skeletal muscle fibers has traditionally been thought not to change. It is believed there are no sex or age differences in fiber distribution; however, proportions of fiber types vary considerably from muscle to muscle and person to person. Among different species there is much variation in the proportions of muscle fiber types. Sedentary men and women (as well as young children) have 45% type II and 55% type I fibers. People at the higher end of any sport tend to demonstrate patterns of fiber distribution, e.g.
endurance athletes show a higher level of type I fibers. Sprint athletes, on the other hand, require large numbers of type IIX fibers. Middle-distance event athletes show approximately equal distribution of the two types. This is also often the case for power athletes such as throwers and jumpers. It has been suggested that various types of exercise can induce changes in the fibers of a skeletal muscle. It is thought that by performing endurance-type events for a sustained period of time, some of the type IIX fibers transform into type IIA fibers. However, there is no consensus on the subject. It may well be that the type IIX fibers show enhancements of oxidative capacity after high-intensity endurance training which bring them to a level at which they can perform oxidative metabolism as effectively as the slow twitch fibers of untrained subjects. This would be brought about by an increase in mitochondrial size and number and the associated changes, not a change in fiber type.

Fiber typing methods
There are numerous methods employed for fiber typing, and confusion between the methods is common among non-experts. Two commonly confused methods are histochemical staining for myosin ATPase activity and immunohistochemical staining for myosin heavy chain (MHC) type. Myosin ATPase activity is commonly, and correctly, referred to as simply "fiber type", and results from the direct assaying of ATPase activity under various conditions (e.g. pH). Myosin heavy chain staining is most accurately referred to as "MHC fiber type", e.g. "MHC IIa fibers", and results from determination of different MHC isoforms. These methods are closely related physiologically, as the MHC type is the primary determinant of ATPase activity. However, neither of these typing methods is directly metabolic in nature; they do not directly address the oxidative or glycolytic capacity of the fiber.
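One compact way to keep the two staining methods straight is to map each human ATPase fiber type to the MHC isoform(s) it predominantly expresses. The mapping below is a simplified sketch for orientation (hybrid types listed dominant-isoform-first; rarer intermediates omitted), not a full classification table:

```python
# Simplified human ATPase-type -> MHC isoform mapping. Hybrid fibers
# co-express two isoforms, listed dominant-first. Note that fibers
# historically stained "IIB" in humans actually express MHC IIx.
ATPASE_TO_MHC = {
    "I":    ("MHC I",),
    "IIA":  ("MHC IIa",),
    "IIAX": ("MHC IIa", "MHC IIx"),  # hybrid
    "IIXA": ("MHC IIx", "MHC IIa"),  # hybrid
    "IIX":  ("MHC IIx",),
}

def is_hybrid(atpase_type):
    """A fiber counts as hybrid if it expresses more than one MHC isoform."""
    return len(ATPASE_TO_MHC[atpase_type]) > 1

print(ATPASE_TO_MHC["IIX"])  # ('MHC IIx',)
print(is_hybrid("IIAX"))     # True
```

Generic "type II" in the ATPase sense is then simply every key other than "I", hybrids included.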
When "type I" or "type II" fibers are referred to generically, this most accurately refers to the sum of numerical fiber types (I vs. II) as assessed by myosin ATPase activity staining (e.g. "type II" fibers refers to type IIA + type IIAX + type IIXA ... etc.). Below is a table showing the relationship between these two methods, limited to fiber types found in humans. Note that subtype capitalization differs between fiber typing and MHC typing, and that some ATPase types actually contain multiple MHC types. Also, a subtype B or b is not expressed in humans by either method. Early researchers believed humans to express a MHC IIb, which led to the ATPase classification of IIB. However, later research showed that the human MHC IIb was in fact IIx, indicating that the IIB is better named IIX. IIb is expressed in other mammals, so is still accurately seen (along with IIB) in the literature. Non-human fiber types include true IIb fibers, IIc, IId, etc. Further fiber typing methods are less formally delineated and exist on more of a spectrum. They tend to be focused more on metabolic and functional capacities (i.e., oxidative vs. glycolytic, fast vs. slow contraction time). As noted above, fiber typing by ATPase or MHC does not directly measure or dictate these parameters. However, many of the various methods are mechanistically linked, while others are correlated in vivo. For instance, ATPase fiber type is related to contraction speed, because high ATPase activity allows faster crossbridge cycling. While ATPase activity is only one component of contraction speed, Type I fibers are "slow", in part, because they have low speeds of ATPase activity in comparison to Type II fibers. However, measuring contraction speed is not the same as ATPase fiber typing.

Muscle fiber type evolution
Almost all multicellular animals depend on muscles to move.
Generally, muscular systems of most multicellular animals comprise both slow-twitch and fast-twitch muscle fibers, though the proportions of each fiber type can vary across organisms and environments. The ability to shift phenotypic fiber type proportions through training and in response to the environment has served organisms well when placed in changing environments requiring either short explosive movements (a higher fast-twitch proportion) or long durations of movement (a higher slow-twitch proportion) to survive. Bodybuilding has shown that muscle mass and force production can change in a matter of months. Some examples of this variation are described below.
Examples of muscle fiber variation in different animals
Invertebrates
The American lobster, Homarus americanus, has three fiber types: fast-twitch, slow-twitch, and slow-tonic fibers. A slow-tonic fiber is a slow-twitch fiber that can sustain longer (tonic) contractions. In lobsters, muscles in different body parts vary in their fiber type proportions based on the purpose of the muscle group.
Vertebrates
In the early development of vertebrate embryos, growth and formation of muscle happen in successive waves or phases of myogenesis. The myosin heavy chain isotype is a major determinant of the specific fiber type. In zebrafish embryos, the first muscle fibers to form are the slow-twitch fibers. These cells will undergo migration from their original location to form a monolayer of slow-twitch muscle fibers. These muscle fibers undergo further differentiation as the embryo matures.
Reptiles
In larger animals, different muscle groups will increasingly require different fiber type proportions within muscles for different purposes.
Turtles, such as Trachemys scripta elegans, have complementary muscles within the neck that show a potential inverse trend of fiber type percentages (one muscle has a high percentage of fast-twitch fibers, while the complementary muscle has a higher percentage of slow-twitch fibers). The complementary muscles of turtles had similar percentages of fiber types.
Mammals
Chimpanzee muscles are composed of 67% fast-twitch fibers and have a maximum dynamic force and power output 1.35 times higher than human muscles of similar size. Among mammals, there is a predominance of type II fibers utilizing glycolytic metabolism. Because of this higher proportion of fast-twitch fibers compared to humans, chimpanzees outperform humans in power-related tests. Humans, however, perform better at aerobic exercise requiring large metabolic costs, such as walking (bipedalism).
Genetic conservation versus functional conservation
Across species, certain gene sequences have been preserved but do not always have the same functional purpose. Within the zebrafish embryo, the Prdm1 gene down-regulates the formation of new slow-twitch fibers through direct and indirect mechanisms such as Sox6 (indirect). In mice, the Prdm1 gene is present but does not control slow-muscle genes through Sox6.
Plasticity
In addition to having a genetic basis, the composition of muscle fiber types is flexible and can vary with a number of different environmental factors. This plasticity can, arguably, be the strongest evolutionary advantage among organisms with muscle. In fish, different fiber types are expressed at different water temperatures: cold temperatures require more efficient muscle metabolism, and fatigue resistance is important, while in more tropical environments fast, powerful movements (from higher fast-twitch proportions) may prove more beneficial in the long run. In rodents such as rats, the transitory nature of their muscle is highly prevalent.
They have a high percentage of hybrid muscle fibers, with up to 60% in fast-to-slow transforming muscle. Environmental influences such as diet, exercise, and lifestyle have a pivotal role in the proportions of fiber types in humans. Aerobic exercise shifts the proportions toward slow-twitch fibers, while explosive powerlifting and sprinting transition fibers toward fast-twitch. In animals, "exercise training" looks more like the need for long durations of movement or short explosive movements to escape predators or catch prey.
Microanatomy
Skeletal muscle exhibits a distinctive banding pattern when viewed under the microscope due to the arrangement of two contractile proteins – myosin and actin – that are two of the myofilaments in the myofibrils. Myosin forms the thick filaments, actin forms the thin filaments, and these are arranged in repeating units called sarcomeres. The interaction of both proteins results in muscle contraction. The sarcomere is attached to other organelles such as the mitochondria by intermediate filaments in the cytoskeleton. The costamere attaches the sarcomere to the sarcolemma. Every organelle and macromolecule of a muscle fiber is arranged to support its functions. The cell membrane is called the sarcolemma, and the cytoplasm is known as the sarcoplasm. In the sarcoplasm are the myofibrils, long protein bundles about one micrometer in diameter. Pressed against the inside of the sarcolemma are the unusual flattened myonuclei. Between the myofibrils are the mitochondria. While the muscle fiber does not have smooth endoplasmic cisternae, it contains sarcoplasmic reticulum. The sarcoplasmic reticulum surrounds the myofibrils and holds a reserve of the calcium ions needed to cause a muscle contraction. Periodically, it has dilated end sacs known as terminal cisternae, which cross the muscle fiber from one side to the other.
In between two terminal cisternae is a tubular infolding called a transverse tubule (T tubule). T tubules are the pathways for action potentials to signal the sarcoplasmic reticulum to release calcium, causing a muscle contraction. Together, two terminal cisternae and a transverse tubule form a triad.
Development
All muscles are derived from paraxial mesoderm. During embryonic development, in the process of somitogenesis, the paraxial mesoderm is divided along the embryo's length to form somites, corresponding to the segmentation of the body most obviously seen in the vertebral column. Each somite has three divisions: sclerotome (which forms vertebrae), dermatome (which forms skin), and myotome (which forms muscle). The myotome is divided into two sections, the epimere and hypomere, which form epaxial and hypaxial muscles, respectively. The only epaxial muscles in humans are the erector spinae and small vertebral muscles, which are innervated by the dorsal rami of the spinal nerves. All other muscles, including those of the limbs, are hypaxial and innervated by the ventral rami of the spinal nerves. During development, myoblasts (muscle progenitor cells) either remain in the somite to form muscles associated with the vertebral column or migrate out into the body to form all other muscles. Myoblast migration is preceded by the formation of connective tissue frameworks, usually formed from the somatic lateral plate mesoderm. Myoblasts follow chemical signals to the appropriate locations, where they fuse into elongated multinucleated skeletal muscle cells. Between the tenth and the eighteenth weeks of gestation, all muscle cells have fast myosin heavy chains; two myotube types become distinguished in the developing fetus – both expressing fast chains, but one also expressing slow chains. Between 10 and 40 percent of the fibers express the slow myosin chain.
Fiber types are established during embryonic development and are remodelled later in the adult by neural and hormonal influences. The population of satellite cells present underneath the basal lamina is necessary for the postnatal development of muscle cells.
Function
The primary function of muscle is contraction. Following contraction, skeletal muscle functions as an endocrine organ by secreting myokines – a wide range of cytokines and other peptides that act as signalling molecules. Myokines in turn are believed to mediate the health benefits of exercise. Myokines are secreted into the bloodstream after muscle contraction. Interleukin 6 (IL-6) is the most studied myokine; other muscle contraction-induced myokines include BDNF, FGF21, and SPARC. Muscle also functions to produce body heat: muscle contraction is responsible for producing 85% of the body's heat. This heat is a by-product of muscular activity and is mostly wasted. As a homeostatic response to extreme cold, muscles are signaled to trigger the contractions of shivering in order to generate heat.
Contraction
Contraction is achieved by the muscle's structural unit, the muscle fiber, and by its functional unit, the motor unit. Muscle fibers are excitable cells stimulated by motor neurons. The motor unit consists of a motor neuron and the many fibers that it makes contact with. A single muscle is stimulated by many motor units. Muscle fibers are subject to depolarization by the neurotransmitter acetylcholine, released by the motor neurons at the neuromuscular junctions. In addition to the actin and myosin myofilaments in the myofibrils that make up the contractile sarcomeres, there are two other important regulatory proteins – troponin and tropomyosin – that make muscle contraction possible. These proteins are associated with actin and cooperate to prevent its interaction with myosin.
Once a cell is sufficiently stimulated, the cell's sarcoplasmic reticulum releases ionic calcium (Ca2+), which then interacts with the regulatory protein troponin. Calcium-bound troponin undergoes a conformational change that leads to the movement of tropomyosin, subsequently exposing the myosin-binding sites on actin. This allows for myosin and actin ATP-dependent cross-bridge cycling and shortening of the muscle.
Excitation–contraction coupling
Excitation–contraction coupling is the process by which a muscular action potential in the muscle fiber causes the myofibrils to contract. This process relies on a direct coupling between the sarcoplasmic reticulum calcium release channel RYR1 (ryanodine receptor 1) and voltage-gated L-type calcium channels (identified as dihydropyridine receptors, DHPRs). DHPRs are located on the sarcolemma (which includes the surface sarcolemma and the transverse tubules), while the RyRs reside across the SR membrane. The close apposition of a transverse tubule and two SR regions containing RyRs is described as a triad and is predominantly where excitation–contraction coupling takes place. Excitation–contraction coupling occurs when depolarization of a skeletal muscle cell results in a muscle action potential, which spreads across the cell surface and into the muscle fiber's network of T-tubules, thereby depolarizing the inner portion of the muscle fiber. Depolarization of the inner portions activates dihydropyridine receptors in the terminal cisternae, which are close to ryanodine receptors in the adjacent sarcoplasmic reticulum. The activated dihydropyridine receptors physically interact with ryanodine receptors to activate them via foot processes (involving conformational changes that allosterically activate the ryanodine receptors). As the ryanodine receptors open, Ca2+ is released from the sarcoplasmic reticulum into the local junctional space and diffuses into the bulk cytoplasm to cause a calcium spark.
The sarcoplasmic reticulum has a large calcium-buffering capacity, partially due to a calcium-binding protein called calsequestrin. The near-synchronous activation of thousands of calcium sparks by the action potential causes a cell-wide increase in calcium, giving rise to the upstroke of the calcium transient. The Ca2+ released into the cytosol binds to troponin C on the actin filaments to allow crossbridge cycling, producing force and, in some situations, motion. The sarco/endoplasmic reticulum calcium-ATPase (SERCA) actively pumps Ca2+ back into the sarcoplasmic reticulum. As Ca2+ declines back to resting levels, the force declines and relaxation occurs.
Muscle movement
The efferent leg of the peripheral nervous system is responsible for conveying commands to the muscles and glands, and is ultimately responsible for voluntary movement. Nerves move muscles in response to voluntary and autonomic (involuntary) signals from the brain. Deep muscles, superficial muscles, muscles of the face, and internal muscles all correspond with dedicated regions in the primary motor cortex of the brain, directly anterior to the central sulcus that divides the frontal and parietal lobes. In addition, muscles react to reflexive nerve stimuli that do not always send signals all the way to the brain. In this case, the signal from the afferent fiber does not reach the brain, but produces the reflexive movement by direct connections with the efferent nerves in the spine. However, the majority of muscle activity is volitional, and the result of complex interactions between various areas of the brain. Nerves that control skeletal muscles in mammals correspond with neuron groups along the primary motor cortex of the brain's cerebral cortex. Commands are routed through the basal ganglia and are modified by input from the cerebellum before being relayed through the pyramidal tract to the spinal cord and from there to the motor end plate at the muscles.
Along the way, feedback, such as that of the extrapyramidal system, contributes signals to influence muscle tone and response. Deeper muscles, such as those involved in posture, are often controlled from nuclei in the brain stem and basal ganglia.
Proprioception
In skeletal muscles, muscle spindles convey information about the degree of muscle length and stretch to the central nervous system to assist in maintaining posture and joint position. The sense of where our bodies are in space is called proprioception, the perception of body awareness – the "unconscious" awareness of where the various regions of the body are located at any one time. Several areas in the brain coordinate movement and position with the feedback information gained from proprioception. The cerebellum and red nucleus in particular continuously sample position against movement and make minor corrections to assure smooth motion.
Energy consumption
Muscular activity accounts for much of the body's energy consumption. All muscle cells produce adenosine triphosphate (ATP) molecules, which are used to power the movement of the myosin heads. Muscles have a short-term store of energy in the form of creatine phosphate, which is generated from ATP and can regenerate ATP when needed with creatine kinase. Muscles also keep a storage form of glucose in the form of glycogen. Glycogen can be rapidly converted to glucose when energy is required for sustained, powerful contractions. Within the voluntary skeletal muscles, the glucose molecule can be metabolized anaerobically in a process called glycolysis, which produces two ATP and two lactic acid molecules in the process (in aerobic conditions, lactate is not formed; instead pyruvate is formed and transmitted through the citric acid cycle). Muscle cells also contain globules of fat, which are used for energy during aerobic exercise.
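The anaerobic bookkeeping described above – two ATP and two lactic acid molecules per glucose – can be tallied in a short sketch:

```python
# Bookkeeping sketch of anaerobic glycolysis as described above:
# each glucose molecule yields 2 ATP and 2 lactic acid molecules.
# (Under aerobic conditions pyruvate is formed instead of lactate.)
def anaerobic_glycolysis(glucose_molecules: int) -> dict:
    return {
        "ATP": 2 * glucose_molecules,
        "lactate": 2 * glucose_molecules,
    }

print(anaerobic_glycolysis(10))  # {'ATP': 20, 'lactate': 20}
```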
The aerobic energy systems take longer to produce ATP and reach peak efficiency, and require many more biochemical steps, but produce significantly more ATP than anaerobic glycolysis. Cardiac muscle, on the other hand, can readily consume any of the three macronutrients (protein, glucose, and fat) aerobically without a 'warm up' period and always extracts the maximum ATP yield from any molecule involved. The heart, liver, and red blood cells will also consume lactic acid produced and excreted by skeletal muscles during exercise. Skeletal muscle uses more calories than other organs. At rest it consumes 54.4 kJ/kg (13.0 kcal/kg) per day. This is larger than adipose tissue (fat) at 18.8 kJ/kg (4.5 kcal/kg), and bone at 9.6 kJ/kg (2.3 kcal/kg).
Efficiency
The efficiency of human muscle has been measured (in the context of rowing and cycling) at 18% to 26%. The efficiency is defined as the ratio of mechanical work output to the total metabolic cost, as can be calculated from oxygen consumption. This low efficiency is the result of about 40% efficiency of generating ATP from food energy, losses in converting energy from ATP into mechanical work inside the muscle, and mechanical losses inside the body. The latter two losses are dependent on the type of exercise and the type of muscle fibers being used (fast-twitch or slow-twitch). For an overall efficiency of 20 percent, one watt of mechanical power is equivalent to 4.3 kcal per hour. For example, one manufacturer of rowing equipment calibrates its rowing ergometer to count burned calories as equal to four times the actual mechanical work, plus 300 kcal per hour; this amounts to about 20 percent efficiency at 250 watts of mechanical output. The mechanical energy output of a cyclic contraction can depend upon many factors, including activation timing, muscle strain trajectory, and rates of force rise and decay. These can be synthesized experimentally using work loop analysis.
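The ergometer arithmetic above can be checked directly: counting burned calories as four times the mechanical work plus 300 kcal/h implies roughly 20% overall efficiency at 250 W, and at 20% efficiency one mechanical watt corresponds to about 4.3 kcal of food energy per hour.

```python
# Check of the rowing-ergometer calibration described above.
KCAL_PER_HOUR_PER_WATT = 3600 / 4184  # 1 W sustained for 1 h, in kcal

def ergometer_efficiency(mech_watts: float) -> float:
    """Mechanical output divided by the calories the ergometer counts."""
    mech_kcal_h = mech_watts * KCAL_PER_HOUR_PER_WATT
    metabolic_kcal_h = 4 * mech_kcal_h + 300  # manufacturer's calibration
    return mech_kcal_h / metabolic_kcal_h

print(round(ergometer_efficiency(250), 3))  # 0.185 -> about 20% efficiency

# At 20% overall efficiency, 1 W mechanical costs 5 W metabolic:
print(round((1 / 0.20) * KCAL_PER_HOUR_PER_WATT, 1))  # 4.3 kcal per hour
```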
Muscle strength
Muscle strength is a result of three overlapping factors: physiological strength (muscle size, cross-sectional area, available crossbridging, responses to training), neurological strength (how strong or weak is the signal that tells the muscle to contract), and mechanical strength (the muscle's force angle on the lever, moment arm length, joint capabilities). Vertebrate muscle typically produces a characteristic maximum force per square centimeter of muscle cross-sectional area when isometric and at optimal length. Some invertebrate muscles, such as those in crab claws, have much longer sarcomeres than vertebrates, resulting in many more sites for actin and myosin to bind and thus much greater force per square centimeter, at the cost of much slower speed. The force generated by a contraction can be measured non-invasively using either mechanomyography or phonomyography, measured in vivo using tendon strain (if a prominent tendon is present), or measured directly using more invasive methods. The strength of any given muscle, in terms of force exerted on the skeleton, depends upon length, shortening speed, cross-sectional area, pennation, sarcomere length, myosin isoforms, and neural activation of motor units. Significant reductions in muscle strength can indicate underlying pathology, with the chart at right used as a guide. The maximum holding time for a contracted muscle depends on its supply of energy and, according to Rohmert's law, decays exponentially from the beginning of exertion.
The "strongest" human muscle
Since three factors affect muscular strength simultaneously and muscles never work individually, it is misleading to compare strength in individual muscles and state that one is the "strongest". But below are several muscles whose strength is noteworthy for different reasons. In ordinary parlance, muscular "strength" usually refers to the ability to exert a force on an external object – for example, lifting a weight.
By this definition, the masseter or jaw muscle is the strongest. The 1992 Guinness Book of Records records the achievement of a bite strength sustained for 2 seconds. What distinguishes the masseter is not anything special about the muscle itself, but its advantage in working against a much shorter lever arm than other muscles. If "strength" refers to the force exerted by the muscle itself, e.g., on the place where it inserts into a bone, then the strongest muscles are those with the largest cross-sectional area. This is because the tension exerted by an individual skeletal muscle fiber does not vary much. Each fiber can exert a force on the order of 0.3 micronewton. By this definition, the strongest muscle of the body is usually said to be the quadriceps femoris or the gluteus maximus. Because muscle strength is determined by cross-sectional area, a shorter muscle will be stronger "pound for pound" (i.e., by weight) than a longer muscle of the same cross-sectional area. The myometrial layer of the uterus may be the strongest muscle by weight in the female body. At the time when an infant is delivered, the entire uterus weighs about 1.1 kg (40 oz). During childbirth, the uterus exerts 100 to 400 N (25 to 100 lbf) of downward force with each contraction. The external muscles of the eye are conspicuously large and strong in relation to the small size and weight of the eyeball. It is frequently said that they are "the strongest muscles for the job they have to do" and are sometimes claimed to be "100 times stronger than they need to be." However, eye movements (particularly the saccades used in facial scanning and reading) do require high-speed movements, and eye muscles are exercised nightly during rapid eye movement sleep. The statement that "the tongue is the strongest muscle in the body" appears frequently in lists of surprising facts, but it is difficult to find any definition of "strength" that would make this statement true. The tongue consists of eight muscles, not one.
Force generation
Muscle force is proportional to physiological cross-sectional area (PCSA), and muscle velocity is proportional to muscle fiber length. The torque around a joint, however, is determined by a number of biomechanical parameters, including the distance between muscle insertions and pivot points, muscle size, and architectural gear ratio. Muscles are normally arranged in opposition so that when one group of muscles contracts, another group relaxes or lengthens. Antagonism in the transmission of nerve impulses to the muscles means that it is impossible to fully stimulate the contraction of two antagonistic muscles at any one time. During ballistic motions such as throwing, the antagonist muscles act to 'brake' the agonist muscles throughout the contraction, particularly at the end of the motion. In the example of throwing, the chest and front of the shoulder (anterior deltoid) contract to pull the arm forward, while the muscles in the back and rear of the shoulder (posterior deltoid) also contract and undergo eccentric contraction to slow the motion down to avoid injury. Part of the training process is learning to relax the antagonist muscles to increase the force input of the chest and anterior shoulder. Contracting muscles produce vibration and sound. Slow-twitch fibers produce 10 to 30 contractions per second (10 to 30 Hz). Fast-twitch fibers produce 30 to 70 contractions per second (30 to 70 Hz). The vibration can be witnessed and felt by highly tensing one's muscles, as when making a firm fist. The sound can be heard by pressing a highly tensed muscle against the ear; again, a firm fist is a good example. The sound is usually described as a rumbling sound. Some individuals can voluntarily produce this rumbling sound by contracting the tensor tympani muscle of the middle ear. The rumbling sound can also be heard when the neck or jaw muscles are highly tensed.
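The proportionalities at the start of this section – force scaling with PCSA, and joint torque being that force times the moment arm – can be sketched as follows. The specific tension constant below is a placeholder assumption for illustration, not a value given in the text.

```python
# Minimal sketch: muscle force scales with physiological cross-sectional
# area (PCSA); joint torque is that force times the moment arm.
ASSUMED_SPECIFIC_TENSION = 30.0  # N per cm^2 -- placeholder assumption

def muscle_force(pcsa_cm2: float) -> float:
    """Force proportional to PCSA (isometric, optimal length)."""
    return pcsa_cm2 * ASSUMED_SPECIFIC_TENSION

def joint_torque(pcsa_cm2: float, moment_arm_m: float) -> float:
    """Torque = muscle force x distance from insertion to pivot (N*m)."""
    return muscle_force(pcsa_cm2) * moment_arm_m

print(joint_torque(10.0, 0.05))  # 15.0 N*m for a 10 cm^2 muscle, 5 cm arm
```

A longer moment arm multiplies the same muscle force into more joint torque, which is why the masseter's short lever arm (discussed earlier) is the key to its bite strength.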
Signal transduction pathways
Skeletal muscle fiber-type phenotype in adult animals is regulated by several independent signaling pathways. These include pathways involved with the Ras/mitogen-activated protein kinase (MAPK) pathway, calcineurin, calcium/calmodulin-dependent protein kinase IV, and the peroxisome proliferator-activated receptor γ coactivator 1 (PGC-1). The Ras/MAPK signaling pathway links the motor neurons and signaling systems, coupling excitation and transcription regulation to promote the nerve-dependent induction of the slow program in regenerating muscle. Calcineurin, a Ca2+/calmodulin-activated phosphatase implicated in nerve activity-dependent fiber-type specification in skeletal muscle, directly controls the phosphorylation state of the transcription factor NFAT, allowing for its translocation to the nucleus and leading to the activation of slow-type muscle proteins in cooperation with myocyte enhancer factor 2 (MEF2) proteins and other regulatory proteins. Ca2+/calmodulin-dependent protein kinase activity is also upregulated by slow motor neuron activity, possibly because it amplifies the slow-type calcineurin-generated responses by promoting MEF2 transactivator functions and enhancing oxidative capacity through stimulation of mitochondrial biogenesis. Contraction-induced changes in intracellular calcium or reactive oxygen species provide signals to diverse pathways that include the MAPKs, calcineurin, and calcium/calmodulin-dependent protein kinase IV to activate transcription factors that regulate gene expression and enzyme activity in skeletal muscle. PGC1-α (PPARGC1A), a transcriptional coactivator of nuclear receptors important to the regulation of a number of mitochondrial genes involved in oxidative metabolism, directly interacts with MEF2 to synergistically activate selective slow-twitch (ST) muscle genes and also serves as a target for calcineurin signaling.
A peroxisome proliferator-activated receptor δ (PPARδ)-mediated transcriptional pathway is involved in the regulation of the skeletal muscle fiber phenotype. Mice that harbor an activated form of PPARδ display an "endurance" phenotype, with a coordinated increase in oxidative enzymes and mitochondrial biogenesis and an increased proportion of ST fibers. Thus—through functional genomics—calcineurin, calmodulin-dependent kinase, PGC-1α, and activated PPARδ form the basis of a signaling network that controls skeletal muscle fiber-type transformation and metabolic profiles that protect against insulin resistance and obesity. The transition from aerobic to anaerobic metabolism during intense work requires that several systems are rapidly activated to ensure a constant supply of ATP for the working muscles. These include a switch from fat-based to carbohydrate-based fuels, a redistribution of blood flow from nonworking to exercising muscles, and the removal of several of the by-products of anaerobic metabolism, such as carbon dioxide and lactic acid. Some of these responses are governed by transcriptional control of the fast twitch (FT) glycolytic phenotype. For example, skeletal muscle reprogramming from an ST glycolytic phenotype to an FT glycolytic phenotype involves the Six1/Eya1 complex, composed of members of the Six protein family. Moreover, the hypoxia-inducible factor 1-α (HIF1A) has been identified as a master regulator for the expression of genes involved in essential hypoxic responses that maintain ATP levels in cells. Ablation of HIF-1α in skeletal muscle was associated with an increase in the activity of rate-limiting enzymes of the mitochondria, indicating that the citric acid cycle and increased fatty acid oxidation may be compensating for decreased flow through the glycolytic pathway in these animals. 
However, hypoxia-mediated HIF-1α responses are also linked to the regulation of mitochondrial dysfunction through the formation of excessive reactive oxygen species in mitochondria. Other pathways also influence adult muscle character. For example, physical force inside a muscle fiber may release the transcription factor serum response factor from the structural protein titin, leading to altered muscle growth.
Exercise
Physical exercise is often recommended as a means of improving motor skills, fitness, muscle and bone strength, and joint function. Exercise has several effects upon muscles, connective tissue, bone, and the nerves that stimulate the muscles. One such effect is muscle hypertrophy, an increase in the size of muscle due to an increase in the number of muscle fibers or the cross-sectional area of myofibrils. Muscle changes depend on the type of exercise used. Generally, there are two types of exercise regimes, aerobic and anaerobic. Aerobic exercise (e.g. marathons) involves activities of low intensity but long duration, during which the muscles used are below their maximal contraction strength. Aerobic activities rely on aerobic respiration (i.e. the citric acid cycle and the electron transport chain) for metabolic energy by consuming fat, protein, carbohydrates, and oxygen. Muscles involved in aerobic exercises contain a higher percentage of Type I (or slow-twitch) muscle fibers, which primarily contain the mitochondrial and oxidation enzymes associated with aerobic respiration. By contrast, anaerobic exercise is associated with activities of high intensity but short duration, such as sprinting or weight lifting. Anaerobic activities predominantly use Type II (fast-twitch) muscle fibers, which rely on glucogenesis for energy during anaerobic exercise. During anaerobic exercise, type II fibers consume little oxygen, protein, and fat, produce large amounts of lactic acid, and are fatigable.
Many exercises are partially aerobic and anaerobic; examples include soccer and rock climbing. The presence of lactic acid has an inhibitory effect on ATP generation within the muscle; it can even stop ATP production if the intracellular concentration becomes too high. However, endurance training mitigates the buildup of lactic acid through increased capillarization and myoglobin. This increases the ability to move waste products, like lactic acid, out of the muscles so that muscle function is not impaired. Once moved out of muscles, lactic acid can be used by other muscles or body tissues as a source of energy, or transported to the liver where it is converted back to pyruvate. In addition to increasing the level of lactic acid, strenuous exercise results in the loss of potassium ions in muscle. This may facilitate the recovery of muscle function by protecting against fatigue. Delayed onset muscle soreness is pain or discomfort that may be felt one to three days after exercising and generally subsides two to three days later. Once thought to be caused by lactic acid build-up, a more recent theory is that it is caused by tiny tears in the muscle fibers caused by eccentric contraction or unaccustomed training levels. Since lactic acid disperses fairly rapidly, it could not explain pain experienced days after exercise.
Clinical significance
Muscle disease
Diseases of skeletal muscle are termed myopathies, while diseases of nerves are called neuropathies. Both can affect muscle function or cause muscle pain, and fall under the umbrella of neuromuscular disease. The cause of many myopathies is attributed to mutations in the various associated muscle proteins. Some inflammatory myopathies include polymyositis and inclusion body myositis. Neuromuscular diseases affect the muscles and their nervous control. In general, problems with nervous control can cause spasticity or paralysis, depending on the location and nature of the problem.
A number of movement disorders are caused by neurological disorders such as Parkinson's disease and Huntington's disease, where there is central nervous system dysfunction. Symptoms of muscle diseases may include weakness, spasticity, myoclonus, and myalgia. Diagnostic procedures that may reveal muscular disorders include testing creatine kinase levels in the blood and electromyography (measuring electrical activity in muscles). In some cases, muscle biopsy may be done to identify a myopathy, as well as genetic testing to identify DNA abnormalities associated with specific myopathies and dystrophies. A non-invasive elastography technique that measures muscle noise is undergoing experimentation to provide a way of monitoring neuromuscular disease. The sound produced by a muscle comes from the shortening of actomyosin filaments along the axis of the muscle. During contraction, the muscle shortens along its length and expands across its width, producing vibrations at the surface.
Hypertrophy
Independent of strength and performance measures, muscles can be induced to grow larger by a number of factors, including hormone signaling, developmental factors, strength training, and disease. Contrary to popular belief, the number of muscle fibers cannot be increased through exercise. Instead, muscles grow larger through a combination of muscle cell growth, as new protein filaments are added, along with additional mass provided by undifferentiated satellite cells alongside the existing muscle cells. Biological factors such as age and hormone levels can affect muscle hypertrophy. During puberty in males, hypertrophy occurs at an accelerated rate as the levels of growth-stimulating hormones produced by the body increase. Natural hypertrophy normally stops at full growth in the late teens. As testosterone is one of the body's major growth hormones, on average, men find hypertrophy much easier to achieve than women.
Taking additional testosterone or other anabolic steroids will increase muscular hypertrophy. Muscular, spinal and neural factors all affect muscle building. Sometimes a person may notice an increase in strength in a given muscle even though only its opposite has been subject to exercise, such as when a bodybuilder finds her left biceps stronger after completing a regimen focusing only on the right biceps. This phenomenon is called cross education.
Atrophy
Every day between one and two percent of muscle is broken down and rebuilt. Inactivity, malnutrition, disease, and aging can increase the breakdown, leading to muscle atrophy or sarcopenia. Sarcopenia is commonly an age-related process that can cause frailty and its consequences. A decrease in muscle mass may be accompanied by a smaller number and size of the muscle cells as well as lower protein content. Human spaceflight, involving prolonged periods of immobilization and weightlessness, is known to result in muscle weakening and atrophy, with a loss of as much as 30% of mass in some muscles. Such consequences are also noted in some mammals following hibernation. Many diseases and conditions including cancer, AIDS, and heart failure can cause muscle loss known as cachexia.
Research
Myopathies have been modeled with cell culture systems of muscle from healthy or diseased tissue biopsies. Another source of skeletal muscle and progenitors is provided by the directed differentiation of pluripotent stem cells. Research on skeletal muscle properties uses many techniques. Electrical muscle stimulation is used to determine force and contraction speed at different frequencies related to fiber-type composition and mix within an individual muscle group. In vitro muscle testing is used for more complete characterization of muscle properties. The electrical activity associated with muscle contraction is measured via electromyography (EMG). Skeletal muscle has two physiological responses: relaxation and contraction.
The mechanisms by which these responses occur generate the electrical activity measured by EMG. Specifically, EMG can measure the action potential of a skeletal muscle, which arises from the depolarization triggered by nerve impulses sent to the muscle along the motor axons. EMG is used in research for determining if the skeletal muscle of interest is being activated, the amount of force generated, and as an indicator of muscle fatigue. The two types of EMG are intra-muscular EMG and the most common, surface EMG. The EMG signals are much greater when a skeletal muscle is contracting versus relaxing. However, for smaller and deeper skeletal muscles the EMG signals are attenuated, so surface EMG is viewed as a less reliable technique for measuring their activation. In research using EMG, a maximal voluntary contraction (MVC) is commonly performed on the skeletal muscle of interest, to provide reference data for the rest of the EMG recordings during the main experimental testing for that same skeletal muscle. Research into the development of artificial muscles includes the use of electroactive polymers.
Mononuclear cells of skeletal muscle
Nuclei present in skeletal muscle are about 50% myocyte nuclei and 50% mononuclear cell nuclei. Mononuclear cells found in skeletal muscle tissue samples from mice and humans can be identified by messenger RNA transcription of cell type markers. Cameron et al. identified nine cell types. They include endothelial cells that line capillaries (45% of cells), fibro-adipogenic progenitors (FAPs) (20%), pericytes (14%) and endothelial-like pericytes (4%). Another 9% of mononuclear cells are muscle stem cells, adjacent to muscle fiber cells. Types of lymphoid cells (such as B-cells and T-cells) (3%) and myeloid cells such as macrophages (2%) made up most of the remaining mononuclear cells of skeletal muscle. In addition, Cameron et al. also identified two types of myocyte cells, Type I and Type II.
Each of the different types of cells in skeletal muscle was found to express different sets of genes. The median number of genes expressed in each of the nine different cell types was 1,331 genes. When a biopsy is taken from a thigh muscle, however, it contains all of these cell types mixed together; in a biopsy of human thigh skeletal muscle, 13,026 to 13,108 genes have detected expression.
Endocrine functions of skeletal muscle
As pointed out in the Introduction to this article, under different physiological conditions, subsets of 654 different proteins as well as lipids, amino acids, metabolites and small RNAs occur in the secretome of skeletal muscles. As described in the Wikipedia article "List of human endocrine organs and actions", skeletal muscle is identified as an endocrine organ due to its secretion of cytokines and other peptides produced by skeletal muscle as signaling molecules. Iizuka et al. indicated that skeletal muscle is an endocrine organ because it "synthesizes and secretes multiple factors, and these muscle-derived factors exert beneficial effects on peripheral and remote organs." The altered secretomes after endurance training or resistance training, as well as the secretome of sedentary muscle, appear to have many effects on distant tissues.
Sedentary skeletal muscle mass affects executive mental function
A study in Canada tested the effect of muscle mass on mental functions during aging. An expectation of the study was that the endocrine components of the secretome specific to skeletal muscle could protect cognitive functions. The skeletal muscle mass of arms and legs of 8,279 Canadians over the age of 65 and in average health was measured at baseline and after three years. Of these individuals, 1,605 participants (19.4%) were considered to have a low skeletal muscle mass at baseline, with less than 7.30 kg/m2 for males, and less than 5.42 kg/m2 for females (levels defined as sarcopenia in Canada).
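The sarcopenia cutoffs used in the study amount to a simple sex-specific threshold rule on the skeletal muscle mass index. A minimal sketch, assuming a hypothetical function name and interface (neither is from the study itself):

```python
# Hypothetical illustration of the sex-specific cutoffs quoted above
# (< 7.30 kg/m2 for males, < 5.42 kg/m2 for females). The function name
# and calling convention are assumptions, not part of the study.

SARCOPENIA_CUTOFFS = {"male": 7.30, "female": 5.42}  # kg/m2

def has_low_muscle_mass(smi_kg_per_m2, sex):
    """Return True if the skeletal muscle mass index is below the cutoff."""
    return smi_kg_per_m2 < SARCOPENIA_CUTOFFS[sex]

print(has_low_muscle_mass(7.1, "male"))    # 7.1 < 7.30 -> True
print(has_low_muscle_mass(5.6, "female"))  # 5.6 >= 5.42 -> False
```

In the study, 19.4% of the 8,279 participants fell below these cutoffs at baseline.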
Executive mental function, memory and psychomotor speed were each measured at baseline and after three years. Executive mental function was measured with standard tests, including the ability to say the sequence 1-A, 2-B, 3-C…, to name a number of animals in one minute, and with the Stroop test. The study found that those individuals with lower skeletal muscle mass at the start of the study declined considerably more sharply in executive mental function than those with higher muscle mass. Memory and psychomotor speed, on the other hand, did not correlate with skeletal muscle mass. Thus, larger muscle mass, with a concomitantly larger secretome, appeared to have the endocrine function of protecting the executive mental function of individuals over the age of 65.
Walking, using skeletal muscles, affects mortality
Paluch et al. compared the average number of steps walked per day to the risk of mortality, both for adults over 60 years old and for adults under 60 years old. The study was a meta-analysis of 15 studies, which, combined, evaluated 47,471 adults over a period of 7 years. Individuals were divided into approximately equal quartiles. The lowest quartile averaged 3,553 steps/day, the second quartile 5,801 steps/day, the third quartile 7,842 steps/day and the fourth quartile 10,901 steps/day. The briskness of walking, adjusted for the volume of walking, did not affect mortality. However, the number of steps/day was clearly related to mortality. When the risk of mortality for those over 60 years old was set at 1.0 for the lowest quartile of steps/day, the relative risks of mortality for the second, third and fourth quartiles were 0.56, 0.45, and 0.35, respectively. For those under 60 years of age, the results were less pronounced: with the first-quartile risk of mortality set at 1.0, the second, third and fourth quartile relative risks of mortality were 0.57, 0.42 and 0.53, respectively.
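The quartile figures above can be arranged as a small lookup table. This sketch only restates the reported numbers; the variable and function names are illustrative assumptions:

```python
# Quartile mean step counts and relative mortality risks reported by
# Paluch et al., as quoted above; names and structure are assumptions.
STEP_QUARTILES = {
    # quartile: (mean steps/day, relative risk age >= 60, relative risk age < 60)
    1: (3553, 1.00, 1.00),   # reference quartile
    2: (5801, 0.56, 0.57),
    3: (7842, 0.45, 0.42),
    4: (10901, 0.35, 0.53),
}

def relative_risk(quartile, over_60):
    """Look up the relative mortality risk for a step-count quartile."""
    _, rr_older, rr_younger = STEP_QUARTILES[quartile]
    return rr_older if over_60 else rr_younger

print(relative_risk(4, over_60=True))   # 0.35
print(relative_risk(4, over_60=False))  # 0.53
```

The table makes the study's asymmetry visible: for the over-60 group the risk falls monotonically with each quartile, while for the under-60 group the fourth quartile's risk (0.53) is higher than the third's (0.42).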
Thus, the use of skeletal muscles in walking has a large effect on mortality, especially among older individuals.
Skeletal muscle secretome alters with exercise
Williams et al. obtained biopsies of a thigh skeletal muscle (vastus lateralis muscle) of eight 23-year-old, originally sedentary, Caucasian males. Biopsies were taken both before and after a six-week endurance exercise training program. The exercise consisted of riding a stationary bicycle for one hour, five days a week for six weeks. Of the 13,108 genes with detected expression in the muscle biopsies, 641 genes were upregulated after endurance training and 176 genes were downregulated. Of the 817 total altered genes, 531 were identified as being in the secretome by either or both of Uniprot or Exocarta, or else by studies investigating the secretome of muscle cells. Because many of the exercise-regulated genes are identified as secreted, this suggests that much of the effect of exercise is endocrine rather than metabolic. The main pathways found to be affected by secreted exercise-regulated proteins were related to cardiac, cognitive, kidney and platelet functions.
Exercise-trained effects are mediated by epigenetic mechanisms
Between 2012 and 2019, at least 25 reports indicated a major role of epigenetic mechanisms in skeletal muscle responses to exercise. Epigenetic alterations often occur by adding methyl groups to cytosines in the DNA or removing methyl groups from the cytosines of DNA, especially at CpG sites. Methylations of cytosines can cause the DNA to be compacted into heterochromatin, thus inhibiting access of other molecules to the DNA. Epigenetic alterations also often occur through acetylations or deacetylations of the histone tails within chromatin.
DNA in the nucleus generally consists of segments of 146 base pairs of DNA wrapped around eight tightly connected histones (each histone also has a loose tail) in a structure called a nucleosome; each nucleosome's DNA segment is connected to the adjacent segment by linker DNA. When histone tails are acetylated, they usually cause loosening of the DNA around the nucleosome, leading to increased accessibility of the DNA.
Exercise-induced regulation of genes in muscles
Gene expression in muscle is largely regulated, as in tissues generally, by regulatory DNA sequences, especially enhancers. Enhancers are non-coding sequences in the genome that activate the expression of distant target genes by looping around and interacting with the promoters of their target genes (see Figure "Regulation of transcription in mammals"). As reported by Williams et al., the average distance in the loop between the connected enhancers and promoters of genes is 239,000 nucleotide bases.
Exercise-induced alteration to gene expression by DNA methylation or demethylation
Endurance muscle training alters muscle gene expression by epigenetic DNA methylation or de-methylation of CpG sites within enhancers. In a study by Lindholm et al., twenty-three individuals who were about 27 years old and sedentary volunteered to have endurance training on only one leg for 3 months. The other leg was used as an untrained control leg. The training consisted of one-legged knee extension training for 3 months (45 min, 4 sessions per week). Skeletal muscle biopsies from the vastus lateralis (a thigh muscle) were taken from each leg both before training began and 24 hours after the last training session. The endurance-trained leg, compared to the untrained leg, had significant DNA methylation changes at 4,919 sites across the genome. The sites of altered DNA methylation were predominantly in enhancers.
Transcriptional analysis, using RNA sequencing, identified 4,076 differentially expressed genes. The transcriptionally upregulated genes were associated with enhancers that had a significant decrease in DNA methylation, while transcriptionally downregulated genes were associated with enhancers that had increased DNA methylation. Increased methylation was mainly associated with genes involved in structural remodeling of the muscle and glucose metabolism. Enhancers with decreased methylation were associated with genes functioning in inflammatory or immunological processes and in transcriptional regulation.
Exercise-induced long-term alteration of gene expression by histone acetylation or deacetylation
As indicated above, after exercise, epigenetic alterations to enhancers alter long-term expression of hundreds of muscle genes. This includes genes producing proteins secreted into the systemic circulation, many of which may act as endocrine messengers. Six sedentary Caucasian males, about 23 years old, provided vastus lateralis (a thigh muscle) biopsies before entering an exercise program (six weeks of 60-minute sessions of riding a stationary cycle, five days per week). Four days after this exercise program was completed, the expression of many genes was persistently epigenetically altered. These alterations involved acetylations and deacetylations of the histone tails located in the enhancers controlling the genes with altered expression. Up-regulated genes were associated with epigenetic acetylations added at histone 3 lysine 27 (H3K27ac) of nucleosomes located at their enhancers. Down-regulated genes were associated with the removal of epigenetic acetylations at H3K27 in nucleosomes located at their enhancers (see Figure "A nucleosome with histone tails set for transcriptional activation"). Biopsies of the vastus lateralis muscle showed expression of 13,108 genes at baseline before the exercise training program.
Four days after the exercise program was completed, biopsies of the same muscles showed altered gene expression, with 641 genes up-regulated and 176 genes down-regulated. Williams et al. identified 599 enhancer-gene interactions, covering 491 enhancers and 268 genes (multiple enhancers were found connected to some genes), where both the enhancer and the connected target gene were coordinately either upregulated or downregulated after exercise training.
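As a check on the counts reported above, the arithmetic can be laid out explicitly. This is only a sketch; the numbers come from the quoted results and everything else is illustrative:

```python
# Gene counts from the Williams et al. results quoted above.
upregulated, downregulated = 641, 176
total_altered = upregulated + downregulated
print(total_altered)  # 817, matching the "817 total altered genes" figure

secretome_annotated = 531
print(round(secretome_annotated / total_altered, 2))  # 0.65 -> about 65% secreted

# 599 enhancer-gene interactions across only 268 target genes implies
# that, on average, each target gene connects to more than two enhancers.
interactions, enhancers, genes = 599, 491, 268
print(round(interactions / genes, 2))  # 2.24 interactions per target gene
```

The roughly two-thirds overlap with secretome annotations is what underlies the article's inference that much of exercise's effect is endocrine.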
https://en.wikipedia.org/wiki/Common%20raven
Common raven
The common raven or northern raven (Corvus corax) is a large all-black passerine bird. It is the most widely distributed of all corvids, found across the Northern Hemisphere. There are 11 accepted subspecies with little variation in appearance, although recent research has demonstrated significant genetic differences among populations from various regions. It is one of the two largest corvids, alongside the thick-billed raven, and is the heaviest passerine bird; at maturity, the common raven averages in length and in weight, though up to in the heaviest individuals. Although their typical lifespan is considerably shorter, common ravens can live more than 23 years in the wild. Young birds may travel in flocks but later mate for life, with each mated pair defending a territory. Common ravens have coexisted with humans for thousands of years and in some areas have been so numerous that people have regarded them as pests. Part of their success as a species is due to their omnivorous diet; they are extremely versatile and opportunistic in finding sources of nutrition, feeding on carrion, insects, cereal grains, berries, fruit, small animals, nesting birds, and food waste. Some notable feats of problem-solving provide evidence that the common raven is unusually intelligent. Over the centuries, the raven has been the subject of mythology, folklore, art, and literature. In many cultures, including the indigenous cultures of Scandinavia, ancient Ireland and Wales, Bhutan, the northwest coast of North America, and Siberia and northeast Asia, the common raven has been revered as a spiritual figure or godlike creature.
Taxonomy
The common raven was one of the many species originally described, with its type locality given as Europe, by Carl Linnaeus in his landmark 1758 10th edition of Systema Naturae, and it still bears its original name of Corvus corax. It is the type species of the genus Corvus, derived from the Latin word for 'raven'.
The specific epithet corax is the Latinized form of the Greek word κόραξ, meaning 'raven' or 'crow'. The modern English word raven has cognates in many other Germanic languages, including Old Norse (and subsequently modern Icelandic) hrafn and Old High German (h)raban, all of which descend from Proto-Germanic *khrabanaz. An old Scottish word corby or corbie, akin to the French corbeau, has been used for both this bird and the carrion crow. Collective nouns for a group of ravens (or at least the common raven) include "unkindness" and "conspiracy".
Classification
The closest relatives of the common raven are the brown-necked raven (C. ruficollis), the pied crow (C. albus) of Africa, and the Chihuahuan raven (C. cryptoleucus) of the North American Southwest. Most authorities, including the IOC World Bird List and the Handbook of the Birds of the World, currently accept 11 subspecies, though some only accept eight; of the six subspecies accepted by IOC and HBW in the Western Palearctic region, only four are accepted by Shirihai.
Evolutionary history
The common raven evolved in the Old World and crossed the Bering land bridge into North America. Recent genetic studies, which examined the DNA of common ravens from across the world, have determined that the birds fall into at least two clades: a California clade (subspecies C. c. clarionensis), found only in the southwestern United States, and a Holarctic clade, found across the rest of the Northern Hemisphere. Birds from both clades look alike, but the groups are genetically distinct and began to diverge about two million years ago. The findings indicate that based on mitochondrial DNA, common ravens from the rest of North America are more closely related to those in Europe and Asia than to those in the California clade, and that common ravens in the California clade are more closely related to the Chihuahuan raven (C. cryptoleucus) than to those in the Holarctic clade. Ravens in the Holarctic clade are more closely related to the pied crow (C.
albus) than they are to the California clade. Thus, the common raven species as traditionally delimited is considered to be paraphyletic. One explanation for these genetic findings is that common ravens settled in California at least two million years ago and became separated from their relatives in Europe and Asia during a glacial period. One million years ago, a group from the California clade evolved into a new species, the Chihuahuan raven. Other members of the Holarctic clade arrived later in a separate migration from Asia, perhaps at the same time as humans and wolves about 15,000 years ago. A 2011 study suggested that there are no restrictions on gene flow between the Californian and Holarctic common raven groups, and that the lineages can remerge, effectively reversing a potential speciation. A recent study of raven mitochondrial DNA showed that the isolated population from the Canary Islands is distinct from other populations. The study did not include any individuals from the North African population, and its position is therefore unclear, though its morphology is very close to the population of the Canaries (to the extent that the two are often considered part of a single subspecies).
Description
A mature common raven ranges between and has a wingspan of . Recorded weights range from , thus making the common raven one of the heaviest passerines. Birds from colder regions such as the Himalayas and Greenland are generally larger with slightly larger bills, while those from warmer regions are smaller with proportionally smaller bills. Representative of the size variation in the species, ravens from California weighed an average of , those from Alaska weighed an average of and those from Nova Scotia weighed an average of . The bill is large and slightly curved, with a culmen length of , one of the largest bills amongst passerines (only the thick-billed raven and white-necked raven have larger bills).
It has a longish, strongly graduated tail, at , and mostly iridescent black plumage, and a dark brown iris. The throat feathers are elongated and pointed and the bases of the neck feathers are pale brownish-grey. The legs and feet are stout and strong, with a tarsus length of . The juvenile plumage is similar but duller, with a blue-grey iris and a pinkish gape at first. Apart from its greater size, the common raven differs from related crows by having a larger and heavier black beak, shaggy feathers around the throat, longer bristles above the beak, and a longer, wedge-shaped tail. Flying ravens are distinguished from crows by their tail shape, larger wing area, and more stable soaring style, which generally involves less wing flapping. Despite their bulk, ravens are easily as agile in flight as their smaller cousins. In flight the feathers produce a creaking sound that has been likened to the rustle of silk. The voice of ravens is also quite distinct, its usual call being a deep croak of a much more sonorous quality than a crow's call, though the calls of other ravens like the fan-tailed raven and brown-necked raven can be confused where they occur together with common ravens in parts of southwest Asia and northern Africa; of these two, the fan-tailed raven is more similar in calls but has a very different shape with its broad wings and very short tail, while the brown-necked raven can be very hard to distinguish on plumage, but has somewhat more crow-like calls. In North America, the Chihuahuan raven (C. cryptoleucus) is fairly similar to the relatively small common ravens of the American southwest and is best distinguished by the even smaller size of its bill, beard and body and its relatively longer tail. The all-black carrion crow (C. corone) and rook (C. frugilegus) in Europe may suggest a raven due to their largish bill but are still distinctly smaller and have the wing and tail shapes typical of crows.
In the Faroe Islands, a now-extinct pied colour morph of this species existed, known as the pied raven; the ordinary black-coloured common ravens remain widespread in the archipelago. White ravens are occasionally found in the wild. Some in British Columbia lacked the pink eyes of an albino, and were instead leucistic, a condition where an animal lacks any of several different types of pigment, not simply melanin. Common ravens have a wide range of vocalizations which are of interest to ornithologists. Gwinner carried out important studies in the early 1960s, recording and photographing his findings in great detail. Fifteen to 30 categories of calls have been recorded for this species, most of which are used for social interaction. Calls recorded include alarm calls, chase calls, and flight calls. The species has a distinctive, deep, resonant prruk-prruk-prruk call, which to experienced listeners is unlike that of any other corvid. Its very wide and complex vocabulary includes a high, knocking toc-toc-toc, a dry, grating kraa, a low guttural rattle and some calls of an almost musical nature. Like other corvids, the common raven can mimic sounds from their environment, including human speech. Non-vocal sounds produced by the common raven include wing whistles and bill snapping. Clapping or clicking has been observed more often in females than in males. If a member of a pair is lost, its mate reproduces the calls of its lost partner to encourage its return.
Distribution and habitat
The common raven can thrive in varied climates; it has the largest range of any member of the genus, and one of the largest of any passerine. They range throughout the Holarctic from Arctic and temperate habitats in North America and Eurasia to the deserts of North Africa, and to islands in the Pacific Ocean. In the British Isles, they are more common in Scotland, Wales, western England and Ireland, having been eradicated from other areas by gamekeeping interests.
In Tibet, they have been recorded at altitudes up to 5,000 m (16,400 ft), and as high as 6,350 m (20,600 ft) on Mount Everest. The population sometimes known as the 'Punjab raven', part of the subspecies Corvus corax laurencei (sometimes misnamed C. c. subcorax), occurs in the Sindh district of Pakistan and adjoining regions of northwestern India. They are generally resident within their range for the whole year. In his 1950 work, Grønlands Fugle [Birds of Greenland], noted ornithologist Finn Salomonsen indicated that common ravens did not overwinter in the Arctic. However, in Arctic Canada and Alaska, they are found year-round. Young birds may disperse locally. In the United Kingdom, the range is currently increasing after improved legal protection, but illegal persecution by gamekeepers remains a problem in many areas. It favours mountainous or coastal terrain, but can also be found in parks with tall trees suitable for use as habitation. Its population is at its most dense in the north and west of the country, though the species is expanding its population southwards. Most common ravens prefer wooded areas with large expanses of open land nearby, or coastal regions for their nesting sites and feeding grounds. In some areas of dense human population, such as California in the United States, they take advantage of a plentiful food supply and have seen a surge in their numbers. On coasts, individuals of this species are often evenly distributed and prefer to build their nest sites along sea cliffs. Common ravens are often located in coastal regions because these areas provide easy access to water and a variety of food sources. Also, coastal regions have stable weather patterns without extreme cold or hot temperatures. In general, common ravens live in a wide array of environments but prefer heavily contoured landscapes. When the environment changes drastically, these birds respond with a stress response.
The hormone known as corticosterone is activated by the hypothalamic–pituitary–adrenal axis. Corticosterone is released when the bird is exposed to stress, such as migrating great distances.
Behaviour
Common ravens usually travel in mated pairs, although young birds may form flocks. Relationships between common ravens are often quarrelsome, yet they demonstrate considerable devotion to their families.
Predation
Owing to its size, gregariousness and its defensive abilities, the common raven has few natural predators. Predators of its eggs and chicks include martens, large owls, and sometimes eagles. Ravens are quite vigorous at defending their young and are usually successful at driving off perceived threats. They attack potential predators by flying at them and lunging with their large bills. Humans are occasionally attacked if they get close to a raven nest, though serious injuries are unlikely. There are a few records of large birds of prey taking ravens; more rarely, large mammalian predators such as lynxes, coyotes and cougars have also attacked ravens. This principally occurs at a nest site and when other prey for the carnivores are scarce. In North America, predators of ravens have reportedly included great horned owls, American goshawks, bald eagles, golden eagles and red-tailed hawks. It is possible that the hawk species only attack young ravens; in one instance a peregrine falcon swooped at a newly fledged raven but was chased off by the parent ravens. Ravens are wary around novel carrion sites, and in North America, have been recorded waiting for the presence of American crows and blue jays before approaching to eat. In Eurasia, their reported predators include, in addition to golden eagles, Eurasian eagle-owls, white-tailed eagles, Steller's sea-eagles, eastern imperial eagles and gyrfalcons. Because they are potentially hazardous prey for raptorial birds, raptors must usually take them by surprise and most attacks are on fledgling ravens.
Breeding
Juveniles begin to court at a very early age, but may not bond for another two or three years. Aerial acrobatics, demonstrations of intelligence, and ability to provide food are key behaviours of courting. Once paired, they tend to nest together for life, usually in the same location. Instances of non-monogamy have been observed in common ravens, by males visiting a female's nest when her mate is away. Breeding pairs must have a territory of their own before they begin nest-building and reproduction, and thus they aggressively defend a territory and its food resources. Nesting territories vary in size according to the density of food resources in the area. The nest is a deep bowl made of large sticks (up to 150 cm long and 2.5 cm thick) and twigs, bound with an inner layer of roots, mud, and bark and lined with a softer material, such as deer fur. The nest is usually placed in a large tree or on a cliff ledge, or less frequently in old buildings or on utility poles. Females usually lay between four and six (rarely two to seven) pale bluish-green, brown-blotched eggs. Incubation is about 18 to 21 days, by the female only. The male may stand or crouch over the young, sheltering but not actually brooding them. The young fledge at 35 to 49 days, and are fed by both parents. They stay with their parents for another six months after fledging. In most of their range, egg-laying begins in late January or February, but it can be as late as April in colder climates such as Greenland and Tibet. In Pakistan, egg-laying takes place in December, but in north Africa (subspecies C. c. tingitanus) it occurs later than in Europe, in late March or early April. Eggs and hatchlings are preyed on, rarely, by large hawks and eagles, large owls, martens and canids. The adults, which are very rarely preyed upon, are often successful in defending their young from these predators, due to their numbers, large size and cunning.
They have been observed dropping stones on potential predators that venture close to their nests. Common ravens can be very long-lived, especially in captive or protected conditions; individuals at the Tower of London have lived for more than 40 years. Their lifespans in the wild are shorter, typically 10 to 15 years. The longest known lifespan of a ringed wild common raven was 23 years, 3 months, which among passerines is surpassed only by a few Australian species such as the satin bowerbird.
Feeding
Common ravens are omnivorous and highly opportunistic: their diet may vary widely with location, season and serendipity. For example, those foraging on tundra on the Arctic North Slope of Alaska obtained about half their energy needs from predation, mainly of microtine rodents, and half by scavenging, mainly of caribou and ptarmigan carcasses. In some places they are mainly scavengers, feeding on carrion as well as the associated maggots and carrion beetles. With large-bodied carrion, which they are not equipped to tear through as well as birds such as the much larger and hook-billed vultures, they must wait for the prey to be torn open by another predator or flayed by other means. They are also known to eat the afterbirth of ewes and other large mammals. Plant food includes cereal grains, acorns, buds, berries and fruit. They prey on small invertebrates, amphibians, reptiles, small mammals and birds. Ravens may also consume the undigested portions of animal faeces, and human food waste. They store surplus food items, especially those containing fat, and will learn to hide such food out of the sight of other common ravens. Ravens also raid the food caches of other species, such as the Arctic fox. They often associate with another canid, the wolf, as a kleptoparasite, following to scavenge wolf-kills in winter, but also co-operatively, having been observed to lead hunting wolf packs to potential prey that only the ravens can see from the air.
Ravens are regular predators at bird nests, brazenly picking off eggs, nestlings and sometimes adult birds when they spot an opportunity. They are considered perhaps the primary natural threat to the nesting success of the critically endangered California condor, since they readily take condor eggs and are very common in the areas where the species is being re-introduced. On the other hand, when they defend their own adjacent nests, they may incidentally benefit condors, since they chase away golden eagles that may otherwise prey upon larger nestling and fledgling condors. Although condors recognise ravens as threats and will chase them away, their usual nest sites are poorly concealed from ravens; the reason is unknown, but it may be due to the condor's lower aerial manoeuvrability, or a holdover from times when condor populations were denser, nest sites more limiting, and ravens less abundant. Common ravens nesting near sources of human garbage included a higher percentage of food waste in their diet, birds nesting near roads consumed more road-killed vertebrates, and those nesting far from these sources of food ate more arthropods and plant material. Fledging success was higher for those using human garbage as a food source. In contrast, a 1984–1986 study of common raven diet in an agricultural region of southwestern Idaho found that cereal grains were the principal constituent of pellets, though small mammals, grasshoppers, cattle carrion and birds were also eaten. One behaviour is recruitment, where juvenile ravens call other ravens to a food bonanza, usually a carcass, with a series of loud yells. In Ravens in Winter, Bernd Heinrich posited that this behaviour evolved to allow the juveniles to outnumber the resident adults, thus allowing them to feed on the carcass without being chased away.
A more mundane explanation is that individuals co-operate in sharing information about carcasses of large mammals because they are too big for just a few birds to exploit. Experiments with baits, however, show that such recruitment behaviour is independent of the size of the bait. Furthermore, there has been research suggesting that the common raven is involved in seed dispersal. In the wild, the common raven chooses the best habitat and disperses seeds in locations best suited for its survival. Intelligence The brain of the common raven is among the largest of any bird species. Specifically, their hyperpallium is large for a bird. They display ability in problem-solving, as well as other cognitive processes such as imitation and insight. Linguist Derek Bickerton, building on the work of biologist Bernd Heinrich, has argued that ravens are one of only four known animals (the others being bees, ants, and humans) who have demonstrated displacement, the capacity to communicate about objects or events that are distant in space or time. Subadult ravens roost together at night, but usually forage alone during the day. However, when one discovers a large carcass guarded by a pair of adult ravens, the unmated raven will return to the roost and communicate the find. The following day, a flock of unmated ravens will fly to the carcass and chase off the adults. Bickerton argues that the advent of linguistic displacement was perhaps the most important event in the evolution of human language, and that ravens are the only other vertebrate to share this with humans. One experiment designed to evaluate insight and problem-solving ability involved a piece of meat attached to a string hanging from a perch. To reach the food, the bird needed to stand on the perch, pull the string up a little at a time, and step on the loops to gradually shorten the string. 
Four of five common ravens eventually succeeded, and "the transition from no success (ignoring the food or merely yanking at the string) to constant reliable access (pulling up the meat) occurred with no demonstrable trial-and-error learning." This supports the hypothesis that common ravens are 'inventors', implying that they can solve problems. Many of the feats of common ravens were formerly argued to be stereotyped innate behaviour, but it now has been established that their aptitudes for solving problems individually and learning from each other reflect a flexible capacity for intelligent insight unusual among non-human animals. Another experiment showed that some common ravens could intentionally deceive their conspecifics. A study published in 2011 found that ravens can recognise when they are given an unfair trade during reciprocal interactions with conspecifics or humans, retaining memory of the interaction for a prolonged period of time. Birds that were given a fair trade by experimenters were found to prefer interacting with these experimenters compared to those that did not. Furthermore, ravens in the wild have also been observed to stop cooperating with other ravens if they observe them cheating during group tasks. Common ravens have been observed calling wolves to the site of dead animals. The wolves open the carcass, leaving the scraps more accessible to the birds. They watch where other common ravens bury their food and remember the locations of each other's food caches, so they can steal from them. This type of theft occurs so regularly that common ravens will fly extra distances from a food source to find better hiding places for food. They have also been observed pretending to make a cache without actually depositing the food, presumably to confuse onlookers. Common ravens are known to steal and cache shiny objects such as pebbles, pieces of metal, and golf balls. One theory is that they hoard shiny objects to impress other ravens. 
Other research indicates that juveniles are deeply curious about all new things, and that common ravens retain an attraction to bright, round objects based on their similarity to bird eggs. Mature birds lose their intense interest in the unusual, and become highly neophobic. The first large-scale assessment of ravens' cognitive abilities suggests that, by four months of age, ravens do about as well as adult chimpanzees and orangutans on tests of causal reasoning, social learning, theory of mind, etc. Play There has been increasing recognition of the extent to which birds engage in play. Juvenile common ravens are among the most playful of bird species. They have been observed to slide down snowbanks, apparently purely for fun. They even engage in games with other species, such as playing catch-me-if-you-can with wolves, otters and dogs. Common ravens are known for spectacular aerobatic displays, such as flying in loops or interlocking talons with each other in flight. They are also one of only a few wild animals who make their own toys. They have been observed breaking off twigs to play with socially. Relationship with humans Conservation and management Compared to many smaller Corvus species (such as the carrion crow and American crow), ravens prefer undisturbed mountain or forest habitat or rural areas over urban areas. In other areas, their numbers have increased dramatically and they have become agricultural pests. Common ravens can cause damage to crops, such as nuts and grain, or can harm livestock, particularly by killing goat kids, lambs and calves. Ravens generally attack the faces of young livestock, but the more common raven behaviour of scavenging may be misidentified as predation by ranchers. In the western Mojave Desert, human settlement and land development have led to an estimated 16-fold increase in the common raven population over 25 years. 
Towns, landfills, sewage treatment plants and artificial ponds create sources of food and water for scavenging birds. Ravens also find nesting sites in utility poles and ornamental trees, and are attracted to roadkill on highways. The explosion in the common raven population in the Mojave has raised concerns for the desert tortoise, a threatened species. Common ravens prey upon juvenile tortoises, which have soft shells and move slowly. Plans to control the population have included shooting and trapping birds, as well as contacting landfill operators to ask that they reduce the amount of exposed garbage. A hunting bounty as a method of control was historically used in Finland from the mid-18th century until 1923. Culling has taken place to a limited extent in Alaska, where the population increase in common ravens is threatening the vulnerable Steller's eider (Polysticta stelleri). Ravens, like other corvids, are hosts of West Nile virus (WNV) and are susceptible to it; the virus is transmitted to humans by mosquitoes, with birds serving as amplifying hosts. However, a 2010 study showed that common ravens in California did not have a high WNV positivity rate. Cultural depictions Across its range in the Northern Hemisphere, and throughout human history, the common raven has been a powerful symbol and a popular subject of mythology and folklore. In some Western traditions, ravens have long been considered to be birds of ill omen, death and evil in general, in part because of the negative symbolism of their all-black plumage and the eating of carrion. In Sweden, ravens are known as the ghosts of murdered people, and in Germany as the souls of the damned. In Danish folklore, valravne that ate a king's heart gained human knowledge, could perform great malicious acts, could lead people astray, had superhuman powers, and were "terrible animals". 
It continues to be used as a symbol in areas where it once had mythological status: as the national bird of Bhutan (kings of Bhutan wear the Raven Crown), official bird of the Yukon territory, and on the coat of arms of the Isle of Man (once a Viking colony). In Persia and Arabia the raven was held as a bird of bad omen but a 14th-century Arabic work reports use of the raven in falconry. The modern unisex given name Raven is derived from the English word "raven". As a masculine name, Raven parallels the Old Norse Hrafn, and Old English *Hræfn, which were both bynames and personal names. Mythology In Tlingit and Haida cultures, Raven was both a trickster and creator god. Related beliefs are widespread among the peoples of Siberia and northeastern Asia. The Kamchatka Peninsula, for example, was supposed to have been created by the raven god Kutkh. There are several references to common ravens in the Old Testament of the Bible and it is an aspect of Mahakala in Bhutanese mythology. In Norse mythology, Huginn (from the Old Norse for "thought") and Muninn (from the Old Norse for "memory" or "mind") are a pair of ravens that fly all over the world of humans, Midgard, and bring the god Odin information. Additionally among the Norse, raven banner standards were carried by such figures as the Jarls of Orkney, King Cnut the Great of England, Norway and Denmark, and Harald Hardrada. In the British Isles, ravens also were symbolic to the Celts. In Irish mythology, the goddess Morrígan alighted on the hero Cú Chulainn's shoulder in the form of a raven after his death. In Welsh mythology they were associated with the Welsh god Brân the Blessed, whose name translates to "raven" and "crow". According to the Mabinogion, Brân's head was buried in the White Hill of London as a talisman against invasion. 
A legend developed that England would not fall to a foreign invader as long as there were ravens at the Tower of London; although this is often thought to be an ancient belief, the official Tower of London historian, Geoff Parnell, believes that this is actually a romantic Victorian invention. In the Jewish, Christian and Islamic traditions, the raven was the first animal to be released from Noah's Ark. "So it came to pass, at the end of forty days, that Noah opened the window of the ark which he had made. Then he sent out a raven, which kept going to and fro until the waters had dried up from the earth. He also sent out from himself a dove, to see if the waters had receded from the face of the ground." The raven is mentioned 12 times in the Bible. In the New Testament Jesus tells a parable using the raven to show how people should rely on God for their needs and not riches (Luke 12:24). The raven is also mentioned in the Quran at the story of Cain and Abel. Adam's firstborn son Cain kills his brother Abel, but he does not know what to do with the corpse: "Then Allah sent a raven scratching up the ground, to show him how to hide his brother's naked corpse. He said: Woe unto me! Am I not able to be as this raven and so hide my brother's naked corpse? And he became repentant."
Biology and health sciences
Corvoidea
null
381132
https://en.wikipedia.org/wiki/Mara%20%28mammal%29
Mara (mammal)
Maras, subfamily Dolichotinae, are a group of rodents in the family Caviidae. These large relatives of guinea pigs are common in the Patagonian steppes of Argentina, but also live in Paraguay and elsewhere in South America. There are two extant species, the Patagonian mara of the genus Dolichotis and the Chacoan mara of the genus Pediolagus. Traditionally the Chacoan mara was also thought to belong to Dolichotis; however, a 2020 study by the American Society of Mammalogists found differences between the two mara species significant enough to warrant resurrecting the genus Pediolagus for it. Several extinct genera are also known. Description Maras have stocky bodies, three sharp-clawed digits on the hind feet, and four digits on the fore feet. Maras have been described as resembling long-legged rabbits; while standing, they can also resemble a small ungulate. Patagonian maras can run at speeds up to . The Patagonian species can weigh over in adulthood. The average weight of adult male Patagonian maras is and in adult females is . Meanwhile, the Chacoan mara, though still large for a rodent, is much smaller, weighing around . Most maras have brown heads and bodies, dark (almost black) rumps with a white fringe around the base, and white bellies. Maras may amble, hop in a rabbit-like fashion, gallop, or bounce on all fours. They have been known to leap up to . Maras mate for life, and may have from one to three offspring each year. Mara young are very well-developed, and can start grazing within 24 hours. They use a crèche system, where one pair of adults keeps watch over all the young in the crèche. If they spot danger, the young rush below ground into a burrow, while the adults run off to escape. Genera Dolichotis Pediolagus †Eodolichotis †Pliodolichotis †Propediolagus †Rhodanodolichotis Interaction with humans Patagonian maras are often kept in zoos or as pets, and are also known as "Patagonian cavies" or "Patagonian hares". 
They can be quite social with humans if raised with human interaction from a young age, though they avoid people in the wild. Wild maras may even shift from diurnal to nocturnal habits simply to avoid contact with humans.
Biology and health sciences
Rodents
Animals
381186
https://en.wikipedia.org/wiki/Reflex
Reflex
In biology, a reflex, or reflex action, is an involuntary, unplanned sequence or action and nearly instantaneous response to a stimulus. Reflexes are found with varying levels of complexity in organisms with a nervous system. A reflex occurs via neural pathways in the nervous system called reflex arcs. A stimulus initiates a neural signal, which is carried to a synapse. The signal is then transferred across the synapse to a motor neuron, which evokes a target response. These neural signals do not always travel to the brain, so many reflexes are an automatic response to a stimulus that does not receive or need conscious thought. Many reflexes are fine-tuned to increase organism survival and self-defense. This is observed in reflexes such as the startle reflex, which provides an automatic response to an unexpected stimulus, and the feline righting reflex, which reorients a cat's body when falling to ensure safe landing. The simplest type of reflex, a short-latency reflex, has a single synapse, or junction, in the signaling pathway. Long-latency reflexes produce nerve signals that are transduced across multiple synapses before generating the reflex response. Types of human reflexes Autonomic vs skeletal reflexes Reflex is an anatomical concept and it refers to a loop consisting, in its simplest form, of a sensory nerve, the input, and a motor nerve, the output. Autonomic does not mean automatic. The term autonomic is an anatomical term and it refers to a type of nervous system in animals and humans that is very primitive. Skeletal or somatic are, similarly, anatomical terms that refer to a type of nervous system that is more recent in terms of evolutionary development. There are autonomic reflexes and skeletal, somatic reflexes. Myotatic reflexes The myotatic or muscle stretch reflexes (sometimes known as deep tendon reflexes) provide information on the integrity of the central nervous system and peripheral nervous system. 
This information can be detected using electromyography (EMG). Generally, decreased reflexes indicate a peripheral problem, and lively or exaggerated reflexes a central one. A stretch reflex is the contraction of a muscle in response to its lengthwise stretch. Biceps reflex (C5, C6) Brachioradialis reflex (C5, C6, C7) Extensor digitorum reflex (C6, C7) Triceps reflex (C6, C7, C8) Patellar reflex or knee-jerk reflex (L2, L3, L4) Ankle jerk reflex (Achilles reflex) (S1, S2) While the reflexes above are stimulated mechanically, the term H-reflex refers to the analogous reflex stimulated electrically, and the tonic vibration reflex to those stimulated by vibration. Tendon reflex A tendon reflex is the contraction of a muscle in response to striking its tendon. The Golgi tendon reflex is the inverse of a stretch reflex. Reflexes involving cranial nerves Reflexes usually only observed in human infants Newborn babies have a number of other reflexes which are not seen in adults, referred to as primitive reflexes. These automatic reactions to stimuli enable infants to respond to the environment before any learning has taken place. They include: Asymmetrical tonic neck reflex Palmomental reflex Moro reflex, also known as the startle reflex Palmar grasp reflex Rooting reflex Sucking reflex Symmetrical tonic neck reflex Tonic labyrinthine reflex Other kinds of reflexes Other reflexes found in the central nervous system include: Abdominal reflexes (T6-L1) Gastrocolic reflex Anocutaneous reflex (S2-S4) Baroreflex Cough reflex Cremasteric reflex (L1-L2) Diving reflex Lazarus sign Muscular defense Photic sneeze reflex Scratch reflex Sneeze Startle response Withdrawal reflex Crossed extensor reflex Many of these reflexes are quite complex, requiring a number of synapses in a number of different nuclei in the central nervous system (e.g., the escape reflex). Others of these involve just a couple of synapses to function (e.g., the withdrawal reflex). 
Processes such as breathing, digestion, and the maintenance of the heartbeat can also be regarded as reflex actions, according to some definitions of the term. Grading In medicine, reflexes are often used to assess the health of the nervous system. Doctors will typically grade the activity of a reflex on a scale from 0 to 4. While 2+ is considered normal, some healthy individuals are hypo-reflexive and register all reflexes at 1+, while others are hyper-reflexive and register all reflexes at 3+. In another convention, grading runs from −4 (absent) to +4 (clonus), where 0 is "normal". Reflex modulation Some might imagine that reflexes are immutable. In reality, however, most reflexes are flexible and can be substantially modified to match the requirements of the behavior in both vertebrates and invertebrates. A good example of reflex modulation is the stretch reflex. When a muscle is stretched at rest, the stretch reflex leads to contraction of the muscle, thereby opposing stretch (resistance reflex). This helps to stabilize posture. During voluntary movements, however, the intensity (gain) of the reflex is reduced or its sign is even reversed. This prevents resistance reflexes from impeding movements. The underlying sites and mechanisms of reflex modulation are not fully understood. There is evidence that the output of sensory neurons is directly modulated during behavior—for example, through presynaptic inhibition. The effect of sensory input upon motor neurons is also influenced by interneurons in the spinal cord or ventral nerve cord and by descending signals from the brain. Other reflexes Breathing can also be considered both involuntary and voluntary, since breath can be held through internal intercostal muscles. History The concept of reflexes dates back to the 17th century with René Descartes. Descartes introduced the idea in his work "Treatise on Man", published posthumously in 1664. 
He described how the body could perform actions automatically in response to external stimuli without conscious thought. Descartes used the analogy of a mechanical statue to explain how sensory input could trigger motor responses in a deterministic and automatic manner. The term "reflex" was introduced in the 19th century by the English physiologist Marshall Hall, who is credited with formulating the concept of reflex action and explaining it scientifically. He introduced the term to describe involuntary movements triggered by external stimuli, which are mediated by the spinal cord and the nervous system, distinct from voluntary movements controlled by the brain. Hall's significant work on reflex function was detailed in his 1833 paper, "On the Reflex Function of the Medulla Oblongata and Medulla Spinalis," published in the Philosophical Transactions of the Royal Society, where he provided a clear account of how reflex actions were mediated by the spinal cord, independent of the brain's conscious control, distinguishing them from other neural activities.
Biology and health sciences
Medical procedures
null
381325
https://en.wikipedia.org/wiki/Sedan%20%28automobile%29
Sedan (automobile)
A sedan or saloon (British English) is a passenger car in a three-box configuration with separate compartments for an engine, passengers, and cargo. The first recorded use of sedan in reference to an automobile body occurred in 1912. The name derives from the 17th-century litter known as a sedan chair, a one-person enclosed box with windows and carried by porters. Variations of the sedan style include the close-coupled sedan, club sedan, convertible sedan, fastback sedan, hardtop sedan, notchback sedan, and sedanet. Definition A sedan () is a car with a closed body (i.e., a fixed metal roof) with the engine, passengers, and cargo in separate compartments. This broad definition does not differentiate sedans from various other car body styles. Still, in practice, the typical characteristics of sedans are: a B-pillar (between the front and rear windows) that supports the roof; two rows of seats; a three-box design with the engine at the front and the cargo area at the rear; a less steeply sloping roofline than a coupé, resulting in increased headroom for rear passengers and a less sporting appearance; and a rear interior volume of at least . It is sometimes suggested that sedans must have four doors (to provide a simple distinction between sedans and two-door coupés); others state that a sedan can have four or two doors. Although the sloping rear roofline defined the coupe, the design element has become common on many body styles with manufacturers increasingly "cross-pollinating" the style so that terms such as sedan and coupé have been loosely interpreted as "'four-door coupes' - an inherent contradiction in terms." When a manufacturer produces two-door sedan and four-door sedan versions of the same model, the shape and position of the greenhouse on both versions may be identical, with only the B-pillar positioned further back to accommodate the longer doors on the two-door versions. 
Etymology A sedan chair, a sophisticated litter, is an enclosed box with windows used to transport one seated person. Porters at the front and rear carry the chair with horizontal poles. Litters date back to long before ancient Egypt, India, and China. Sedan chairs were developed in the 1630s. Etymologists suggest the name of the chair very probably came through varieties of Italian from the Latin sedere, or the Proto-Indo-European root "sed-" meaning "to sit." The first recorded use of sedan for an automobile body occurred in 1912 when the Studebaker Four and Studebaker Six models were marketed as sedans. There were fully enclosed automobile bodies before 1912. Long before that time, the same fully enclosed but horse-drawn carriages were known as a brougham in the United Kingdom, berline in France, and berlina in Italy; the latter two have become the terms for sedans in these countries. It is sometimes stated that the 1899 Renault Voiturette Type B (a 2-seat car with an extra external seat for a footman/mechanic) was the first sedan, since it is the first known car to be produced with a roof. A one-off instance of similar coachwork is also known in a 1900 De Dion-Bouton Type D. A sedan is typically considered to be a fixed-roof car with at least four seats. Based on this definition, the earliest sedan was the 1911 Speedwell, which was manufactured in the United States. International terminology In American English, Latin American Spanish, and Brazilian Portuguese, the term sedan is used (accented as sedán in Spanish). In British English, a car of this configuration is called a saloon (). Hatchback sedans are known simply as hatchbacks (not hatchback saloons); long-wheelbase luxury saloons with a division between the driver and passengers are limousines. In Australia and New Zealand, sedan is now predominantly used; they were previously simply cars. In the 21st century, saloon remains in the long-established names of particular motor races. 
In other languages, sedans are known as berline (French), berlina (European Spanish, European Portuguese, Romanian, and Italian), though they may include hatchbacks. These names, like the sedan, all come from forms of passenger transport used before the advent of automobiles. In German, a sedan is called Limousine and a limousine is a Stretch-Limousine. In the United States, two-door sedan models were marketed as Tudor in the Ford Model A (1927–1931) series. Automakers use different terms to differentiate their products and for Ford's sedan body styles "the tudor (2-door) and fordor (4-door) were marketing terms designed to stick in the minds of the public." Ford continued to use the Tudor name for 5-window coupes, 2-door convertibles, and roadsters since all had two doors. The Tudor name was also used to describe the Škoda 1101/1102 introduced in 1946. The name was popularized by the public for the two-door model and was then applied by the automaker to the entire line, which included four-door sedan and station wagon versions. Standard styles Notchback sedans In the United States, the notchback sedan distinguishes models with a horizontal trunk lid. The term is generally only referred to in marketing when it is necessary to differentiate between two sedan body styles (e.g., notchback and fastback) of the same model range. Liftback sedans Several sedans have a fastback profile but a hatchback-style tailgate is hinged at the roof. Examples include the Peugeot 309, Škoda Octavia, Hyundai Elantra XD, Chevrolet Malibu Maxx, BMW 4 Series Grand Coupe, Audi A5 Sportback, and Tesla Model S. The names hatchback and sedan are often used to differentiate between body styles of the same model. To avoid confusion, the term hatchback sedan is not often used. Fastback sedans There have been many sedans with a fastback style. Hardtop sedans Hardtop sedans were a popular body style in the United States from the 1950s to the 1970s. 
Hardtops are manufactured without a B-pillar, leaving uninterrupted open space or, when closed, glass along the side of the vehicle. The top was intended to look like a convertible's top. However, it was fixed and made of hard material that did not fold. All manufacturers in the United States from the early 1950s into the 1970s provided at least a two-door hardtop model in their range and a four-door hardtop. The lack of side bracing demanded a strong, heavy chassis frame to combat unavoidable flexing. The pillarless design was also available in four-door models using unibody construction. For example, Chrysler moved to unibody designs for most of its models in 1960, and American Motors Corporation (AMC) offered four-door sedans, as well as a four-door station wagon, from 1958 until 1960 in the Rambler and Ambassador series. In 1973, the US government passed Federal Motor Vehicle Safety Standard 216, creating a standard roof strength test to measure the integrity of roof structure in motor vehicles, to come into effect some years later. The objective was to reduce deaths and injuries due to the car's roof crushing into the passenger compartment in case of a rollover crash. Hardtop sedan body style production ended with the 1978 Chrysler Newport. Roofs were often available with a standard or optional vinyl cover. The structural B-pillar design was minimized by styling methods like matt black finishes. Stylists and engineers soon developed more subtle solutions. 
Models of close-coupled sedans include the Chrysler Imperial, Duesenberg Model A, and Packard 745. Coach sedans A two-door sedan for four or five passengers but with less room for passengers than a standard sedan. A Coach body has no external trunk for luggage. Haajanen says it can be difficult to tell the difference between a Club and a Brougham and a Coach body, as if manufacturers were more concerned with marketing their product than adhering to strict body style definitions. Close-coupled saloons Close-coupled saloons originated as four-door thoroughbred sporting horse-drawn carriages with little room for rear passengers' feet. In automotive use, manufacturers in the United Kingdom used the term to develop the chummy body, where passengers were forced to be friendly because they were tightly packed. They provided weather protection for extra passengers in what would otherwise be a two-seater car. Two-door versions would be described in the United States and France as coach bodies. A postwar example is the Rover 3 Litre Coupé. Club sedans Produced in the United States from the mid-1920s to the mid-1950s, the name club sedan was used for highly appointed models using the sedan chassis. Some people describe a club sedan as a two-door vehicle with a body style otherwise identical to the sedan models in the range. Others describe a club sedan as having either two or four doors and a shorter roof and therefore less interior space than the other sedan models in the range. Club sedan originates from a railroad train's club carriage (i.e., the lounge or parlour carriage). Sedanets From the 1910s to the 1950s, several United States manufacturers have named models either Sedanet or Sedanette. The term originated as a smaller version of the sedan; however, it has also been used for convertibles and fastback coupes. 
Models that have been called Sedanet or Sedanette include the 1917 Dort Sedanet, King, 1919 Lexington, 1930s Cadillac Fleetwood Sedanette, 1949 Cadillac Series 62 Sedanette, 1942-1951 Buick Super Sedanet, and 1956 Studebaker.
Technology
Motorized road transport
null
381399
https://en.wikipedia.org/wiki/Hydroelectricity
Hydroelectricity
Hydroelectricity, or hydroelectric power, is electricity generated from hydropower (water power). Hydropower supplies 15% of the world's electricity, almost 4,210 TWh in 2023, which is more than all other renewable sources combined and also more than nuclear power. Hydropower can provide large amounts of low-carbon electricity on demand, making it a key element for creating secure and clean electricity supply systems. A hydroelectric power station that has a dam and reservoir is a flexible source, since the amount of electricity produced can be increased or decreased in seconds or minutes in response to varying electricity demand. Once a hydroelectric complex is constructed, it produces no direct waste, and almost always emits considerably less greenhouse gas than fossil fuel-powered energy plants. However, when constructed in lowland rainforest areas, where part of the forest is inundated, substantial amounts of greenhouse gases may be emitted. Construction of a hydroelectric complex can have significant environmental impact, principally in loss of arable land and population displacement. They also disrupt the natural ecology of the river involved, affecting habitats and ecosystems, and siltation and erosion patterns. While dams can ameliorate the risks of flooding, dam failure can be catastrophic. In 2021, global installed hydropower electrical capacity reached almost 1,400 GW, the highest among all renewable energy technologies. Hydroelectricity plays a leading role in countries like Brazil, Norway and China, but there are geographical limits and environmental issues. Tidal power can be used in coastal regions. China added 24 GW in 2022, accounting for nearly three-quarters of global hydropower capacity additions. Europe added 2 GW, the largest amount for the region since 1990. Meanwhile, globally, hydropower generation increased by 70 TWh (up 2%) in 2022 and remains the largest renewable energy source, surpassing all other technologies combined. 
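As a rough cross-check, the figures quoted above (almost 4,210 TWh generated in 2023 against almost 1,400 GW installed in 2021) imply a world-average capacity factor of about one third. The sketch below assumes only those quoted numbers; since they come from different years, the result is an estimate, not an official statistic:

```python
# Back-of-envelope capacity-factor estimate from the figures quoted above.
# Generation is for 2023 and installed capacity for 2021, so this is only
# a rough cross-check.
generation_twh = 4210      # world hydropower generation, 2023 (TWh)
capacity_gw = 1400         # world installed hydropower capacity, 2021 (GW)
hours_per_year = 8760

capacity_tw = capacity_gw / 1000                  # GW -> TW
max_possible_twh = capacity_tw * hours_per_year   # if every plant ran flat out
capacity_factor = generation_twh / max_possible_twh

print(f"Implied average capacity factor: {capacity_factor:.1%}")  # about 34%
```

A capacity factor near one third is consistent with hydropower's role as a dispatchable source: reservoir plants deliberately run below full output much of the time so they can ramp up when demand peaks.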
History Hydropower has been used since ancient times to grind flour and perform other tasks. In the late 18th century hydraulic power provided the energy source needed for the start of the Industrial Revolution. In the mid-1700s, French engineer Bernard Forest de Bélidor published Architecture Hydraulique, which described vertical- and horizontal-axis hydraulic machines, and in 1771 Richard Arkwright's combination of water power, the water frame, and continuous production played a significant part in the development of the factory system, with modern employment practices. In the 1840s, hydraulic power networks were developed to generate and transmit hydro power to end users. By the late 19th century, the electrical generator was developed and could now be coupled with hydraulics. The growing demand arising from the Industrial Revolution would drive development as well. In 1878, the world's first hydroelectric power scheme was developed at Cragside in Northumberland, England, by William Armstrong. It was used to power a single arc lamp in his art gallery. The old Schoelkopf Power Station No. 1, US, near Niagara Falls, began to produce electricity in 1881. The first Edison hydroelectric power station, the Vulcan Street Plant, began operating September 30, 1882, in Appleton, Wisconsin, with an output of about 12.5 kilowatts. By 1886 there were 45 hydroelectric power stations in the United States and Canada, and by 1889 there were 200 in the United States alone. At the beginning of the 20th century, many small hydroelectric power stations were being constructed by commercial companies in mountains near metropolitan areas. In 1925, Grenoble, France held the International Exhibition of Hydropower and Tourism, which drew over one million visitors. By 1920, when 40% of the power produced in the United States was hydroelectric, the Federal Power Act was enacted into law. The Act created the Federal Power Commission to regulate hydroelectric power stations on federal land and water. 
As the power stations became larger, their associated dams developed additional purposes, including flood control, irrigation and navigation. Federal funding became necessary for large-scale development, and federally owned corporations, such as the Tennessee Valley Authority (1933) and the Bonneville Power Administration (1937), were created. Additionally, the Bureau of Reclamation, which had begun a series of western US irrigation projects in the early 20th century, was now constructing large hydroelectric projects such as the 1928 Hoover Dam. The United States Army Corps of Engineers was also involved in hydroelectric development, completing the Bonneville Dam in 1937 and being recognized by the Flood Control Act of 1936 as the premier federal flood control agency. Hydroelectric power stations continued to become larger throughout the 20th century. Hydropower was referred to as "white coal". Hoover Dam's initial power station was the world's largest hydroelectric power station in 1936; it was eclipsed by the Grand Coulee Dam in 1942. The Itaipu Dam opened in 1984 in South America as the largest, producing 14,000 MW, but was surpassed in 2008 by the Three Gorges Dam in China at 22,500 MW. Hydroelectricity would eventually supply some countries, including Norway, Democratic Republic of the Congo, Paraguay and Brazil, with over 85% of their electricity. Future potential In 2021 the International Energy Agency (IEA) said that more efforts are needed to help limit climate change. Some countries have highly developed their hydropower potential and have very little room for growth: Switzerland produces 88% of its potential and Mexico 80%. In 2022, the IEA released a main-case forecast of 141 GW of new hydropower capacity added over 2022–2027, which is slightly lower than the deployment achieved from 2017 to 2022. Because environmental permitting and construction times are long, the IEA estimates hydropower potential will remain limited, with only an additional 40 GW deemed possible in the accelerated case. 
Modernization of existing infrastructure In 2021 the IEA said that major modernisation refurbishments are required. Generating methods Conventional (dams) Most hydroelectric power comes from the potential energy of dammed water driving a water turbine and generator. The power extracted from the water depends on the volume and on the difference in height between the source and the water's outflow. This height difference is called the head. A large pipe (the "penstock") delivers water from the reservoir to the turbine. Pumped-storage This method produces electricity to supply high peak demands by moving water between reservoirs at different elevations. At times of low electrical demand, the excess generation capacity is used to pump water into the higher reservoir, thus providing demand side response. When the demand becomes greater, water is released back into the lower reservoir through a turbine. In 2021 pumped-storage schemes provided almost 85% of the world's 190 GW of grid energy storage and improve the daily capacity factor of the generation system. Pumped storage is not an energy source, and appears as a negative number in listings. Run-of-the-river Run-of-the-river hydroelectric stations are those with small or no reservoir capacity, so that only the water coming from upstream is available for generation at that moment, and any oversupply must pass unused. A constant supply of water from a lake or existing reservoir upstream is a significant advantage in choosing sites for run-of-the-river. Tide A tidal power station makes use of the daily rise and fall of ocean water due to tides; such sources are highly predictable, and if conditions permit construction of reservoirs, can also be dispatchable to generate power during high demand periods. Less common types of hydro schemes use water's kinetic energy or undammed sources such as undershot water wheels. Tidal power is viable in a relatively small number of locations around the world. 
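The storage role of pumped hydro described above can be illustrated with the gravitational potential energy relation E = η ρ V g h. This is a back-of-the-envelope sketch; the reservoir volume, head, and round-trip efficiency below are illustrative assumptions, not figures from the article.

```python
# Rough estimate of recoverable energy for a pumped-storage plant.
# The reservoir figures used here are illustrative assumptions.
RHO = 1000.0   # density of water, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def stored_energy_mwh(volume_m3, head_m, round_trip_efficiency=0.8):
    """Recoverable energy (MWh) for water held in the upper reservoir,
    after round-trip (pumping plus generating) losses."""
    joules = RHO * volume_m3 * G * head_m * round_trip_efficiency
    return joules / 3.6e9  # 1 MWh = 3.6e9 J

# A hypothetical 10 million m^3 upper reservoir with a 300 m head:
energy = stored_energy_mwh(10e6, 300)  # ~6,500 MWh
```

The cubic dependence on none of the parameters (the formula is linear in each) makes the trade-off clear: doubling either the head or the reservoir volume doubles the stored energy.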
Sizes, types and capacities of hydroelectric facilities The classification of hydropower plants starts with two top-level categories: small hydropower plants (SHP) and large hydropower plants (LHP). The classification of a plant as an SHP or LHP is primarily based on its nameplate capacity. The threshold varies by country, but in any case a plant with a capacity of 50 MW or more is considered an LHP. For example, the SHP limit is 25 MW in China, 15 MW in India, and 10 MW in most of Europe. The SHP and LHP categories are further subdivided into many subcategories that are not mutually exclusive. For example, a low-head hydropower plant with a hydrostatic head of a few meters to a few tens of meters can be classified either as an SHP or an LHP. The other distinction between SHP and LHP is the degree of water flow regulation: a typical SHP primarily uses the natural water discharge with very little regulation, in comparison to an LHP. Therefore, the term SHP is frequently used as a synonym for the run-of-the-river power plant. Large facilities The largest power producers in the world are hydroelectric power stations, with some hydroelectric facilities capable of generating more than double the installed capacities of the current largest nuclear power stations. Although no official definition exists for the capacity range of large hydroelectric power stations, facilities of more than a few hundred megawatts are generally considered large hydroelectric facilities. Currently, only seven facilities over 10 GW (10,000 MW) are in operation worldwide; see table below. Small Small hydro is hydroelectric power on a scale serving a small community or industrial plant. The definition of a small hydro project varies, but a generating capacity of up to 10 megawatts (MW) is generally accepted as the upper limit. This limit may be stretched somewhat higher in Canada and the United States. 
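The capacity-based SHP/LHP split described above can be sketched as a small helper function. The function name and the choice of a per-country threshold parameter are ours; the thresholds themselves are the ones given in the text.

```python
def classify_hydro_plant(capacity_mw, shp_limit_mw=10.0):
    """Classify a plant as small (SHP) or large (LHP) hydropower by
    nameplate capacity. The SHP threshold varies by country (e.g.
    25 MW in China, 15 MW in India, 10 MW in most of Europe), but a
    plant of 50 MW or more is always considered large."""
    if capacity_mw >= 50:
        return "LHP"
    return "SHP" if capacity_mw < shp_limit_mw else "LHP"
```

For instance, a 20 MW plant would count as small in China (threshold 25 MW) but large in most of Europe (threshold 10 MW), while a 60 MW plant is large everywhere.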
Small hydro stations may be connected to conventional electrical distribution networks as a source of low-cost renewable energy. Alternatively, small hydro projects may be built in isolated areas that would be uneconomic to serve from a grid, or in areas where there is no national electrical distribution network. Since small hydro projects usually have minimal reservoirs and civil construction work, they are seen as having a relatively low environmental impact compared to large hydro. This decreased environmental impact depends strongly on the balance between stream flow and power production. Micro Micro hydro means hydroelectric power installations that typically produce up to 100 kW of power. These installations can provide power to an isolated home or small community, or are sometimes connected to electric power networks. There are many of these installations around the world, particularly in developing nations, as they can provide an economical source of energy without purchase of fuel. Micro hydro systems complement photovoltaic solar energy systems because in many areas water flow, and thus available hydro power, is highest in the winter when solar energy is at a minimum. Pico Pico hydro is hydroelectric power generation of under 5 kW. It is useful in small, remote communities that require only a small amount of electricity. For example, the 1.1 kW Intermediate Technology Development Group Pico Hydro Project in Kenya supplies 57 homes with very small electric loads (e.g., a couple of lights and a phone charger, or a small TV/radio). Even smaller turbines of 200–300 W may power a few homes in a developing country with a drop of only 1 m. A pico-hydro setup is typically run-of-the-river, meaning that dams are not used; rather, pipes divert some of the flow, drop this down a gradient, and pass it through the turbine before returning it to the stream. 
Underground An underground power station is generally used at large facilities and makes use of a large natural height difference between two waterways, such as a waterfall or mountain lake. A tunnel is constructed to take water from the high reservoir to the generating hall, built in a cavern near the lowest point of the water tunnel, with a horizontal tailrace taking water away to the lower outlet waterway. Calculating available power A simple formula for approximating electric power production at a hydroelectric station is P = η ρ Q g Δh = η ṁ g Δh, where P is power (in watts), η (eta) is the coefficient of efficiency (a unitless scalar ranging from 0 for completely inefficient to 1 for completely efficient), ρ (rho) is the density of water (~1000 kg/m3), Q is the volumetric flow rate (in m3/s), ṁ = ρQ is the mass flow rate (in kg/s), Δh (Delta h) is the change in height (in meters), and g is the acceleration due to gravity (9.8 m/s2). Efficiency is often higher (that is, closer to 1) with larger and more modern turbines. Annual electric energy production depends on the available water supply. In some installations, the water flow rate can vary by a factor of 10:1 over the course of a year. Properties Advantages Flexibility Hydropower is a flexible source of electricity since stations can be ramped up and down very quickly to adapt to changing energy demands. Hydro turbines have a start-up time of the order of a few minutes. Although battery power is quicker, its capacity is tiny compared to hydro. It takes less than 10 minutes to bring most hydro units from cold start-up to full load; this is quicker than nuclear and almost all fossil fuel power. Power generation can also be decreased quickly when there is surplus power generation. Hence the limited capacity of hydropower units is not generally used to produce base power except for vacating the flood pool or meeting downstream needs. Instead, it can serve as backup for non-hydro generators. 
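The available-power relation P = η ρ Q g Δh described in this section can be computed directly. The flow and head values in the example are illustrative, not taken from any particular station.

```python
def hydro_power_mw(efficiency, flow_m3_s, head_m, rho=1000.0, g=9.8):
    """Approximate electric power P = eta * rho * Q * g * delta_h,
    returned in megawatts."""
    return efficiency * rho * flow_m3_s * g * head_m / 1e6

# A station with 90%-efficient turbines, 80 m^3/s of flow and 100 m of head:
p = hydro_power_mw(0.9, 80, 100)  # ~70.6 MW
```

Because the formula is linear in both flow and head, halving the seasonal flow halves the output, which is why annual energy production depends so strongly on the available water supply.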
High value power The major advantage of conventional hydroelectric dams with reservoirs is their ability to store water at low cost for dispatch later as high value clean electricity. In 2021, the IEA estimated that the "reservoirs of all existing conventional hydropower plants combined can store a total of 1,500 terawatt-hours (TWh) of electrical energy in one full cycle" which was "about 170 times more energy than the global fleet of pumped storage hydropower plants". Battery storage capacity is not expected to overtake pumped storage during the 2020s. When used as peak power to meet demand, hydroelectricity has a higher value than baseload power and a much higher value compared to intermittent energy sources such as wind and solar. Hydroelectric stations have long economic lives, with some plants still in service after 50–100 years. Operating labor cost is also usually low, as plants are automated and have few personnel on site during normal operation. Where a dam serves multiple purposes, a hydroelectric station may be added with relatively low construction cost, providing a useful revenue stream to offset the costs of dam operation. It has been calculated that the sale of electricity from the Three Gorges Dam will cover the construction costs after 5 to 8 years of full generation. However, some data shows that in most countries large hydropower dams will be too costly and take too long to build to deliver a positive risk adjusted return, unless appropriate risk management measures are put in place. Suitability for industrial applications While many hydroelectric projects supply public electricity networks, some are created to serve specific industrial enterprises. Dedicated hydroelectric projects are often built to provide the substantial amounts of electricity needed for aluminium electrolytic plants, for example. 
During World War II, the Grand Coulee Dam's electricity was switched to support Alcoa aluminium production in Bellingham, Washington, United States, for American warplanes; only after the war was the dam allowed to provide irrigation and power to citizens in addition to the aluminium plants. In Suriname, the Brokopondo Reservoir was constructed to provide electricity for the Alcoa aluminium industry. New Zealand's Manapouri Power Station was constructed to supply electricity to the aluminium smelter at Tiwai Point. Reduced CO2 emissions Since hydroelectric dams do not use fuel, power generation does not produce carbon dioxide. While carbon dioxide is initially produced during construction of the project, and some methane is given off annually by reservoirs, hydro has one of the lowest lifecycle greenhouse gas emissions for electricity generation. The low greenhouse gas impact of hydroelectricity is found especially in temperate climates. Greater greenhouse gas emission impacts are found in tropical regions, because the reservoirs of power stations there produce a larger amount of methane than those in temperate areas. Like other non-fossil fuel sources, hydropower also has no emissions of sulfur dioxide, nitrogen oxides, or other particulates. Other uses of the reservoir Reservoirs created by hydroelectric schemes often provide facilities for water sports, and become tourist attractions themselves. In some countries, aquaculture in reservoirs is common. Multi-use dams installed for irrigation support agriculture with a relatively constant water supply. Large hydro dams can control floods, which would otherwise affect people living downstream of the project. Managing dams which are also used for other purposes, such as irrigation, is complicated. Disadvantages In 2021 the IEA called for "robust sustainability standards for all hydropower development with streamlined rules and regulations". 
Ecosystem damage and loss of land Large reservoirs associated with traditional hydroelectric power stations result in submersion of extensive areas upstream of the dams, sometimes destroying biologically rich and productive lowland and riverine valley forests, marshland and grasslands. Damming interrupts the flow of rivers and can harm local ecosystems, and building large dams and reservoirs often involves displacing people and wildlife. The loss of land is often exacerbated by habitat fragmentation of surrounding areas caused by the reservoir. Hydroelectric projects can be disruptive to surrounding aquatic ecosystems both upstream and downstream of the plant site. Generation of hydroelectric power changes the downstream river environment. Water exiting a turbine usually contains very little suspended sediment, which can lead to scouring of river beds and loss of riverbanks. The turbines also kill a large portion of the fauna passing through; for instance, about 70% of the eels passing through a turbine perish immediately. Since turbine gates are often opened intermittently, rapid or even daily fluctuations in river flow are observed. Drought and water loss by evaporation Drought and seasonal changes in rainfall can severely limit hydropower. Water may also be lost by evaporation. Siltation and flow shortage When water flows it has the ability to transport particles heavier than itself downstream. This has a negative effect on dams and subsequently their power stations, particularly those on rivers or within catchment areas with high siltation. Siltation can fill a reservoir and reduce its capacity to control floods, along with causing additional horizontal pressure on the upstream portion of the dam. Eventually, some reservoirs can become full of sediment and useless, or over-top during a flood and fail. Changes in the amount of river flow will correlate with the amount of energy produced by a dam. 
Lower river flows will reduce the amount of live storage in a reservoir, therefore reducing the amount of water that can be used for hydroelectricity. The result of diminished river flow can be power shortages in areas that depend heavily on hydroelectric power. The risk of flow shortage may increase as a result of climate change. One study of the Colorado River in the United States suggests that modest climate changes, such as an increase in temperature of 2 degrees Celsius resulting in a 10% decline in precipitation, might reduce river run-off by up to 40%. Brazil in particular is vulnerable due to its heavy reliance on hydroelectricity, as increasing temperatures, lower water flow and alterations in the rainfall regime could reduce total energy production by 7% annually by the end of the century. Methane emissions (from reservoirs) The greenhouse-gas benefit of hydropower is smaller in tropical regions. In lowland rainforest areas, where inundation of a part of the forest is necessary, it has been noted that the reservoirs of power plants produce substantial amounts of methane. This is due to plant material in flooded areas decaying in an anaerobic environment and forming methane, a greenhouse gas. According to the World Commission on Dams report, where the reservoir is large compared to the generating capacity (less than 100 watts per square metre of surface area) and no clearing of the forests in the area was undertaken prior to impoundment of the reservoir, greenhouse gas emissions from the reservoir may be higher than those of a conventional oil-fired thermal generation plant. In boreal reservoirs of Canada and Northern Europe, however, greenhouse gas emissions are typically only 2% to 8% of those from any kind of conventional fossil-fuel thermal generation. A new class of underwater logging operation that targets drowned forests can mitigate the effect of forest decay. 
Relocation Another disadvantage of hydroelectric dams is the need to relocate the people living where the reservoirs are planned. In 2000, the World Commission on Dams estimated that dams had physically displaced 40–80 million people worldwide. Failure risks Because large conventional dammed-hydro facilities hold back large volumes of water, a failure due to poor construction, natural disasters or sabotage can be catastrophic to downriver settlements and infrastructure. During Typhoon Nina in 1975, the Banqiao Dam in Southern China failed when more than a year's worth of rain fell within 24 hours (see 1975 Banqiao Dam failure). The resulting flood caused the deaths of 26,000 people, and another 145,000 died from epidemics. Millions were left homeless. The creation of a dam in a geologically inappropriate location may cause disasters such as the 1963 disaster at the Vajont Dam in Italy, where almost 2,000 people died. The Malpasset Dam in Fréjus on the French Riviera (Côte d'Azur), southern France, collapsed on December 2, 1959, killing 423 people in the resulting flood. Smaller dams and micro hydro facilities create less risk, but can form continuing hazards even after being decommissioned. For example, the small earthen embankment Kelly Barnes Dam failed in 1977, twenty years after its power station was decommissioned, causing 39 deaths. Comparison and interactions with other methods of power generation Hydroelectricity eliminates the flue gas emissions from fossil fuel combustion, including pollutants such as sulfur dioxide, nitric oxide, carbon monoxide, dust, and mercury in the coal. Hydroelectricity also avoids the hazards of coal mining and the indirect health effects of coal emissions. In 2021 the IEA said that government energy policy should "price in the value of the multiple public benefits provided by hydropower plants". Nuclear power Nuclear power is relatively inflexible, although it can reduce its output reasonably quickly. 
Since the cost of nuclear power is dominated by its high infrastructure costs, the cost per unit energy goes up significantly with low production. Because of this, nuclear power is mostly used for baseload. By way of contrast, hydroelectricity can supply peak power at much lower cost. Hydroelectricity is thus often used to complement nuclear or other sources for load following. Country examples where they are paired in a close to 50/50 share include the electric grid in Switzerland, the electricity sector in Sweden and, to a lesser extent, Ukraine and the electricity sector in Finland. Wind power Wind power goes through predictable variation by season, but is intermittent on a daily basis. Maximum wind generation has little relationship to peak daily electricity consumption: the wind may peak at night, when power is not needed, or be still during the day, when electrical demand is highest. Occasionally weather patterns can result in low wind for days or weeks at a time; a hydroelectric reservoir capable of storing weeks of output is useful to balance generation on the grid. Peak wind power can be offset by minimum hydropower, and minimum wind can be offset with maximum hydropower. In this way the easily regulated character of hydroelectricity is used to compensate for the intermittent nature of wind power. Conversely, in some cases wind power can be used to spare water for later use in dry seasons. An example of this is Norway's trading with Sweden, Denmark, the Netherlands, Germany and the UK. Norway is 98% hydropower, while its flatland neighbors have wind power. In areas that do not have hydropower, pumped storage serves a similar role, but at a much higher cost and 20% lower efficiency. Hydro power by country In 2022, hydro generated 4,289 TWh, 15% of total electricity and half of renewables. Of the world total, China (30%) produced the most, followed by Brazil (10%), Canada (9.2%), the United States (5.8%) and Russia (4.6%). 
Paraguay produces nearly all of its electricity from hydro and exports far more than it uses. Larger plants tend to be built and operated by national governments, so most capacity (70%) is publicly owned, despite the fact that most plants (nearly 70%) are owned and operated by the private sector, as of 2021. The following table lists these data for each country: total generation from hydro in terawatt-hours, percent of that country's generation that was hydro, total hydro capacity in gigawatts, percent growth in hydro capacity, and the hydro capacity factor for that year. Data are sourced from Ember dating to the year 2023 unless otherwise specified. Only includes countries with more than 1 TWh of generation. Links for each location go to the relevant hydro power page, when available. Economics The weighted average cost of capital is a major factor.
Franck–Hertz experiment
The Franck–Hertz experiment was the first electrical measurement to clearly show the quantum nature of atoms. It was presented on April 24, 1914, to the German Physical Society in a paper by James Franck and Gustav Hertz. Franck and Hertz had designed a vacuum tube for studying energetic electrons that flew through a thin vapor of mercury atoms. They discovered that, when an electron collided with a mercury atom, it could lose only a specific quantity (4.9 electron volts) of its kinetic energy before flying away. This energy loss corresponds to decelerating the electron from a speed of about 1.3 million metres per second to zero. A faster electron does not decelerate completely after a collision, but loses precisely the same amount of its kinetic energy. Slower electrons merely bounce off mercury atoms without losing any significant speed or kinetic energy. These experimental results proved to be consistent with the Bohr model for atoms that had been proposed the previous year by Niels Bohr. The Bohr model was a precursor of quantum mechanics and of the electron shell model of atoms. Its key feature was that an electron inside an atom occupies one of the atom's "quantum energy levels". Before the collision, an electron inside the mercury atom occupies its lowest available energy level. After the collision, the electron inside occupies a higher energy level with 4.9 electronvolts (eV) more energy. This means that the electron is more loosely bound to the mercury atom. There were no intermediate levels or possibilities in Bohr's quantum model. This feature was "revolutionary" because it was inconsistent with the expectation that an electron could be bound to an atom's nucleus by any amount of energy. In a second paper presented in May 1914, Franck and Hertz reported on the light emission by the mercury atoms that had absorbed energy from collisions. 
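The quoted speed of about 1.3 million metres per second follows directly from the 4.9 eV energy loss via the classical kinetic-energy relation v = sqrt(2E/m), using standard values for the electronvolt and the electron mass:

```python
import math

EV = 1.602176634e-19    # joules per electronvolt
M_E = 9.1093837015e-31  # electron rest mass, kg

def speed_from_kinetic_energy(energy_ev):
    """Classical speed of an electron with the given kinetic energy,
    from (1/2) m v^2 = E, i.e. v = sqrt(2E/m)."""
    return math.sqrt(2 * energy_ev * EV / M_E)

v = speed_from_kinetic_energy(4.9)  # ~1.31e6 m/s
```

At this speed the classical approximation is adequate: v is well under 1% of the speed of light, so relativistic corrections are negligible.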
They showed that the wavelength of this ultraviolet light corresponded exactly to the 4.9 eV of energy that the flying electron had lost. The relationship between energy and wavelength had also been predicted by Bohr, who had followed a suggestion made by Hendrik Lorentz at the 1911 Solvay Congress: after Einstein's talk on quantum structure, Lorentz proposed that the energy of a rotator be set equal to nhν. Bohr had incorporated this formula, proposed by Lorentz and others, into his 1913 atomic model, and the quantization of the mercury atoms observed by Franck and Hertz matched it. After a presentation of these results by Franck a few years later, Albert Einstein is said to have remarked, "It's so lovely it makes you cry." On December 10, 1926, Franck and Hertz were awarded the 1925 Nobel Prize in Physics "for their discovery of the laws governing the impact of an electron upon an atom". Experiment Franck and Hertz's original experiment used a heated vacuum tube containing a drop of mercury; they reported a tube temperature of 115 °C, at which the vapor pressure of mercury is about 100 pascals (about a thousandth of atmospheric pressure). A contemporary Franck–Hertz tube is shown in the photograph. It is fitted with three electrodes: an electron-emitting, hot cathode; a metal mesh grid; and an anode. The grid's voltage is positive relative to the cathode, so that electrons emitted from the hot cathode are drawn to it. The electric current measured in the experiment is due to electrons that pass through the grid and reach the anode. The anode's electric potential is slightly negative relative to the grid, so that electrons that reach the anode have at least a corresponding amount of kinetic energy after passing the grid. 
The graphs published by Franck and Hertz (see figure) show the dependence of the electric current flowing out of the anode upon the electric potential between the grid and the cathode. At low potential differences—up to 4.9 volts—the current through the tube increased steadily with increasing potential difference. This behavior is typical of true vacuum tubes that don't contain mercury vapor; larger voltages lead to larger "space-charge limited current". At 4.9 volts the current drops sharply, almost back to zero. The current then increases steadily once again as the voltage is increased further, until 9.8 volts is reached (exactly 4.9+4.9 volts). At 9.8 volts a similar sharp drop is observed. While it isn't evident in the original measurements of the figure, this series of dips in current at approximately 4.9 volt increments continues to potentials of at least 70 volts. Franck and Hertz noted in their first paper that the 4.9 eV characteristic energy of their experiment corresponded well to one of the wavelengths of light emitted by mercury atoms in gas discharges. They were using a quantum relationship between the energy of excitation and the corresponding wavelength of light, which they broadly attributed to Johannes Stark and to Arnold Sommerfeld; it predicts that 4.9 eV corresponds to light with a 254 nm wavelength. The same relationship was also incorporated in Einstein's 1905 photon theory of the photoelectric effect. In a second paper, Franck and Hertz reported the optical emission from their tubes, which emitted light with a single prominent wavelength 254 nm. The figure at the right shows the spectrum of a Franck–Hertz tube; nearly all of the light emitted has a single wavelength. For reference, the figure also shows the spectrum for a mercury gas discharge light, which emits light at several wavelengths besides 254 nm. The figure is based on the original spectra published by Franck and Hertz in 1914. 
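The quantum relationship between excitation energy and emitted wavelength used above, λ = hc/E, can be checked numerically; it maps 4.9 eV to roughly 253–254 nm, the prominent ultraviolet mercury line:

```python
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light in vacuum, m/s
EV = 1.602176634e-19  # joules per electronvolt

def wavelength_nm(energy_ev):
    """Photon wavelength in nanometres for a given photon energy,
    from lambda = h*c/E."""
    return H * C / (energy_ev * EV) * 1e9

lam = wavelength_nm(4.9)  # ~253 nm
```

A handy shortcut is hc ≈ 1240 eV·nm, so 1240/4.9 ≈ 253 nm, in agreement with the 254 nm emission Franck and Hertz observed.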
The fact that the Franck–Hertz tube emitted just the single wavelength, corresponding nearly exactly to the voltage period they had measured, was very important. Modeling of electron collisions with atoms Franck and Hertz explained their experiment in terms of elastic and inelastic collisions between the electrons and the mercury atoms. Slowly moving electrons collide elastically with the mercury atoms. This means that the direction in which the electron is moving is altered by the collision, but its speed is unchanged. An elastic collision is illustrated in the figure, where the length of the arrow indicates the electron's speed. The mercury atom is unaffected by the collision, mostly because it is about four hundred thousand times more massive than an electron. When the speed of the electron exceeds about 1.3 million metres per second, collisions with a mercury atom become inelastic. This speed corresponds to a kinetic energy of 4.9 eV, which is deposited into the mercury atom. As shown in the figure, the electron's speed is reduced, and the mercury atom becomes "excited". A short time later, the 4.9 eV of energy that was deposited into the mercury atom is released as ultraviolet light that has a wavelength of precisely 254 nm. Following light emission, the mercury atom returns to its original, unexcited state. If electrons emitted from the cathode flew freely until they arrived at the grid, they would acquire a kinetic energy that's proportional to the voltage applied to the grid. 1 eV of kinetic energy corresponds to a potential difference of 1 volt between the grid and the cathode. Elastic collisions with the mercury atoms increase the time it takes for an electron to arrive at the grid, but the average kinetic energy of electrons arriving there isn't much affected. When the grid voltage reaches 4.9 V, electron collisions near the grid become inelastic, and the electrons are greatly slowed. 
The kinetic energy of a typical electron arriving at the grid is reduced so much that it cannot travel further to reach the anode, whose voltage is set to slightly repel electrons. The current of electrons reaching the anode falls, as seen in the graph. Further increases in the grid voltage restore enough energy to the electrons that suffered inelastic collisions that they can again reach the anode. The current rises again as the grid potential rises beyond 4.9 V. At 9.8 V, the situation changes again. Electrons that have traveled roughly halfway from the cathode to the grid have already acquired enough energy to suffer a first inelastic collision. As they continue slowly towards the grid from the midway point, their kinetic energy builds up again, but as they reach the grid they can suffer a second inelastic collision. Once again, the current to the anode drops. At intervals of 4.9 volts this process will repeat; each time the electrons will undergo one additional inelastic collision. Early quantum theory While Franck and Hertz were unaware of it when they published their experiments in 1914, in 1913 Niels Bohr had published a model for atoms that was very successful in accounting for the optical properties of atomic hydrogen. These were usually observed in gas discharges, which emitted light at a series of wavelengths. Ordinary light sources like incandescent light bulbs emit light at all wavelengths. Bohr had calculated the wavelengths emitted by hydrogen very accurately. The fundamental assumption of the Bohr model concerns the possible binding energies of an electron to the nucleus of an atom. The atom can be ionized if a collision with another particle supplies at least this binding energy. This frees the electron from the atom, and leaves a positively charged ion behind. There is an analogy with satellites orbiting the Earth. Every satellite has its own orbit, and practically any orbital distance, and any satellite binding energy, is possible. 
Since an electron is attracted to the positive charge of the atomic nucleus by a similar force, so-called "classical" calculations suggest that any binding energy should also be possible for electrons. However, Bohr assumed that only a specific series of binding energies occur, which correspond to the "quantum energy levels" for the electron. An electron is normally found in the lowest energy level, with the largest binding energy. Additional levels lie higher, with smaller binding energies. Intermediate binding energies lying between these levels are not permitted. This was a revolutionary assumption. Franck and Hertz had proposed that the 4.9 V characteristic of their experiments was due to ionization of mercury atoms by collisions with the flying electrons emitted at the cathode. In 1915 Bohr published a paper noting that the measurements of Franck and Hertz were more consistent with the assumption of quantum levels in his own model for atoms. In the Bohr model, the collision excited an internal electron within the atom from its lowest level to the first quantum level above it. The Bohr model also predicted that light would be emitted as the internal electron returned from its excited quantum level to the lowest one; its wavelength corresponded to the energy difference of the atom's internal levels, which has been called the Bohr relation. Franck and Hertz's observation of emission from their tube at 254 nm was also consistent with Bohr's perspective. Writing following the end of World War I in 1918, Franck and Hertz had largely adopted the Bohr perspective for interpreting their experiment, which has become one of the experimental pillars of quantum mechanics. 
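The Bohr relation mentioned here can be stated compactly in modern notation, with h the Planck constant and ν the frequency of the emitted light:

```latex
% Bohr relation: the emitted photon's energy equals the spacing
% of the two atomic levels involved in the transition
h\nu = E_2 - E_1
% For the mercury transition in the Franck–Hertz experiment,
% E_2 - E_1 = 4.9\,\mathrm{eV}, giving
% \lambda = hc/(E_2 - E_1) \approx 254\,\mathrm{nm}.
```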
As Abraham Pais described it, "Now the beauty of Franck and Hertz's work lies not only in the measurement of the energy loss E2-E1 of the impinging electron, but they also observed that, when the energy of that electron exceeds 4.9 eV, mercury begins to emit ultraviolet light of a definite frequency ν as defined in the above formula. Thereby they gave (unwittingly at first) the first direct experimental proof of the Bohr relation!" Franck himself emphasized the importance of the ultraviolet emission experiment in an epilogue to the 1960 Physical Science Study Committee (PSSC) film about the Franck–Hertz experiment. Experiment with neon In instructional laboratories, the Franck–Hertz experiment is often done using neon gas, which shows the onset of inelastic collisions with a visible orange glow in the vacuum tube, and which also is non-toxic, should the tube be broken. With mercury tubes, the model for elastic and inelastic collisions predicts that there should be narrow bands between the anode and the grid where the mercury emits light, but the light is ultraviolet and invisible. With neon, the Franck–Hertz voltage interval is 18.7 volts, and an orange glow appears near the grid when 18.7 volts is applied. This glow will move closer to the cathode with increasing accelerating potential, and indicates the locations where electrons have acquired the 18.7 eV required to excite a neon atom. At 37.4 volts two distinct glows will be visible: one midway between the cathode and grid, and one right at the accelerating grid. Higher potentials, spaced at 18.7 volt intervals, will result in additional glowing regions in the tube. An additional advantage of neon for instructional laboratories is that the tube can be used at room temperature. However, the wavelength of the visible emission is much longer than predicted by the Bohr relation and the 18.7 V interval. 
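Under the simple picture described above, a glowing layer forms wherever electrons have accumulated a whole multiple of 18.7 eV. Assuming an idealized uniform accelerating field (ignoring contact potentials and space charge), the layer positions can be sketched as fractions of the cathode–grid distance:

```python
import math

NEON_STEP_V = 18.7  # Franck–Hertz voltage interval for neon

def glow_positions(accel_voltage):
    """Estimated positions of the glowing layers, as fractions of the
    cathode-to-grid distance, assuming a uniform field: the n-th layer
    sits where an electron has gained n * 18.7 eV."""
    # Small tolerance guards against floating-point rounding at exact multiples.
    n_layers = math.floor(accel_voltage / NEON_STEP_V + 1e-9)
    return [n * NEON_STEP_V / accel_voltage for n in range(1, n_layers + 1)]

print(glow_positions(18.7))  # a single glow, right at the grid
print(glow_positions(37.4))  # one glow midway, one at the grid
```

Raising the voltage pushes each layer toward the cathode and adds a new one every 18.7 V, matching the behavior described in the text.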
A partial explanation for the orange light involves two atomic levels lying 16.6 eV and 18.7 eV above the lowest level. Electrons excited to the 18.7 eV level fall to the 16.6 eV level, with concomitant orange light emission.
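That 2.1 eV level spacing does correspond to orange light, as a one-line application of the photon-energy relation λ = hc/E shows (hc ≈ 1240 eV·nm is a rounded constant):

```python
HC_EV_NM = 1239.84  # h*c expressed in eV*nm (rounded)

# Energy released when an electron falls from the 18.7 eV level
# to the 16.6 eV level in neon
delta_e_ev = 18.7 - 16.6
wavelength_nm = HC_EV_NM / delta_e_ev
print(f"{wavelength_nm:.0f} nm")  # about 590 nm, i.e. orange light
```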
https://en.wikipedia.org/wiki/Macroscopic%20scale
Macroscopic scale
The macroscopic scale is the length scale on which objects or phenomena are large enough to be visible with the naked eye, without magnifying optical instruments. It is the opposite of microscopic. Overview When applied to physical phenomena and bodies, the macroscopic scale describes things as a person can directly perceive them, without the aid of magnifying devices. This is in contrast to observations (microscopy) or theories (microphysics, statistical physics) of objects of geometric lengths smaller than perhaps some hundreds of micrometres. A macroscopic view of a ball is just that: a ball. A microscopic view could reveal a thick round skin seemingly composed entirely of puckered cracks and fissures (as viewed through a microscope) or, further down in scale, a collection of molecules in a roughly spherical shape (as viewed through an electron microscope). An example of a physical theory that takes a deliberately macroscopic viewpoint is thermodynamics. An example of a topic that extends from macroscopic to microscopic viewpoints is histology. Classical and quantum mechanics are distinguished in a subtly different way than by the macroscopic–microscopic divide. At first glance one might think of them as differing simply in the size of objects that they describe, classical objects being far larger in mass and geometric size than quantum objects, for example a football versus a fine particle of dust. More refined consideration distinguishes classical and quantum mechanics on the basis that classical mechanics fails to recognize that matter and energy cannot be divided into infinitesimally small parcels, so that ultimately fine division reveals irreducibly granular features. The criterion of fineness is whether or not the interactions are described in terms of the Planck constant.
Roughly speaking, classical mechanics considers particles in mathematically idealized terms even as fine as geometrical points with no magnitude, still having their finite masses. Classical mechanics also considers mathematically idealized extended materials as geometrically continuously substantial. Such idealizations are useful for most everyday calculations, but may fail entirely for molecules, atoms, photons, and other elementary particles. In many ways, classical mechanics can be considered a mainly macroscopic theory. On the much smaller scale of atoms and molecules, classical mechanics may fail, and the interactions of particles are then described by quantum mechanics. Near the absolute minimum of temperature, the Bose–Einstein condensate exhibits effects on a macroscopic scale that demand description by quantum mechanics. In the quantum measurement problem, the issue of what constitutes macroscopic and what constitutes the quantum world is unresolved and possibly unsolvable. The related correspondence principle can be articulated thus: every macroscopic phenomenon can be formulated as a problem in quantum theory. A violation of the correspondence principle would thus ensure an empirical distinction between the macroscopic and the quantum. In pathology, macroscopic diagnostics generally involves gross pathology, in contrast to microscopic histopathology. The term "megascopic" is a synonym. "Macroscopic" may also refer to a "larger view", namely a view available only from a large perspective (a hypothetical "macroscope"). A macroscopic position could be considered the "big picture". High energy physics compared to low energy physics Particle physics, dealing with the smallest physical systems, is also known as high energy physics. Physics of larger length scales, including the macroscopic scale, is also known as low energy physics.
Intuitively, it might seem incorrect to associate "high energy" with the physics of very small, low mass–energy systems, like subatomic particles. By comparison, one gram of hydrogen, a macroscopic system, has ~ times the mass–energy of a single proton, a central object of study in high energy physics. Even an entire beam of protons circulated in the Large Hadron Collider, a high energy physics experiment, contains ~ protons, each with of energy, for a total beam energy of ~ or ~ 336.4 MJ, which is still ~ times lower than the mass–energy of a single gram of hydrogen. Yet, the macroscopic realm is "low energy physics", while that of quantum particles is "high energy physics". The reason for this is that the "high energy" refers to energy at the quantum particle level. While macroscopic systems indeed have a larger total energy content than any of their constituent quantum particles, there can be no experiment or other observation of this total energy without extracting the respective amount of energy from each of the quantum particles – which is exactly the domain of high energy physics. Daily experiences of matter and the Universe are characterized by very low energy. For example, the photon energy of visible light is about 1.8 to 3.2 eV. Similarly, the bond-dissociation energy of a carbon-carbon bond is about 3.6 eV. This is the energy scale manifesting at the macroscopic level, such as in chemical reactions. Even photons with far higher energy, gamma rays of the kind produced in radioactive decay, have photon energy that is almost always between and – still two orders of magnitude lower than the mass–energy of a single proton. Radioactive decay gamma rays are considered as part of nuclear physics, rather than high energy physics. Finally, when reaching the quantum particle level, the high energy domain is revealed. The proton has a mass–energy of ~ ; some other massive quantum particles, both elementary and hadronic, have yet higher mass–energies. 
Quantum particles with lower mass–energies are also part of high energy physics; they also have a mass–energy that is far higher than that at the macroscopic scale (such as electrons), or are equally involved in reactions at the particle level (such as neutrinos). Relativistic effects, as in particle accelerators and cosmic rays, can further increase the accelerated particles' energy by many orders of magnitude, as well as the total energy of the particles emanating from their collision and annihilation.
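The roughly 1.8–3.2 eV figure quoted above for visible-light photons follows directly from the photon-energy relation E = hc/λ, assuming a visible range of about 380–700 nm and the rounded constant hc ≈ 1240 eV·nm:

```python
HC_EV_NM = 1239.84  # h*c in eV*nm (rounded)

# Photon energies at the two ends of the visible spectrum
e_red_ev = HC_EV_NM / 700.0     # red end, lowest energy
e_violet_ev = HC_EV_NM / 380.0  # violet end, highest energy
print(f"visible photons: {e_red_ev:.2f} eV to {e_violet_ev:.2f} eV")
```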
https://en.wikipedia.org/wiki/Agkistrodon%20piscivorus
Agkistrodon piscivorus
Agkistrodon piscivorus is a species of venomous snake, a pit viper in the subfamily Crotalinae of the family Viperidae. It is one of the world's few semiaquatic vipers (along with the Florida cottonmouth), and is native to the Southeastern United States. As an adult, it is large and capable of delivering a painful and potentially fatal bite. When threatened, it may respond by coiling its body and displaying its fangs. Individuals may bite when feeling threatened or being handled in any way. It tends to be found in or near water, particularly in slow-moving and shallow lakes, streams, and marshes. It is a capable swimmer, and like several species of snakes, is known to occasionally enter bays and estuaries and swim between barrier islands and the mainland. The generic name is derived from the Greek words "fish-hook, hook" and "tooth", and the specific name comes from the Latin 'fish' and '(I) eat greedily, devour'; thus, the scientific name translates to "hook-toothed fish-eater". Common names include cottonmouth, northern cottonmouth, water moccasin, swamp moccasin, black moccasin, and simply viper. Many of the common names refer to the threat display, in which this species often stands its ground and gapes at an intruder, exposing the white lining of its mouth. Many scientists dislike the use of the term water moccasin since it can lead to confusion between the venomous cottonmouth and nonvenomous water snakes. Taxonomy and etymology Common names This is a list of common names for A. 
piscivorus, some of which also refer to other species: aquatic moccasin, black moccasin, black snake, black water viper, blunt-tail moccasin, Congo, copperhead, cottonmouth, cotton-mouthed snake, cottonmouth rattler, cottonmouth water moccasin, gaper, gapper, highland moccasin, lake moccasin, lowland moccasin, mangrove rattler, moccasin, moccasin snake, North American cottonmouth snake, North American water moccasin, North American water viper, pond moccasin, pond rattler, river moccasin, river rattler, rusty moccasin, saltwater rattler, short-tailed moccasin, short-tail rattler, small-tailed cottonmouth, snap-jaw, stub-tail, stub-tail snake, stump moccasin, stump-tail moccasin, stump-tail viper, swamp lion, swamp moccasin, swamp rattler, Texas moccasin, trap jaw, Troost's moccasin, true horn snake, true water moccasin, viper, water copperhead, water mamba, water moccasin, water mokeson, water pilot, water pit rattler, water pit viper, water rattlesnake, water viper, white-mouth moccasin, white-mouth rattler, and worm-tailed viper. Subspecies and taxonomic history For many decades, one species with three subspecies was formally recognized: eastern cottonmouth, A. p. piscivorus (Lacépède, 1789); western cottonmouth, A. p. leucostoma (Troost, 1836); and Florida cottonmouth, A. p. conanti Gloyd, 1969. However, a molecular (DNA-based) study published in 2014, applying phylogenetic methods (one implication being that no subspecies are recognized), changed the long-standing taxonomy. The resulting and current taxonomic arrangement recognizes two species and no subspecies. The western cottonmouth (A. p. leucostoma) was synonymized with the eastern cottonmouth (A. p. piscivorus) into one species (with the oldest published name, A. p. piscivorus, having priority). The Florida cottonmouth (A. p. conanti) is now recognized as a separate species.
Agkistrodon piscivorus (Lacépède, 1789), northern cottonmouth Agkistrodon conanti Gloyd, 1969, Florida cottonmouth (south Georgia and Florida peninsula) Anatomy and description Agkistrodon piscivorus is the largest species of the genus Agkistrodon. Adults commonly exceed in total length (including tail); females are typically smaller than males. Total length, per one study of adults, was . Average body mass has been found to be in males and in females. Occasionally, individuals may exceed in total length, especially in the eastern part of the range. Although larger ones have purportedly been seen in the wild, according to Gloyd and Conant (1990), the largest recorded specimen of A. p. piscivorus was in total length, based on a specimen caught in the Dismal Swamp region and given to the Philadelphia Zoological Garden. This snake had apparently been injured during capture, died several days later, and was measured when straight and relaxed. Large specimens can be extremely bulky, with the mass of a specimen of about in total length known to weigh . One might assume that an aquatic snake would have a small, narrow head that tapers towards the back to minimize drag in the water, especially when capturing prey. However, pit vipers, and cottonmouths in particular, have a bulky, triangular head that would seem poorly suited to the water, yet this is not the case. The broad head is distinct from the neck, and the snout is blunt in profile with the rim of the top of the head extending forwards slightly further than the mouth. Substantial cranial plates are present, although the parietal plates are often fragmented, especially towards the rear. A loreal scale is absent. Six to nine supralabials and eight to twelve infralabials are seen. At midbody, it has 23–27 rows of dorsal scales. All dorsal scale rows have keels, although those on the lowermost scale rows are weak.
Males have 130–145 ventral scales and 38–54 subcaudal scales; females have 128–144 ventrals and 36–50 subcaudals. Many of the latter may be divided. Though most specimens are almost or even totally black (with the exception of the head and facial markings), the color pattern may consist of a brown, gray, tan, yellowish-olive, or blackish ground color, which is overlaid with a series of 10–17 dark brown to almost black crossbands. These crossbands, which usually have black edges, are sometimes broken along the dorsal midline to form a series of staggered halfbands on either side of the body. These crossbands are visibly lighter in the center, almost matching the ground color, often contain irregular dark markings, and extend well down onto the ventral scales. The dorsal banding pattern fades with age, so older individuals are an almost uniform olive-brown, grayish-brown, or black. The belly is white, yellowish-white, or tan, marked with dark spots, and becomes darker posteriorly. The amount of dark pigment on the belly varies from virtually none to almost completely black. The head is a more or less uniform brown color, especially in A. p. piscivorus. Subadult specimens may exhibit the same kind of dark, parietal spots characteristic of A. contortrix, but sometimes these are still visible in adults. Eastern populations have a broad, dark, postocular stripe, bordered with pale pigment above and below, that is faint or absent in western populations. The underside of the head is generally whitish, cream, or tan. Juvenile and subadult specimens generally have a more contrasting color pattern, with dark crossbands on a lighter ground color. The ground color is then tan, brown, or reddish-brown. The tip of the tail is usually yellowish, becoming greenish-yellow or greenish in subadults, and then black in adults. On some juveniles, the banding pattern can also be seen on the tail. Young snakes wiggle the tips of their tails to lure prey animals.
This species is often confused with the copperhead, A. contortrix. This is especially true for juveniles, but differences exist. A. piscivorus has broad, dark stripes on the sides of its head that extend back from the eyes, whereas A. contortrix has only a thin, dark line that divides the pale supralabials from the somewhat darker color of the head. The watersnakes of the genus Nerodia are also similar in appearance, being thick-bodied with large heads, but they have round pupils, no loreal pit, a single anal plate, subcaudal scales that are divided throughout, and a distinctive overall color pattern. Venom Agkistrodon piscivorus venom is more toxic than that of A. contortrix, and is rich in powerful cytotoxins that destroy tissue. Although deaths are rare, the bite can leave scars, and, on occasion, require amputation. Absent an anaphylactic reaction in a bitten individual, however, the venom does not cause systemic reactions in victims and does not contain neurotoxic components present in numerous rattlesnake species. Bites can be effectively treated with CroFab antivenom; this serum is derived using venom components from four species of American pit vipers (the eastern and western diamondback rattlesnakes, the Mojave rattlesnake, and the cottonmouth). Bites from the cottonmouth are relatively frequent in the lower Mississippi River Valley and along the coast of the Gulf of Mexico, although fatalities are rare. Allen and Swindell (1948) compiled a record of A. piscivorus bites in Florida from newspaper accounts and data from the Bureau of Vital Statistics: 1934, eight bites and three fatalities (no further fatalities were recorded after this year); 1935, 10; 1936, 16; 1937, 7; 1938, 6; 1939, 5; 1940, 3; 1941, 6; 1942, 3; 1943, 1; 1944, 3; 1998, 1. Wright and Wright (1957) report having encountered these snakes on countless occasions, often almost stepping on them, but never being bitten.
In addition, they heard of no reports of any bites among 400 cypress cutters in the Okefenokee Swamp during the entire summer of 1921. These accounts suggest that the species is not particularly aggressive. Striking is a defense mechanism against predators: studies show that stressed snakes and those with elevated hormone levels are more likely to strike, and larger snakes are more likely to strike than smaller ones. Brown (1973) gave an average venom yield (dried) of 125 mg, with a range of 80–237 mg, along with values of 4.0, 2.2, 2.7, 3.5, 2.0 mg/kg IV, 4.8, 5.1, 4.0, 5.5, 3.8, 6.8 mg/kg IP and 25.8 mg/kg SC for toxicity. Wolff and Githens (1939) described a specimen that yielded 3.5 ml of venom during the first extraction and 4.0 ml five weeks later (1.094 grams of dried venom). The human lethal dose is unknown, but has been estimated at 100–150 mg. Symptoms commonly include ecchymosis and swelling. The pain is generally more severe than bites from the copperhead, but less so than those from rattlesnakes (Crotalus spp.). The formation of vesicles and bullae is less common than with rattlesnake bites, although necrosis can occur. Myokymia is sometimes reported (Norris R (2004). "Venom Poisoning in North American Reptiles". In: Campbell JA, Lamar WW (2004). The Venomous Reptiles of the Western Hemisphere. Ithaca and London: Comstock Publishing Associates. 870 pp. 1,500 plates.). However, the venom has strong proteolytic activity that can lead to severe tissue destruction. Geographic range A. piscivorus is found in the eastern US from the Great Dismal Swamp in southeast Virginia, south through the Florida peninsula and west to Arkansas, eastern and southern Oklahoma, and western and southern Georgia (excluding Lake Lanier and Lake Allatoona). A few records exist of the species being found along the Rio Grande in Texas, but these are thought to represent disjunct populations, now possibly eradicated.
The type locality given is "Carolina", although Schmidt (1953) proposed this be restricted to the area around Charleston, South Carolina. Snakes observed in the northern areas of this range are typically larger, older individuals. Campbell and Lamar (2004) mentioned this species as being found in Alabama, Arkansas, Florida, Georgia, Illinois, Indiana, Kentucky, Louisiana, Mississippi, Missouri, North Carolina, Oklahoma, South Carolina, Tennessee, Texas, and Virginia. Maps provided by Campbell and Lamar (2004) and Wright and Wright (1957) also indicate its presence in Western and Middle Tennessee and extreme southeastern Kansas, and limit it to the western part of Kentucky. In Georgia, it is found in the southern half of the state up to a few kilometers north of the Fall Line with few exceptions. Its range also includes the Ohio River Valley as far north as southern Indiana, and it inhabits many barrier islands off the coasts of the states where it is found. Habitat Agkistrodon piscivorus is the most aquatic species of the genus Agkistrodon, and is usually associated with bodies of water, such as creeks, streams, marshes, swamps, and the shores of ponds and lakes. Unlike the watersnakes with which it is commonly confused, the cottonmouth typically appears to float on top of the water when swimming, rather than keeping its body beneath the surface. The U.S. Navy (1991) describes it as inhabiting swamps, shallow lakes, and sluggish streams, but it is usually not found in swift, deep, cool water. Behler and King (1979) list its habitats as including lowland swamps, lakes, rivers, bayheads, sloughs, irrigation ditches, canals, rice fields, and small, clear, rocky, mountain streams. It is also found in brackish-water habitats and is sometimes seen swimming in salt water. It has been much more successful at colonizing Atlantic and Gulf Coast barrier islands than the copperhead. Even on these islands, though, it tends to favor freshwater marshes.
A study by Dunson and Freda (1985) describes it as not being particularly salt-tolerant. The snake is not limited to aquatic habitats, however, as Gloyd and Conant (1990) mentioned large specimens have been found more than a mile (1.6 km) from water. In various locations, the species is well-adapted to less moist environments, such as palmetto thickets, pine-palmetto forest, pine woods in East Texas, pine flatwoods in Florida, eastern deciduous dune forest, dune and beach areas, riparian forest, and prairies. Behavior and ecology In tests designed to measure the various behavioral responses by wild specimens to encounters with people, 23 of 45 (51%) tried to escape, while 28 of 36 (78%) resorted to threat displays and other defensive tactics. Only when they were picked up with a mechanical hand were they likely to bite. When sufficiently stressed or threatened, this species engages in a characteristic threat display that includes vibrating its tail and throwing its head back with its mouth open to display the startlingly white interior, often making a loud hiss while the neck and front part of the body are pulled into an S-shaped position (Carpenter, Charles C.; Gillingham, James C. (1990). "Ritualized Behavior in Agkistrodon and Allied Genera". pp. 523–531. In: Gloyd HK, Conant R (1990). Snakes of the Agkistrodon Complex: A Monographic Review. Society for the Study of Amphibians and Reptiles. 614 pp. 52 plates. LCCN 89-50342.). Many of its common names, including "cottonmouth" and "gaper", refer to this behavior, while its habit of snapping its jaws shut when anything touches its mouth has earned it the name "trap jaw" in some areas. Other defensive responses can include flattening the body and emitting a strong, pungent secretion from the anal glands located at the base of the tail. This musk may be ejected in thin jets if the snake is sufficiently agitated or restrained.
The smell has been likened to that of a billy goat, as well as to a genus of common flood-plain weeds, Pluchea, that also have a penetrating odor. Harmless watersnakes of the genus Nerodia are often mistaken for it. These are also semiaquatic, thick-bodied snakes with large heads that can be aggressive when provoked, but they behave differently. For example, watersnakes usually flee quickly into the water, while A. piscivorus often stands its ground with its threat display. In addition, watersnakes do not vibrate their tails when excited. A. piscivorus usually holds its head at an angle around 45° when swimming or crawling. Brown (1973) considered their heavy muscular bodies to be a striking characteristic, stating this made it difficult to hold them for venom extraction owing to their strength. This species may be active during the day and at night, but on bright, sunny days, they are usually found coiled or stretched out in the shade. In the morning and on cool days, they can often be seen basking in the sunlight. They often emerge at sunset to warm themselves on warm ground (e.g., sidewalks, roads) and then become very active throughout the night, when they are usually found swimming or crawling. Contrary to popular belief, they are capable of biting while under water. In the north, they hibernate during the winter. Neill (1947, 1948) made observations in Georgia, and noted they were one of the last species to seek shelter, often being found active until the first heavy frosts. At this point, they moved to higher ground and could be found in rotting pine stumps when the bark was torn away. These snakes could be quite active upon discovery and would then attempt to burrow more deeply into the soft wood or escape to the nearest water. In southeastern Virginia, Wood (1954) reported seeing migratory behavior in late October and early November.
During a period of three or four days, as many as 50 individuals could be seen swimming across Back Bay from the bayside swamps of the barrier islands to the mainland. He suggested this might have something to do with hibernating habits. In the southern parts of its range, hibernation may be short or omitted altogether. Hunting and diet Raymond Ditmars (1912) described A. piscivorus as carnivorous. Its diet includes mammals, birds, amphibians, fish, eggs, insects, other snakes, small turtles, and small alligators. Cannibalism has also been reported. Normally, though, the bulk of its diet consists of fish and frogs. On occasion, juvenile specimens feed on invertebrates. Catfish (especially of the genus Ictalurus) are often eaten, although the sharp spines sometimes cause injuries. Toads of the genus Bufo are apparently avoided. Common prey species include southern leopard frogs, bass, juvenile black rat snakes, young common snapping turtles, and North American least shrews. Many authors have described the prey items taken under natural circumstances. Although fish and frogs are their most common prey, they eat almost any small vertebrate. Fish are captured by cornering them in shallow water, usually against the bank or under logs. They take advantage when bodies of water begin to dry up in the summer or early fall and gorge themselves on the resulting high concentrations of fish and tadpoles. They are surprisingly unsuccessful at seizing either live or dead fish under water. They are opportunistic hunters and sometimes eat carrion, making them one of the few snakes to do so. Campbell and Lamar (2004) described having seen them feeding on fish heads and viscera that had been thrown into the water from a dock. Heinrich and Studenroth (1996) reported an occasion in which an individual was seen feeding on the butchered remains of a feral hog (Sus scrofa) that had been thrown into Cypress Creek. 
Northern cottonmouths have an unusual feeding adaptation in which they rotate the head during swallowing; this aids the jaws in clearing the prey and contributes to their advance along it. Conant (1929) gave a detailed account of the feeding behavior of a captive specimen from South Carolina. When prey was introduced, the snake quickly became attentive and made an attack. Frogs and small birds were seized and held until movement stopped. Larger prey was approached in a more cautious manner; a rapid strike was executed, after which the snake would withdraw. In 2.5 years, the snake had accepted three species of frogs, including a large bullfrog, a spotted salamander, water snakes, garter snakes, sparrows, young rats, and three species of mice. Brimley (1944) described a captive specimen that ate copperheads (A. contortrix), as well as members of its own species, keeping its fangs embedded in its victims until they had been immobilized. A 2018 study found that northern cottonmouths fed only fish had to eat 20% more than those fed mice to achieve the same growth. There have been several studies focusing on the types of prey that cottonmouths consume, analyzing the differences between juveniles, adult males, and adult females. It has been found that adult males and females target different prey types and sizes. Observations and stomach analyses show that adult males consume fish, whereas adult females mainly consume other squamates, in particular snakes. In this same research, it was concluded that prey size increased with the size of the snake for both juveniles and adults, both male and female. Young individuals have yellowish or greenish tail tips and engage in caudal luring. The tail tip is wriggled to lure prey, such as frogs and lizards, within striking distance.
Wharton (1960) observed captive specimens exhibiting this behavior between 07:20 and 19:40 hours, which suggests it is a daytime activity. In August 2020 and May 2021, individuals found in Florida were observed to have consumed introduced Burmese pythons (Python bivittatus). Burmese pythons are an invasive species in Florida with the capacity to inflict great damage to the local ecosystem, so it is hoped that A. piscivorus may be in the process of modifying its diet to enable it to hunt the pythons. Natural predators Agkistrodon piscivorus is preyed upon by snapping turtles (Chelydra serpentina), falcons, American alligators (Alligator mississippiensis), horned owls (Bubo virginianus), eagles, red-shouldered hawks (Buteo lineatus), loggerhead shrikes (Lanius ludovicianus), and large wading birds, such as herons, cranes, and egrets. It is also preyed upon by ophiophagous snakes, including their own species. Humphreys (1881) described how a specimen was killed and eaten by a captive kingsnake. On the other hand, Neill (1947) reported captive kingsnakes (Lampropeltis getula) were loath to attack them, being successfully repelled with "body blows". Also called body-bridging, this is a specific defensive behavior against ophiophagous snakes, first observed in certain rattlesnake (Crotalus) species by Klauber (1927), that involves raising a section of the middle of the body above the ground to varying heights. This raised loop may then be held in this position for varying amounts of time, shifted in position, or moved towards the attacker. In the latter case, it is often flipped or thrown vigorously in the direction of the assailant. In A. piscivorus, the loop is raised laterally, with the belly facing towards the attacker. Reproduction Agkistrodon piscivorus is ovoviviparous, with females usually giving birth to one to 16 live young and possibly as many as 20. Litters of six to eight are the most common. 
Neonates are in length (excluding runts), with the largest belonging to A. p. conanti and the smallest to A. p. leucostoma. If weather conditions are favorable and food is readily available, growth is rapid, and females may reproduce at less than three years of age and a total length of as little as . They typically reproduce only every other year, unless conditions are optimal. The young are born in August or September, while mating may occur during any of the warmer months of the year, at least in certain parts of its range. Regarding A. p. piscivorus, an early account by Stejneger (1895) described a pair in the Berlin Zoological Garden that mated on January 21, 1873, after which eight neonates were discovered in the cage on July 16 of that year. The young were each in length and thick. They shed for the first time within two weeks, after which they accepted small frogs, but not fish. Combat behavior between males has been reported on a number of occasions, and is very similar in form to that seen in many other viperid species. An important factor in sexual selection, it allows for the establishment and recognition of dominance as males compete for access to sexually active females. A few accounts exist that describe females defending their newborn litters. Wharton (1960, 1966) reported several cases in which females found near their young stood their ground, and he considered these to be examples of guarding behavior. Another case was described by Walters and Card (1996), in which a female was found at the entrance of a chamber with seven neonates crawling on or around her. When one of the young was moved a short distance from the chamber, she appeared agitated and faced the intruder. Eventually, all of her offspring retreated into the chamber, but the female remained at the entrance, ready to strike. 
One study found that females remain with their young for one to two weeks, until the young finish their first shed cycle. Facultative parthenogenesis Parthenogenesis is a natural form of reproduction in which growth and development of embryos occur without fertilization. A. piscivorus can reproduce by facultative parthenogenesis; that is, it is capable of switching from a sexual mode of reproduction to an asexual one. This likely involves recombination at the tips of the chromosomes, which leads to genome-wide homozygosity. The result is the expression of deleterious recessive alleles, which often leads to developmental failure (inbreeding depression). Both captive-born and wild-born A. piscivorus specimens appear to be capable of this form of parthenogenesis. Conservation status The species A. piscivorus is classified as least concern on the IUCN Red List (v3.1, 2007). Species are listed as such due to their wide distribution, presumed large population, or because they are unlikely to be declining fast enough to qualify for listing in a more threatened category. When last assessed in 2007, the population trend was stable. Constant persecution of the species and drainage of wetland habitat prior to development have taken a heavy toll on local populations. Despite this, it remains a common species in many areas (Mehrtens JM (1987). Living Snakes of the World in Color. New York: Sterling Publishers. 480 pp.). In Indiana, the cottonmouth is listed as an endangered species.
https://en.wikipedia.org/wiki/Office
Office
An office is a space where the employees of an organization perform administrative work in order to support and realize the various goals of the organization. The word "office" may also denote a position within an organization with specific duties attached to it (see officer or official); the latter is an earlier usage, as "office" originally referred to the location of one's duty. In its adjective form, the term "office" may refer to business-related tasks. In law, a company or organization has offices in any place where it has an official presence, even if that presence consists of, for example, a storage silo rather than a more traditional establishment with a desk and chair. An office is also an architectural and design phenomenon, ranging from small offices, such as a bench in the corner of a small business or a room in someone's home (see small office/home office), through entire floors of buildings, up to massive buildings dedicated entirely to one company. In modern terms, an office is usually the location where white-collar workers carry out their functions. In classical antiquity, offices were often part of a palace complex or a large temple. In the High Middle Ages (1000–1300), the medieval chancery acted as a sort of office, serving as the space where records and laws were stored and copied. With the growth of large, complex organizations in the 18th century, the first purpose-built office spaces were constructed. As the Industrial Revolution intensified in the 18th and 19th centuries, the industries of banking, rail, insurance, retail, petroleum, and telegraphy grew dramatically, requiring many clerks. As a result, more office space was assigned to house their activities. The time-and-motion study, pioneered in manufacturing by F. W. Taylor (1856–1915), led to the "Modern Efficiency Desk" of 1915. Its flat top, with drawers below, was designed to allow managers an easy view of their workers. 
By the middle of the 20th century, it became apparent that an efficient office required additional control over privacy, and gradually the cubicle system evolved. History The word "office" stems from the Latin "officium" and its equivalents in various Romance languages. An officium was not necessarily a place, but often referred instead to human staff members of an organization, or even the abstract notion of a formal position like a magistrate. The elaborate Roman bureaucracy would not be equaled for centuries in the West after the fall of Rome, with areas partially reverting to illiteracy. Further east, the Byzantine Empire and varying Islamic caliphates preserved a more sophisticated administrative culture. Offices in classical antiquity were often part of a palace complex or a large temple. There was often a room where scrolls were kept and scribes did their work. Ancient texts mentioning the work of scribes allude to the existence of such "offices". Some archaeologists call these rooms "libraries" because of scrolls' association with literature. They were, however, closer to modern offices because the scrolls were meant for record-keeping and other management functions, not for poetry or works of fiction. Middle Ages The High Middle Ages (1000–1300) saw the rise of the medieval chancery, which was the place where most government letters were written and laws were copied within a kingdom. The rooms of the chancery often had walls full of pigeonholes, constructed to hold rolled-up pieces of parchment for safekeeping or ready reference. This kind of structure was a precursor to the modern bookshelf. The introduction of the printing press during the Renaissance did not impact the setup and function of these government offices significantly. Medieval paintings and tapestries often show people in their private offices handling record-keeping books or writing on scrolls of parchment. 
Before the invention of the printing press and its wider distribution, there was often no clear cultural distinction between a private office and a private library; books were both read and written at the same desk or table, as were personal and professional accounts and letters. During the 13th century, the English word "office" first began to appear when referring to a position involving specific professional duties (for example, "the office of the...."). In The Canterbury Tales (1395), Geoffrey Chaucer appears to have first used the word to mean a place where business is transacted. As mercantilism became the dominant economic theory of the Renaissance, merchants tended to conduct their business in buildings that also sometimes housed people doing retail sales, warehousing, and clerical work. During the 15th century, the population density in many cities reached a point where merchants began to use stand-alone buildings to conduct their businesses. A distinction began to develop between religious, administrative/military, and commercial uses for buildings. The emergence of the modern office The first purpose-built office spaces were constructed in the 18th century to suit the needs of large and growing organizations such as the Royal Navy and the East India Company. The Old Admiralty (Ripley Building) was built in 1726 and was the first purpose-built office building in Great Britain. As well as offices, the building housed a board room and apartments for the Lords of the Admiralty. In the 1770s, many scattered offices for the Royal Navy were gathered into Somerset House, the first block purpose-built for office work. The East India House was built in 1729 on Leadenhall Street as the headquarters from which the East India Company administered its Indian colonial possessions. The Company developed a very complex bureaucracy for the task, necessitating thousands of office employees to process the required paperwork. 
The Company recognized the benefits of centralized administration and required that all workers sign in and out at the central office each day. As the Industrial Revolution intensified in the 18th and 19th centuries, the industries of banking, rail, insurance, retail, petroleum, and telegraphy dramatically grew in size and complexity. Increasingly large numbers of clerks were needed to handle order processing, accounting, and document filing, and these clerks needed to be housed in increasingly specialized spaces. Most of the desks of the era were top-heavy and had a cubicle-like appearance, with paper storage bins extending above the desk-work area, offering workers some degree of privacy. The relatively high price of land in the central core of cities led to the first multi-story buildings, which were limited to about 10 stories until the use of iron and steel allowed for higher structures. The first purpose-built office block was the Brunswick Building, built in Liverpool in 1841. The invention of the safety elevator in 1852 by Elisha Otis enabled buildings to rise rapidly in height. By the end of the 19th century, larger office buildings frequently contained large glass atriums to allow light into the complex and improve air circulation. 20th century By 1906, Sears, Roebuck, and Co. had opened their headquarters operation in a building in Chicago, at the time the largest building in the world. The time and motion study, pioneered in manufacturing by F. W. Taylor and later applied to the office environment by Frank and Lillian Gilbreth, led to the idea that managers needed to play an active role in directing the work of subordinates to increase the efficiency of the workplace. F.W. Taylor advocated the use of large, open floor plans and desks that faced supervisors. 
As a result, in 1915, the Equitable Life Insurance Company in New York City introduced the "Modern Efficiency Desk" with a flat top and drawers below, designed to allow managers an easy view of the workers. This led to a demand for large square footage per floor in buildings, and a return to the open spaces that were seen in pre–industrial revolution buildings. However, by the midpoint of the 20th century, it became apparent that an efficient office required more privacy in order to combat tedium, increase productivity, and encourage creativity. In 1964, the office furniture company Herman Miller contracted Robert Propst, a prolific industrial designer. Propst came up with the concept of the Action Office, which later evolved into the cubicle office furniture system. Offices in Japan have developed unique characteristics partly as a result of the country's unique business culture. Japanese offices tend to follow open plan layouts in an 'island-style' arrangement, which promotes teamwork and top-down management. They also use uchi-awase (informal meetings) and ringi-sho (consensus systems) to encourage input on policies from as many groups throughout the office as possible. Office spaces The main purpose of an office environment is to support its occupants in performing their jobs—preferably at minimum cost and with maximum satisfaction. Different people performing different tasks will require different office spaces, or spaces that can handle a variety of uses. To aid decision-making in workplace and office design, one can distinguish three different types of office spaces: workspaces, meeting spaces, and support spaces. For new or developing businesses, remote satellite offices and project rooms, or serviced offices, can provide a simple solution and provide all of the former types of space. Workspaces Workspaces in an office are typically used for conventional office activities such as reading, writing, and computer work. 
There are several generic types of workspace, each supporting different activities. Open office: an open workspace for more than ten people; suitable for activities that demand frequent communication or routine activities that need relatively little concentration. Team space: a semi-enclosed workspace for two to eight people; suitable for teamwork which demands frequent internal communication and a medium level of concentration. Cubicle: a semi-enclosed workspace for one person; suitable for activities that demand medium concentration and medium interaction. Office pod: an enclosed pod placed within an open-plan office; suitable for providing privacy during conversations, calls, and video conferences. Private office: an enclosed workspace for one person; suitable for activities that are confidential, demand a lot of concentration, or include many small meetings. Shared office: a semi-enclosed workspace for two or three people; suitable for semi-concentrated work and collaborative work in small groups. Team room: an enclosed workspace for four to ten people; suitable for teamwork that may be confidential and demands frequent internal communication. Study booth: an enclosed workspace for one person; suitable for short-term activities that demand concentration or confidentiality. Work lounge: a lounge-like workspace for two to six people; suitable for short-term activities that demand collaboration and/or allow impromptu interaction. Touch down: an open workspace for one person; suitable for short-term activities that require little concentration and low interaction. Meeting spaces Meeting spaces in an office typically use interactive processes, be they quick conversations or intensive brainstorming. There are several generic types of meeting space, each supporting different activities. Small meeting room: an enclosed meeting space for two to four people; suitable for both formal and informal interaction. 
Medium meeting room: an enclosed meeting space for four to ten people; suitable for both formal and informal interaction. Large meeting room: an enclosed meeting space for ten or more people; suitable for formal interaction. Small meeting space: an open or semi-open meeting space for two to four people; suitable for short, informal interaction. Medium meeting space: an open or semi-open meeting space for four to ten people; suitable for short, informal interaction. Large meeting space: an open or semi-open meeting space for ten or more people; suitable for short, informal interaction. Brainstorm room: an enclosed meeting space for five to twelve people; suitable for brainstorming sessions and workshops. Meeting point: an open meeting point for two to four people; suitable for ad hoc, informal meetings. Support spaces Support spaces in an office are typically used for secondary activities such as filing documents or taking breaks. There are several generic types of support space, each supporting different activities. Filing space: an open or enclosed support space for the storage of frequently used files and documents. Storage space: an open or enclosed support space for the storage of commonly used office supplies. Print and copy area: an open or enclosed support space with facilities for printing, scanning and copying. Mail area: an open or semi-open support space where employees can pick up or deliver their mail. Pantry area: an open or enclosed support space where employees can get refreshments and where supplies for visitor hospitality are kept. Break area: a semi-open or enclosed support space where employees can take a break from their work. Locker area: an open or semi-open support space where employees can store their personal belongings. Smoking room: an enclosed support space where employees can smoke a cigarette. Library: a semi-open or enclosed support space for reading books, journals and magazines. 
Games room: an enclosed support space where employees can play games, such as pool or darts. Waiting area: an open or semi-open support space where visitors can be received and wait for their appointment. Circulation space: support space which is required for circulation on office floors, linking all major functions. Lactation rooms are also support spaces that are legally mandatory for companies in the United States, as of the 2010 Patient Protection and Affordable Care Act. Office structure There are many different ways of arranging the space in an office. Managerial styles and the culture of specific companies are important factors in how office space will ultimately be used. One example of diverging office layout philosophies concerns how many people will work within the same room. At one extreme, each individual worker might have their own room; at the other extreme, a large open plan office might see tens or hundreds of people working in the same room. Open-plan offices put multiple workers together in the same space, and some studies have shown that they can improve short-term productivity, i.e. within a single software project. At the same time, the loss of privacy and security can increase the incidence of theft and loss of company secrets. A type of compromise between open plan and individual rooms is provided by the cubicle desk, possibly made most famous by the Dilbert cartoon series, which solves visual privacy to some extent but often fails on acoustic separation and security. Most cubicles also require the occupant to sit with their back towards anyone who might be approaching. Workers in walled offices typically try to position their normal work seats and desks so that they can see someone entering, and if that goal is not feasible, some install tiny mirrors on things such as computer monitors. 
According to research, open-plan offices are associated with increased stress, a rise in electronic communication, a 70% decrease in face-to-face interactions, a 25% uptick in negative moods, and up to a 20% drop in productivity due to distractions. In contrast, post-pandemic trends are favoring private "cell-office plans", which address health precautions and have been reported to enhance productivity by up to 22%. Office buildings While offices can be set up in almost any location and in almost any building, some modern requirements for offices make this more difficult. These requirements can be legal (such as sufficient light levels) or technical (such as requirements for computer networking). Other needs, such as security and layout flexibility, have prompted the creation of special buildings which are dedicated primarily for use as offices. An office building, also known as an office block or business center, is a form of commercial building which contains spaces mainly designed to be used for offices. The primary purpose of an office building is to provide a workplace and working environment primarily for administrative and managerial workers. These workers usually occupy set areas within the office building, and usually are provided with desks, PCs and other equipment they may need within their areas. An office building may be divided into sections for different companies, or it may be dedicated to one company. In either case, each company will typically have a reception area, one or several meeting rooms, singular or open-plan offices, and service rooms such as restrooms. Many office buildings also have kitchen facilities and a staff room, where workers can have lunch or take a short break. Some office spaces are now also serviced office spaces, allowing for those occupying a space or building to share facilities. 
Office and retail rental rates Rental rates for office and retail space are typically quoted in terms of cost per floor-area–time, usually cost per floor-area per year or month. For example, the rate for a particular property may be $29 per square-foot per year ($29/sq. ft/yr) or $290 per square-meter per year ($290/m2/yr). In many countries, rent is typically paid monthly, even if usually discussed in terms of years. Examples: A particular 2,000 sq. ft space is priced at $15/sq. ft/yr, ultimately costing (2,000 sq. ft) × ($15/sq. ft/yr) / (12 mo/yr) = $2,500 per month. A particular 200 m2 space is priced at $150/m2/yr, ultimately costing (200 m2) × ($150/m2/yr) / (12 mo/yr) = $2,500 per month. In a gross lease, the rate quoted is an all-inclusive rate. The renter pays a set amount of rent per time and the landlord is responsible for all other expenses, including payments for utilities, taxes, insurance, maintenance, and repairs. The triple net lease is one in which the tenant is liable for a share of various expenses such as property taxes, insurance, maintenance, utilities, climate control, repairs, janitorial services and landscaping. Office rents in the United States are still recovering from the high vacancy rates that occurred in the wake of the 2008 recession. Grading The Building Owners and Managers Association (BOMA) classifies office space into three categories: Class A, Class B, and Class C. According to BOMA, Class A office buildings have the "most prestigious buildings competing for premier office users with rents above average for the area." BOMA states that Class A facilities have "high-quality standard finishes, state of the art systems, exceptional accessibility and a definite market presence." BOMA describes Class B office buildings as those that compete "for a wide range of users with rents in the average range for the area." 
BOMA states that Class B buildings have "adequate systems" and finishes that "are fair to good for the area," but that the buildings do not compete at the same price rates as Class A buildings. According to BOMA, Class C buildings are aimed towards "tenants requiring functional space at rents below the average for the area." The lack of specifics allows considerable room for pushing the boundaries of these BOMA categories. Oftentimes, they are further modified by adding the plus or minus sign to create subclasses, such as Class A+ or Class B-.
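The rental-rate arithmetic from the examples earlier can be sketched as a small calculation (a minimal sketch; the function name is illustrative and the figures are the ones quoted in the text):

```python
def monthly_rent(floor_area: float, annual_rate_per_area: float) -> float:
    """Monthly rent from a floor area and an annual per-area rate."""
    return floor_area * annual_rate_per_area / 12  # 12 months per year

# 2,000 sq. ft at $15/sq. ft/yr -> $2,500 per month
assert monthly_rent(2_000, 15) == 2_500
# 200 m2 at $150/m2/yr -> $2,500 per month
assert monthly_rent(200, 150) == 2_500
```

Gross and triple net leases differ only in which expenses are layered on top of this base figure.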
https://en.wikipedia.org/wiki/Analogue%20electronics
Analogue electronics
Analogue electronics are electronic systems with a continuously variable signal, in contrast to digital electronics where signals usually take only two levels. The term analogue describes the proportional relationship between a signal and a voltage or current that represents the signal. The word analogue is derived from a Greek word meaning "proportional". Analogue signals An analogue signal uses some attribute of the medium to convey the signal's information. For example, an aneroid barometer uses the angular position of a needle on top of a contracting and expanding box as the signal to convey the information of changes in atmospheric pressure. Electrical signals may represent information by changing their voltage, current, frequency, or total charge. Information is converted from some other physical form (such as sound, light, temperature, pressure, position) to an electrical signal by a transducer which converts one type of energy into another (e.g. a microphone). The signals take any value from a given range, and each unique signal value represents different information. Any change in the signal is meaningful, and each level of the signal represents a different level of the phenomenon that it represents. For example, suppose the signal is being used to represent temperature, with one volt representing one degree Celsius. In such a system, 10 volts would represent 10 degrees, and 10.1 volts would represent 10.1 degrees. Another method of conveying an analogue signal is to use modulation. In this, some base carrier signal has one of its properties altered: amplitude modulation (AM) involves altering the amplitude of a sinusoidal voltage waveform by the source information, while frequency modulation (FM) changes the frequency. Other techniques, such as phase modulation (changing the phase of the carrier signal), are also used. 
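The amplitude-modulation scheme just described can be illustrated numerically (a toy sketch with assumed carrier and message frequencies, not any particular broadcast standard):

```python
import math

def am_sample(t: float, fc: float = 1000.0, fm: float = 50.0,
              depth: float = 0.5) -> float:
    """One sample of an AM waveform: a carrier at fc Hz whose
    amplitude is varied by a message tone at fm Hz."""
    message = math.sin(2 * math.pi * fm * t)          # the source information
    envelope = 1.0 + depth * message                  # modulated amplitude
    return envelope * math.sin(2 * math.pi * fc * t)  # carrier

# With a modulation depth of 0.5, the envelope swings between 0.5 and 1.5,
# so every sample stays within that bound.
samples = [am_sample(n / 48_000) for n in range(48_000)]
assert all(-1.5 - 1e-9 <= s <= 1.5 + 1e-9 for s in samples)
```

FM would instead perturb the argument of the carrier sine by the message, leaving the amplitude constant.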
In an analogue sound recording, the variation in pressure of a sound striking a microphone creates a corresponding variation in the current passing through it or voltage across it. An increase in the volume of the sound causes the fluctuation of the current or voltage to increase proportionally while keeping the same waveform or shape. Mechanical, pneumatic, hydraulic, and other systems may also use analogue signals. Inherent noise Analogue systems invariably include noise, that is, random disturbances or variations, some caused by the random thermal vibrations of atomic particles. Since all variations of an analogue signal are significant, any disturbance is equivalent to a change in the original signal and so appears as noise. As the signal is copied and re-copied, or transmitted over long distances, these random variations become more significant and lead to signal degradation. Other sources of noise may include crosstalk from other signals or poorly designed components. These disturbances are reduced by shielding and by using low-noise amplifiers (LNA). Analogue vs digital electronics Since the information is encoded differently in analogue and digital electronics, the way they process a signal is consequently different. All operations that can be performed on an analogue signal, such as amplification, filtering, and limiting, can also be duplicated in the digital domain. Every digital circuit is also an analogue circuit, in that the behaviour of any digital circuit can be explained using the rules of analogue circuits. The use of microelectronics has made digital devices cheap and widely available. Noise The effect of noise on an analogue circuit is a function of the level of noise. The greater the noise level, the more the analogue signal is disturbed, slowly becoming less usable. Because of this, analogue signals are said to "fail gracefully". Analogue signals can still contain intelligible information with very high levels of noise. 
Digital circuits, on the other hand, are not affected at all by the presence of noise until a certain threshold is reached, at which point they fail catastrophically. For digital telecommunications, it is possible to increase the noise threshold with the use of error detection and correction coding schemes and algorithms. Nevertheless, there is still a point at which catastrophic failure of the link occurs. In digital electronics, because the information is quantized, as long as the signal stays inside a range of values, it represents the same information. In digital circuits the signal is regenerated at each logic gate, lessening or removing noise. In analogue circuits, attenuated signals can be restored with amplifiers. However, noise is cumulative throughout the system and the amplifier itself will add to the noise according to its noise figure. Precision A number of factors affect how precise a signal is, mainly the noise present in the original signal and the noise added by processing (see signal-to-noise ratio). Fundamental physical limits such as the shot noise in components limit the resolution of analogue signals. In digital electronics additional precision is obtained by using additional digits to represent the signal. The practical limit in the number of digits is determined by the performance of the analogue-to-digital converter (ADC), since digital operations can usually be performed without loss of precision. The ADC takes an analogue signal and changes it into a series of binary numbers. The ADC may be used in simple digital display devices, e.g., thermometers or light meters, but it may also be used in digital sound recording and in data acquisition. Conversely, a digital-to-analogue converter (DAC) is used to change a digital signal to an analogue signal. A DAC takes a series of binary numbers and converts it to an analogue signal. 
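The quantization step an ADC performs, and the bounded error it introduces, can be sketched as follows (an idealized uniform ADC/DAC pair with an assumed 5 V full scale, not any particular converter):

```python
FULL_SCALE = 5.0  # assumed full-scale voltage
BITS = 8          # assumed converter resolution

def adc(voltage: float) -> int:
    """Quantize an analogue voltage into an n-bit code."""
    levels = 2 ** BITS
    code = round(voltage / FULL_SCALE * (levels - 1))
    return max(0, min(levels - 1, code))  # clamp to the valid code range

def dac(code: int) -> float:
    """Convert an n-bit code back to a voltage."""
    return code / (2 ** BITS - 1) * FULL_SCALE

v = 3.1415
restored = dac(adc(v))
# Quantization error is bounded by half a step: FULL_SCALE / (2**BITS - 1) / 2
assert abs(restored - v) <= FULL_SCALE / (2 ** BITS - 1) / 2
```

Adding one bit halves the step size, which is why extra digits buy extra precision on the digital side.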
It is common to find a DAC in the gain-control system of an op-amp which in turn may be used to control digital amplifiers and filters. Design difficulty Analogue circuits are typically harder to design, requiring more skill than comparable digital systems to conceptualize. An analogue circuit is usually designed by hand because the application is built into the hardware. Digital hardware, on the other hand, has a great deal of commonality across applications and can be mass-produced in a standardised form. Hardware design consists largely of repeated identical blocks and the design process can be highly automated. This is one of the main reasons that digital systems have become more common than analogue devices. However, the application of digital hardware is a function of the software/firmware and creating this is still largely a labour-intensive process. Since the early 2000s, platforms have been developed that enable analogue designs to be defined in software, allowing faster prototyping. Furthermore, if a digital electronic device is to interact with the real world, it will always need an analogue interface. For example, every digital radio receiver has an analogue preamplifier as the first stage in the receive chain. Design of analogue circuits has been greatly eased by the advent of software circuit simulators such as SPICE. IBM developed their own in-house simulator, ASTAP, in the 1970s which used an unusual (compared to other simulators) sparse matrix method of circuit analysis. Circuit classification Analogue circuits can be entirely passive, consisting of resistors, capacitors and inductors. Active circuits also contain active elements such as transistors. Traditional circuits are built from lumped elements – that is, discrete components. However, an alternative is distributed-element circuits, built from pieces of transmission line.
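The "fail gracefully" versus "fail catastrophically" contrast described under Noise can be illustrated with a small simulation (a toy model with an assumed noise amplitude and decision threshold):

```python
import random

random.seed(0)  # deterministic for reproducibility

def analogue_copy(signal, noise=0.02):
    """Each analogue copy adds a little random noise."""
    return [s + random.uniform(-noise, noise) for s in signal]

def digital_copy(signal, noise=0.02, threshold=0.5):
    """A digital repeater regenerates the signal: any noise below the
    decision threshold is discarded at every stage."""
    noisy = analogue_copy(signal, noise)
    return [1.0 if s > threshold else 0.0 for s in noisy]

signal = [0.0, 1.0, 1.0, 0.0, 1.0]
a, d = signal, signal
for _ in range(100):        # 100 generations of copying
    a = analogue_copy(a)
    d = digital_copy(d)

assert d == signal          # regenerated perfectly at every stage
assert a != signal          # analogue noise accumulated with each copy
```

Push the noise amplitude past the threshold, though, and the digital copy flips bits and fails outright, which is the catastrophic failure mode described above.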
https://en.wikipedia.org/wiki/Abyssal%20zone
Abyssal zone
The abyssal zone or abyssopelagic zone is a layer of the pelagic zone of the ocean. The word abyss comes from the Greek word (), meaning "bottomless". At depths of , this zone remains in perpetual darkness. It covers 83% of the total area of the ocean and 60% of Earth's surface. The abyssal zone has temperatures around through the large majority of its mass. The water pressure can reach up to . As there is no light, photosynthesis cannot occur, and there are no plants producing molecular oxygen (O2), which instead primarily comes from ice that melted long ago in the polar regions. The water along the seafloor of this zone is largely devoid of molecular oxygen, resulting in a death trap for organisms unable to quickly return to the oxygen-enriched water above or to survive in the low-oxygen environment. This region also contains a much higher concentration of nutrient salts, like nitrogen, phosphorus, and silica, due to the large amount of dead organic material that drifts down from the ocean zones above and decomposes. The region below the abyssal zone is the sparsely inhabited hadal zone. The region above is the bathyal zone. Trenches The deep trenches or fissures that plunge down thousands of meters below the ocean floor (for example, the mid-oceanic trenches such as the Mariana Trench in the Pacific) are almost unexplored. Previously, only the bathyscaphe Trieste, the remote-controlled submarine Kaikō, and the Nereus had been able to descend to these depths. However, as of March 25, 2012, one vehicle, the Deepsea Challenger, had penetrated to a depth of 10,898 meters (35,756 ft). Ecosystem The relative sparsity of primary producers means that the majority of organisms living in the abyssal zone depend on the marine snow that falls from oceanic layers above. The biomass of the abyssal zone actually increases near the seafloor as most of the decomposing material and decomposers rest on the seabed. 
The composition of the abyssal plain depends on the depth of the sea floor. At depths shallower than about 4,000 meters, the seafloor usually consists of the calcareous shells of foraminifera, zooplankton, and phytoplankton. At depths greater than 4,000 meters, the shells dissolve, leaving behind a seafloor of brown clay and silica from dead zooplankton and phytoplankton. Chemosynthetic bacteria support large and diverse communities near hydrothermal vents, filling a similar role in these ecosystems as plants do in the sunlit regions above. A new insight into the complexity of the abyssal environment has been provided by a team of researchers from the Scottish Association for Marine Science. They found that manganese nodules on the deep sea floor produce free oxygen from water molecules. The manganese nodules act as a kind of battery, as they contain different metals, and they release oxygen into the environment. Because it was previously thought that only photosynthesizing organisms such as plants and algae produce oxygen, this "dark oxygen" (oxygen produced without light) can be seen as a scientific breakthrough. Biological adaptations Organisms that live at this depth have had to evolve to overcome challenges provided by the abyssal zone. Fish and invertebrates had to evolve to withstand the sheer cold and intense pressure found at this level. Not only did they have to find ways to hunt and survive in constant darkness, but they also had to thrive in an ecosystem that has less oxygen, biomass, energy, and prey than the upper zones. To survive in these conditions, many fish and other organisms developed a much slower metabolism, and require much less oxygen than those in upper zones. Many animals also move very slowly to conserve energy. Their reproduction rates are also very slow, to decrease competition and conserve energy. Animals here typically have flexible stomachs and mouths, so that when scarce prey are found they can consume as much as possible. 
Other challenges faced by life in the abyssal zone are the pressure and darkness caused by the zone's depth. Many organisms living in this zone have evolved to minimize internal air spaces, such as swim bladders. This adaptation helps to protect them from the extreme pressure, which can reach around 75 MPa (11,000 psi). The absence of light also spawned many different adaptations, such as having large eyes and the ability to produce light (bioluminescence). Large eyes allow the detection and use of any light available, no matter how faint. Commonly, animals in the abyssal zone are bioluminescent, producing blue light, because blue light is attenuated less than other wavelengths and therefore travels the farthest through seawater. Due to this lack of light, complex patterns and bright colors are not needed. Most fish species have evolved to be transparent, red, or black, so that they better blend in with the darkness and do not waste energy on developing and maintaining bright or complex patterns. Animals The abyssal zone is made up of many different types of organisms, including microorganisms, crustaceans, molluscs (bivalves, snails, and cephalopods), different classes of fishes, and possibly some animals that have yet to be discovered. Most of the fish species in this zone are described as demersal or benthopelagic fishes. Demersal fish are fish whose habitats are on or near (typically less than five meters from) the seafloor. Most fish species fit into that classification, because the seafloor contains most of the abyssal zone's nutrients; therefore, the most complex food web and the greatest biomass are found in this region of the zone. Organisms in the abyssal zone rely on the natural processes of higher ocean layers. When animals from higher ocean levels die, their carcasses occasionally drift down to the abyssal zone, where organisms in the deep can feed on them. When a whale carcass sinks to the abyssal zone, this is called a whale fall. 
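The ~75 MPa pressure figure quoted above can be sanity-checked with the hydrostatic relation P = ρgh. The sketch below assumes a constant seawater density; real seawater is slightly compressible, so the true pressure at depth is a few percent higher, and a depth near the zone's lower boundary lands in the quoted range.

```python
# Hydrostatic pressure P = rho * g * h, a first-order estimate that ignores
# the slight compressibility of seawater at depth.
RHO_SEAWATER = 1025.0  # kg/m^3, a typical mean value (assumption)
G = 9.81               # m/s^2

def pressure_mpa(depth_m: float) -> float:
    """Approximate water pressure in megapascals at a given depth."""
    return RHO_SEAWATER * G * depth_m / 1e6

# At ~7,300 m, near the lower boundary of the abyssal zone, the estimate
# approaches the ~75 MPa (11,000 psi) figure quoted in the text.
print(round(pressure_mpa(7300), 1))
```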
The carcass of the whale can create complex ecosystems for organisms in the depths. Benthic organisms in the abyssal zone would need to have evolved morphological traits that could either keep them out of oxygen-depleted water above the sea floor or enable them to extract oxygen from the water above, while also allowing the animal access to the seafloor and the nutrients located there. There are also animals that spend their time in the upper portion of the abyssal zone, some of which even occasionally spend time in the zone directly above, the bathyal zone. While there are a number of different fish species representing many different groups and classes, like Actinopterygii (ray-finned fish), there are no known members of the class Chondrichthyes (animals such as sharks, rays, and chimaeras) that make the abyssal zone their primary or constant habitat. Whether this is due to the limited resources, energy availability, or other physiological constraints is unknown. Most Chondrichthyes species only go as deep as the bathyal zone. Creatures that live in the abyssal zone include:     Tripod fish (Bathypterois grallator): their habitat is along the ocean floor, usually around 4,720 m below sea level. Their pelvic fins and caudal fin have long bony rays protruding from them. They face the current while standing still on their long rays. Once they sense food nearby, they use their large pectoral fins to hit the unsuspecting prey towards their mouth. Each member of this species has both male and female reproductive organs so that if a mate cannot be found, they can self-fertilize. Dumbo octopus: this octopus usually lives at a depth between 1,000 and 7,000 meters, deeper than any other known octopus. They use the fins on top of their head, which look like flapping ears, to hover over the sea floor looking for food. They use their arms to help change directions or crawl along the seafloor. 
To combat the intense pressure of the abyssal zone, this octopus species lost its ink sac during evolution. They also use their strand-like suction cups to help detect predators, food, and other aspects of their environment. Cusk eel (genus Bassozetus): there are no known fish that live at depths greater than the cusk eel. The depth of the cusk eel habitat can be as great as 8,370 meters below sea level. This animal's ventral fins are specialized forked barbel-like organs that act as sensory organs. Cusk eels produce sounds to mate. Male cusk eels have two pairs of sonic muscles, while female cusk eels have three. Abyssal grenadier: this resident of the abyssal zone is known to live at depths ranging from 800 to 4,000 meters. It has extremely large eyes but a small mouth. It is thought to be a semelparous species, meaning it reproduces only once and then dies. This is seen as a way for the organism to conserve energy and have a higher chance of producing some healthy, strong offspring. This reproductive strategy could be very useful in low-energy environments such as the abyssal zone. Pseudoliparis swirei: the Mariana snailfish, or Mariana hadal snailfish, is a species of snailfish found at hadal depths in the Mariana Trench in the western Pacific Ocean. It is known from a depth range of 6,198–8,076 m (20,335–26,496 ft), including a capture at 7,966 m (26,135 ft), which is possibly the record for a fish caught on the seafloor. Environmental concerns Climate change has had negative effects on the abyssal zone. Due to the zone's depth, increasing global temperatures do not affect it as quickly or drastically as the rest of the world, but the zone is still afflicted by ocean acidification. Pollutants, such as plastics, are also present in this zone. 
Plastics are especially bad for the abyssal zone because these organisms have evolved to eat or try to eat anything that moves or appears to be detritus, resulting in organisms consuming plastics instead of nutrients. Both ocean acidification and pollution are decreasing the already small biomass that resides within the abyssal zone. Another problem caused by humans is overfishing. Even though no fishery can fish for organisms anywhere near the abyssal zone, they can still cause harm in deeper waters. The abyssal zone depends on dead organisms from the upper zones sinking to the seafloor, since the ecosystem lacks producers due to a lack of sunlight. As fish and other animals are removed from the ocean, the frequency and amount of dead material reaching the abyssal zone decreases. Deep sea mining operations could cause problems for the abyssal zone in the future. The talks and planning for this industry are already under way. Deep sea mining could be disastrous for this extremely fragile ecosystem since there are many ecological dangers posed by mining for deep sea minerals. Mining could increase the amount of pollution not only in the abyssal zone, but in the ocean as a whole, and would physically destroy habitats and the seafloor. Sediment plumes generated by mining activities can spread widely, affecting filter feeders and smothering marine life. The potential release of toxic chemicals and heavy metals from mining equipment and disturbed seabed materials could lead to chemical pollution, while noise from machinery can disrupt the behavior and communication of marine animals. Physical disturbances to the seabed may destroy geological features and their associated ecosystems. Furthermore, changes in water quality and the disruption of carbon sequestration processes, where organic carbon is stored in the deep sea, could have broader environmental impacts, including contributing to climate change. 
The slow rate of change in deep-sea environments and the long lifespans and reproductive cycles of abyssal species mean that recovery from such disturbances could take decades or centuries.
Physical sciences
Oceanography
Earth science
1667490
https://en.wikipedia.org/wiki/Asteroid%20mining
Asteroid mining
Asteroid mining is the hypothetical extraction of materials from asteroids and other minor planets, including near-Earth objects. Notable asteroid mining challenges include the high cost of spaceflight, unreliable identification of asteroids which are suitable for mining, and the challenges of extracting usable material in a space environment. Asteroid sample return research missions, such as Hayabusa, Hayabusa2, and OSIRIS-REx illustrate the challenges of collecting ore from space using current technology. As of 2024, around 127 grams of asteroid material has been successfully brought to Earth from space. Asteroid research missions are complex endeavors and yield a tiny amount of material (less than 100 milligrams Hayabusa, 5.4 grams Hayabusa2, ~121.6 grams OSIRIS-REx) relative to the size and expense of these projects ($300 million Hayabusa, $800 million Hayabusa2, $1.16 billion OSIRIS-REx). The history of asteroid mining is brief but features a gradual development. Ideas of which asteroids to prospect, how to gather resources, and what to do with those resources have evolved over the decades. History Prior to 1970 Before 1970, asteroid mining existed largely within the realm of science fiction. Publications such as Worlds of If, Scavengers in Space, and Miners in the Sky told stories about the conceived dangers, motives, and experiences of mining asteroids. At the same time, many researchers in academia speculated about the profits that could be gained from asteroid mining, but they lacked the technology to seriously pursue the idea. The 1970s In 1969, the Apollo 11 Moon Landing spurred a wave of scientific interest in human space activity far beyond the Earth's orbit. As the decade continued, more and more academic interest surrounded the topic of asteroid mining. A good deal of serious academic consideration was aimed at mining asteroids located closer to Earth than the main asteroid belt. In particular, the asteroid groups Apollo and Amor were considered. 
These groups were chosen not only because of their proximity to Earth but also because many at the time thought they were rich in raw materials that could be refined. Despite the wave of interest, many in the space science community were aware of how little was known about asteroids and encouraged a more gradual and systematic approach to asteroid mining. The 1980s Academic interest in asteroid mining continued into the 1980s. The idea of targeting the Apollo and Amor asteroid groups still had some popularity. However, by the late 1980s the interest in the Apollo and Amor asteroid groups was being replaced with interest in the moons of Mars, Phobos and Deimos. Organizations like NASA began to formulate ideas of how to process materials in space and what to do with the materials hypothetically gathered from space. The 1990s New reasons emerged for pursuing asteroid mining. These reasons tended to revolve around environmental concerns, such as fears over humans over-consuming the Earth's natural resources, and the prospect of capturing solar energy in space. In the same decade, NASA was trying to establish which materials in asteroids could be valuable for extraction. These materials included free metals, volatiles, and bulk dirt. The 2010s After a burst of interest in the 2010s, asteroid mining ambitions shifted to more distant long-term goals, and some 'asteroid mining' companies pivoted to more general-purpose propulsion technology. The 2020s The 2020s have brought a resurgence of interest, with companies from the United States, Europe, and China renewing their efforts in this ambitious venture. This revival is fueled by a new era of commercial space exploration, significantly driven by SpaceX. SpaceX's development of reusable rocket boosters has substantially lowered the cost of space access, reigniting interest and investment in asteroid mining. 
A US congressional committee acknowledged this renewed interest by holding a hearing on the topic in December 2023. There are also endeavors to make first-time landings on M-type asteroids to mine metals like iridium, which sells for many thousands of dollars per ounce. Efforts driven by private companies have also given rise to a new culture of secrecy, obfuscating which asteroids are identified and targeted for mining missions, whereas previously government-led asteroid research and exploration operated with more transparency. Minerals in space As resource depletion on Earth becomes more of a concern, the idea of extracting valuable elements from asteroids and transporting them to Earth for profit, or using space-based resources to build solar-power satellites and space habitats, becomes more attractive. Hypothetically, water processed from ice could refuel orbiting propellant depots. Although asteroids and Earth accreted from the same starting materials, Earth's stronger gravity pulled all heavy siderophilic (iron-loving) elements into its core during its molten youth more than four billion years ago. This left the crust depleted of such valuable elements until a rain of asteroid impacts re-infused the depleted crust with metals like gold, cobalt, iron, manganese, molybdenum, nickel, osmium, palladium, platinum, rhenium, rhodium, ruthenium, and tungsten (some flow from core to surface does occur, e.g. at the Bushveld Igneous Complex, a famously rich source of platinum-group metals). Today, these metals are mined from Earth's crust, and they are essential for economic and technological progress. Hence, the geologic history of Earth may very well set the stage for a future of asteroid mining. In 2006, the Keck Observatory announced that the binary Jupiter trojan 617 Patroclus, and possibly large numbers of other Jupiter trojans, are likely extinct comets and consist largely of water ice. 
Similarly, Jupiter-family comets, and possibly near-Earth asteroids that are extinct comets, might also provide water. The process of in-situ resource utilization—using materials native to space for propellant, thermal management, tankage, radiation shielding, and other high-mass components of space infrastructure—could lead to radical reductions in its cost. Whether these cost reductions could be achieved, and whether, if achieved, they would offset the enormous infrastructure investment required, is unknown. From the astrobiological perspective, asteroid prospecting could provide scientific data for the search for extraterrestrial intelligence (SETI). Some astrophysicists have suggested that if advanced extraterrestrial civilizations employed asteroid mining long ago, the hallmarks of these activities might be detectable. An important factor to consider in target selection is orbital economics, in particular the change in velocity (Δv) and travel time to and from the target. More of the extracted native material must be expended as propellant in higher-Δv trajectories, so less is returned as payload. Direct Hohmann trajectories are faster than Hohmann trajectories assisted by planetary and/or lunar flybys, which in turn are faster than those of the Interplanetary Transport Network, but the reduction in transfer time comes at the cost of increased Δv requirements. Asteroids in the Easily Recoverable Object (ERO) subclass of near-Earth asteroids are considered likely candidates for early mining activity. Their low Δv makes them suitable for use in extracting construction materials for near-Earth space-based facilities, greatly reducing the economic cost of transporting supplies into Earth orbit. The table above shows a comparison of Δv requirements for various missions. In terms of propulsion energy requirements, a mission to a near-Earth asteroid compares favorably to alternative mining missions. 
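The Δv comparisons above rest on the standard Hohmann-transfer relations derived from the vis-viva equation. A minimal sketch follows; the 2.7 AU target radius is an illustrative stand-in for a main-belt orbit, and the gravity wells of the departure and arrival bodies are ignored.

```python
import math

MU_SUN = 1.327e20  # solar gravitational parameter, m^3/s^2

def hohmann_dv(r1_m: float, r2_m: float, mu: float = MU_SUN) -> float:
    """Total delta-v (m/s) for a two-impulse Hohmann transfer between
    circular, coplanar orbits of radii r1 and r2."""
    a = (r1_m + r2_m) / 2.0                         # transfer-ellipse semi-major axis
    v1 = math.sqrt(mu / r1_m)                       # circular speed at departure
    v2 = math.sqrt(mu / r2_m)                       # circular speed at arrival
    v_dep = math.sqrt(mu * (2.0 / r1_m - 1.0 / a))  # transfer-orbit speed at r1
    v_arr = math.sqrt(mu * (2.0 / r2_m - 1.0 / a))  # transfer-orbit speed at r2
    return abs(v_dep - v1) + abs(v2 - v_arr)

AU = 1.496e11  # meters
# Heliocentric transfer from Earth's orbit (1 AU) to ~2.7 AU in the main belt.
print(f"{hohmann_dv(AU, 2.7 * AU) / 1000:.1f} km/s")
```

The same function makes the ERO point concrete: the closer the target's orbit is to Earth's, the smaller both impulses become, which is why low-Δv near-Earth objects are favored over main-belt bodies.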
An example of a potential target for an early asteroid mining expedition is 4660 Nereus, expected to be mainly enstatite. This body has a very low Δv compared to lifting materials from the surface of the Moon. However, it would require a much longer round-trip to return the material. Multiple types of asteroids have been identified but the three main types would include the C-type, S-type, and M-type asteroids: C-type asteroids have a high abundance of water which is not currently of use for mining, but could be used in an exploration effort beyond the asteroid. Mission costs could be reduced by using the available water from the asteroid. C-type asteroids also have high amounts of organic carbon, phosphorus, and other key ingredients for fertilizer which could be used to grow food. S-type asteroids carry little water but are more attractive because they contain numerous metals, including nickel, cobalt, and more valuable metals, such as gold, platinum, and rhodium. A small 10-meter S-type asteroid contains about of metal with in the form of rare metals like platinum and gold. M-type asteroids are rare but contain up to 10 times more metal than S-types. A class of "easily retrievable objects" (EROs) was identified by a group of researchers in 2013. Twelve asteroids made up the initially identified group, all of which could be potentially mined with present-day rocket technology. Of 9,000 asteroids searched in the NEO database, these twelve could all be brought into an Earth-accessible orbit by changing their velocity by less than . The dozen asteroids range in size from . Asteroid cataloging The B612 Foundation is a private nonprofit foundation with headquarters in the United States, dedicated to protecting Earth from asteroid strikes. As a non-governmental organization it has conducted two lines of related research to help detect asteroids that could one day strike Earth, and find the technological means to divert their path to avoid such collisions. 
The foundation's 2013 goal was to design and build a privately financed asteroid-finding space telescope, Sentinel, hoping in 2013 for a launch in 2017–2018. The Sentinel's infrared telescope, once parked in an orbit similar to that of Venus, was designed to help identify threatening asteroids by cataloging 90% of those with diameters larger than , as well as surveying smaller Solar System objects. After NASA terminated its $30 million funding agreement with the B612 Foundation in October 2015 and the private fundraising did not achieve its goals, the Foundation eventually opted for an alternative approach using a constellation of much smaller spacecraft, which is under study. NASA/JPL's NEOCam has been proposed instead. Mining considerations There are four options for mining: In-space manufacturing (ISM), which may be enabled by biomining. Bring raw asteroidal material to Earth for use. Process asteroidal material on-site to bring back only processed materials, and perhaps produce propellant for the return trip. Transport the asteroid to a safe orbit around the Moon or Earth or to a space station. This can hypothetically allow for most materials to be used and not wasted. Processing in situ for the purpose of extracting high-value minerals will reduce the energy requirements for transporting the materials, although the processing facilities must first be transported to the mining site. In situ mining would involve drilling boreholes and injecting hot fluid or gas, allowing the useful material to react with or melt into the solvent so that the solute can be extracted. Due to the weak gravitational fields of asteroids, any activities, like drilling, will cause large disturbances and form dust clouds. These might be confined by a dome or bubble barrier, or some means of rapidly dissipating any dust could be provided. Mining operations require special equipment to handle the extraction and processing of ore in outer space. 
The machinery will need to be anchored to the body, but once in place, the ore can be moved about more readily due to the lack of gravity. However, no techniques for refining ore in zero gravity currently exist. Docking with an asteroid might be performed using a harpoon-like process, where a projectile would penetrate the surface to serve as an anchor; then an attached cable would be used to winch the vehicle to the surface, if the asteroid is both penetrable and rigid enough for a harpoon to be effective. Due to the distance from Earth to an asteroid selected for mining, the round-trip time for communications will be several minutes or more, except during occasional close approaches to Earth by near-Earth asteroids. Thus any mining equipment will either need to be highly automated, or a human presence will be needed nearby. Humans would also be useful for troubleshooting problems and for maintaining the equipment. On the other hand, multi-minute communications delays have not prevented the success of robotic exploration of Mars, and automated systems would be much less expensive to build and deploy. Mining projects On April 24, 2012, at the Seattle, Washington Museum of Flight, a plan was announced by billionaire entrepreneurs to mine asteroids for their resources. The company was called Planetary Resources, and its founders included aerospace entrepreneurs Eric Anderson and Peter Diamandis. The company announced plans to create a propellant depot in space by 2020 by splitting water from asteroids into hydrogen and oxygen to replenish satellites and spacecraft. Advisers included film director and explorer James Cameron; investors included Google's chief executive Larry Page, and its executive chairman was Eric Schmidt. Telescope technology proposed to identify and examine candidate asteroids led to the development of the Arkyd family of spacecraft, two prototypes of which were flown in 2015 and 2018. 
Shortly after, all plans for the Arkyd space telescope technology were abandoned; the company was wound down, its hardware auctioned off, and remaining assets acquired by ConsenSys, a blockchain company. A year after the appearance of Planetary Resources, similar asteroid mining plans were announced in 2013 by Deep Space Industries, a company established by David Gump, Rick Tumlinson, and others. The initial goal was to visit asteroids with prospecting and sample return spacecraft in 2015 and 2016, and to begin mining within ten years. Deep Space Industries later pivoted to developing and selling the propulsion systems that would enable its envisioned asteroid operations, including a successful line of water-propellant thrusters in 2018, and in 2019 was acquired by Bradford Space, a company with a portfolio of Earth-orbit systems and spaceflight components. Proposed mining projects At ISDC-San Diego 2013, Kepler Energy and Space Engineering (KESE, llc) announced its intention to send an automated mining system to collect 40 tons of asteroid regolith and return to low Earth orbit by 2020. In September 2012, the NASA Institute for Advanced Concepts (NIAC) announced the Robotic Asteroid Prospector project, which would examine and evaluate the feasibility of asteroid mining in terms of means, methods, and systems. The TransAstra Corporation develops technology to locate and harvest asteroids using a family of spacecraft built around a patented approach using concentrated solar energy known as optical mining. In 2022, a startup called AstroForge announced intentions to develop technologies and spacecraft for prospecting, mining, and refining platinum from near-Earth asteroids. Economics Currently, the quality of the ore and the consequent cost and mass of equipment required to extract it are unknown and can only be speculated on. 
Some economic analyses indicate that the cost of returning asteroidal materials to Earth far outweighs their market value, and that asteroid mining will not attract private investment at current commodity prices and space transportation costs. Other studies suggest large profit by using solar power. Potential markets for materials can be identified and profit generated if extraction cost is brought down. For example, the delivery of multiple tonnes of water to low Earth orbit for rocket fuel preparation for space tourism could generate significant profit if space tourism itself proves profitable. In 1997, it was speculated that a relatively small metallic asteroid with a diameter of contains more than US$20 trillion worth of industrial and precious metals. A comparatively small M-type asteroid with a mean diameter of could contain more than two billion metric tons of iron–nickel ore, or two to three times the world production of 2004. The asteroid 16 Psyche is believed to contain of nickel–iron, which could supply the world production requirement for several million years. A small portion of the extracted material would also be precious metals. Not all mined materials from asteroids would be cost-effective, especially for the potential return of economic amounts of material to Earth. For potential return to Earth, platinum is considered very rare in terrestrial geologic formations and therefore is potentially worth bringing some quantity for terrestrial use. Nickel, on the other hand, is quite abundant on Earth and being mined in many terrestrial locations, so the high cost of asteroid mining may not make it economically viable. Although Planetary Resources indicated in 2012 that the platinum from a asteroid could be worth US$25–50 billion, an economist remarked any outside source of precious metals could lower prices sufficiently to possibly doom the venture by rapidly increasing the available supply of such metals. 
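The "more than two billion metric tons" figure above is consistent with simple sphere geometry. The sketch below uses assumed values, since the source elides the exact diameter: a 1 km body and a bulk density of 5,000 kg/m³, a plausible value for a metal-rich asteroid, are my assumptions.

```python
import math

def asteroid_mass_tonnes(diameter_m: float, density_kg_m3: float) -> float:
    """Mass of an idealized spherical asteroid, in metric tons."""
    radius = diameter_m / 2.0
    volume = (4.0 / 3.0) * math.pi * radius ** 3  # m^3
    return volume * density_kg_m3 / 1000.0        # kg -> metric tons

# Assumed inputs: 1 km diameter, 5,000 kg/m^3 bulk density.
# The result is roughly 2.6 billion metric tons.
print(f"{asteroid_mass_tonnes(1000.0, 5000.0):.2e}")
```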
Development of an infrastructure for altering asteroid orbits could offer a large return on investment. A peer-reviewed research article has examined how space mining could potentially contribute to sustainable economic growth while reducing environmental impacts of terrestrial mining. The authors of the study have developed a neoclassical growth model to investigate the economic transition from Earth-based to space-based mining operations. Their analysis has concluded that a transition from Earth to space mining could facilitate the sustained growth in metal use while concurrently limiting the environmental and social costs on Earth. However, this transition would require significant investment in research and development to ensure the technological feasibility of space mining. The study further suggests that environmental policy constraints on Earth, such as carbon emissions limits, could accelerate the transition to space mining. The researchers note that private companies like SpaceX and Blue Origin have reduced rocket launch costs by approximately 95% over the past decade. The study examines how further cost reductions could make asteroid and lunar mining economically viable. The authors' model posits that space mining would become more attractive under several conditions: when environmental regulations increase the cost of terrestrial mining, when research and development improves the efficiency of space mining technology, and when higher-grade mineral deposits become available in space compared to Earth. The paper acknowledges significant uncertainties around space mining technology and costs. It concludes that while space mining could theoretically enable sustainable growth, substantial research and development investment would be needed before it becomes economically viable. Scarcity Scarcity is a fundamental economic problem of humans having seemingly unlimited wants in a world of limited resources. 
Since Earth's resources are finite, the relative abundance of asteroidal ore gives asteroid mining the potential to provide nearly unlimited resources, which could essentially eliminate scarcity for those materials. The idea of exhausting resources is not new. In 1798, Thomas Malthus wrote, because resources are ultimately limited, the exponential growth in a population would result in falls in income per capita until poverty and starvation would result as a constricting factor on population. Malthus posited this years ago, and no sign has yet emerged of the Malthus effect regarding raw materials. Proven reserves are deposits of mineral resources that are already discovered and known to be economically extractable under present or similar demand, price and other economic and technological conditions. Conditional reserves are discovered deposits that are not yet economically viable. Indicated reserves are less intensively measured deposits whose data is derived from surveys and geological projections. Hypothetical reserves and speculative resources make up this group of reserves. Inferred reserves are deposits that have been located but not yet exploited. Continued development in asteroid mining techniques and technology may help to increase mineral discoveries. As the cost of extracting mineral resources, especially platinum group metals, on Earth rises, the cost of extracting the same resources from celestial bodies declines due to technological innovations around space exploration. , there are 711 known asteroids with a value exceeding US$100 trillion each. Financial feasibility Space ventures are high-risk, with long lead times and heavy capital investment, and that is no different for asteroid-mining projects. These types of ventures could be funded through private investment or through government investment. For a commercial venture, it can be profitable as long as the revenue earned is greater than total costs (costs for extraction and costs for marketing). 
The costs involved in an asteroid-mining venture were estimated to be around US$100 billion in 1996. There are six categories of cost considered for an asteroid mining venture: Research and development costs Exploration and prospecting costs Construction and infrastructure development costs Operational and engineering costs Environmental costs Time cost Determining financial feasibility is best represented through net present value. One requirement for financial feasibility is a high return on investment, estimated at around 30%. An example calculation assumes for simplicity that the only valuable material on asteroids is platinum. On August 16, 2016, platinum was valued at $1,157 per ounce, or $37,000 per kilogram. At a price of $1,340, for a 10% return on investment, of platinum would have to be extracted for every 1,155,000 tons of asteroid ore. For a 50% return on investment, of platinum would have to be extracted for every 11,350,000 tons of asteroid ore. This analysis assumes that doubling the supply of platinum to the market (5.13 million ounces in 2014) would have no effect on the price of platinum. A more realistic assumption is that increasing the supply by this amount would reduce the price 30–50%. The financial feasibility of asteroid mining with regard to different technical parameters has been presented by Sonter and more recently by Hein et al. Hein et al. have specifically explored the case where platinum is brought from space to Earth and estimate that economically viable asteroid mining for this specific case would be rather challenging. Decreases in the price of space access matter. The start of operational use of the low-cost-per-kilogram-in-orbit SpaceX Falcon Heavy launch vehicle in 2018 is projected by astronomer Martin Elvis to have increased the extent of economically minable near-Earth asteroids from hundreds to thousands. 
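The break-even arithmetic above can be sketched directly. The $100 billion venture cost (the 1996 estimate), the 10% return target, and the $1,340/oz platinum price are taken from the text; the simple revenue-covers-cost-plus-return model itself is an illustrative assumption, not the source's own calculation.

```python
TROY_OZ_G = 31.1035  # grams per troy ounce

def platinum_needed_oz(venture_cost_usd: float, roi: float,
                       price_per_oz_usd: float) -> float:
    """Ounces of platinum whose sale covers the venture cost plus the target return."""
    return venture_cost_usd * (1.0 + roi) / price_per_oz_usd

# Assumed example: $100 billion venture, 10% ROI target, platinum at $1,340/oz.
oz = platinum_needed_oz(100e9, 0.10, 1340.0)
print(f"{oz:,.0f} oz (~{oz * TROY_OZ_G / 1e6:,.0f} metric tons)")
```

At these assumed figures the required sales come to tens of millions of ounces, many times the 5.13 million ounces supplied in 2014, which underlines the text's caveat that such a supply increase would itself depress the price by 30–50% or more.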
The several kilometers per second of additional delta-v that Falcon Heavy provides increases the share of accessible NEAs from 3 percent to around 45 percent. Precedent for joint investment by multiple parties into a long-term venture to mine commodities may be found in the legal concept of a mining partnership, which exists in the state laws of multiple US states, including California. In a mining partnership, "[Each] member of a mining partnership shares in the profits and losses thereof in the proportion which the interest or share he or she owns in the mine bears to the whole partnership capital or whole number of shares." Mining the Asteroid Belt from Mars Since Mars is much closer to the asteroid belt than Earth is, it would take less delta-v to reach the asteroid belt and return minerals to Mars. One hypothesis holds that the moons of Mars (Phobos and Deimos) are captured asteroids from the asteroid belt. 16 Psyche in the main belt could contain over US$10,000 quadrillion worth of minerals. NASA's Psyche orbiter was planned to launch on October 10, 2023, and reach the asteroid by August 2029 to study it. 511 Davida could hold $27 quadrillion worth of minerals and resources. Using the moon Phobos to launch spacecraft is energetically favorable, making it a useful location from which to dispatch missions to main-belt asteroids. Mining the asteroid belt from Mars and its moons could aid the colonization of Mars. Phobos as a space elevator for Mars Phobos is tidally locked to Mars, with the same face always toward the planet, orbiting at ~6,028 km above the Martian surface. A space elevator could extend about 6,000 km from Phobos toward Mars, ending about 28 kilometers above the surface, just outside the atmosphere of Mars. A similar cable could extend 6,000 km in the opposite direction to counterbalance Phobos. 
In total the space elevator would extend over 12,000 km, which is below the areostationary orbit of Mars (17,032 km). A rocket launch would still be needed to carry cargo to the lower end of the space elevator, 28 km above the surface. The surface of Mars rotates at 0.25 km/s at the equator, and the bottom of the space elevator would move around Mars at 0.77 km/s, so only 0.52 km/s of delta-v would be needed to reach the space elevator. Phobos orbits at 2.15 km/s, and the outermost part of the space elevator would rotate around Mars at 3.52 km/s. Regulation and safety Space law involves a specific set of international treaties, along with national statutory laws. The system and framework for international and domestic laws have emerged in part through the United Nations Office for Outer Space Affairs. The rules, terms and agreements that space law authorities consider to be part of the active body of international space law are the five international space treaties and five UN declarations. Approximately 100 nations and institutions were involved in the negotiations. The space treaties cover many major issues such as arms control, non-appropriation of space, freedom of exploration, liability for damages, safety and rescue of astronauts and spacecraft, prevention of harmful interference with space activities and the environment, notification and registration of space activities, and the settlement of disputes. In exchange for assurances from the space powers, the nonspacefaring nations acquiesced to U.S. and Soviet proposals to treat outer space as a commons (res communis) belonging to no one state. Asteroid mining in particular is covered both by international treaties (for example, the Outer Space Treaty) and by national statutory laws (for example, specific legislative acts in the United States and Luxembourg). Varying degrees of criticism exist regarding international space law. 
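The tip speeds quoted above follow from the whole tether rotating rigidly at Phobos's orbital angular rate, so a point's speed scales linearly with its distance from the center of Mars. A quick consistency check, assuming a round 3,390 km Mars radius (the other figures are those given in the text):

```python
# Consistency check of the quoted Phobos-elevator speeds.
# Rigid rotation: v(r) = v_phobos * (r / r_phobos).

MARS_RADIUS_KM = 3390     # assumed mean radius of Mars
PHOBOS_ALT_KM = 6028      # Phobos altitude above the surface (from the text)
V_PHOBOS_KMS = 2.15       # Phobos orbital speed (from the text)

r_phobos = MARS_RADIUS_KM + PHOBOS_ALT_KM

def tether_speed(alt_km):
    """Speed of a point on the elevator at a given altitude above Mars."""
    r = MARS_RADIUS_KM + alt_km
    return V_PHOBOS_KMS * r / r_phobos

v_bottom = tether_speed(28)                   # lower tip, ~28 km up
v_top = tether_speed(PHOBOS_ALT_KM + 6000)    # counterweight tip
print(f"lower tip:  {v_bottom:.2f} km/s")     # ~0.78, matching the quoted 0.77
print(f"upper tip:  {v_top:.2f} km/s")        # ~3.52
print(f"delta-v from surface: {v_bottom - 0.25:.2f} km/s")  # ~0.53
```

The small mismatches against the quoted 0.77 and 0.52 km/s come from the assumed Mars radius; the scaling itself is exact for a rigid tether.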
Some critics accept the Outer Space Treaty, but reject the Moon Agreement. The Outer Space Treaty allows private property rights over outer space natural resources once they are removed from the surface, subsurface or subsoil of the Moon and other celestial bodies. Thus, international space law is capable of managing newly emerging space mining activities, private space transportation, commercial spaceports, and commercial space stations, habitats and settlements. Space mining involving the extraction and removal of natural resources from their natural location is allowable under the Outer Space Treaty. Once removed, those natural resources can be reduced to possession, sold, traded, and explored or used for scientific purposes. International space law allows space mining, specifically the extraction of natural resources. It is generally understood among space law authorities that extracting space resources is allowable, even by private companies for profit. However, international space law prohibits property rights over territories and outer space land. Astrophysicists Carl Sagan and Steven J. Ostro raised the concern that altering the trajectories of asteroids near Earth might pose a collision hazard. They concluded that orbit engineering has both opportunities and dangers: if controls instituted on orbit-manipulation technology were too tight, future spacefaring could be hampered, but if they were too loose, human civilization would be at risk. The Outer Space Treaty After ten years of negotiations between nearly 100 nations, the Outer Space Treaty opened for signature on January 27, 1967. It entered into force as the constitution for outer space on October 10, 1967. The Outer Space Treaty was well received; it was ratified by ninety-six nations and signed by an additional twenty-seven states. 
The outcome has been that the basic foundation of international space law consists of five (arguably four) international space treaties, along with various written resolutions and declarations. The main international treaty is the Outer Space Treaty of 1967; it is generally viewed as the "Constitution" for outer space. By ratifying the Outer Space Treaty of 1967, ninety-eight nations agreed that outer space would belong to the "province of mankind", that all nations would have the freedom to "use" and "explore" outer space, and that both these provisions must be carried out in a way that would "benefit all mankind". The province of mankind principle and the other key terms have not yet been specifically defined (Jasentuliyana, 1992). Critics have complained that the Outer Space Treaty is vague, yet international space law has worked well and has served space commercial industries and interests for many decades. The extraction and removal of Moon rocks, for example, has been treated as legally permissible. The framers of the Outer Space Treaty initially focused on solidifying broad terms first, with the intent to create more specific legal provisions later (Griffin, 1981: 733–734). This is why the members of COPUOS later expanded the Outer Space Treaty norms by articulating more specific understandings, which are found in the "three supplemental agreements": the Rescue and Return Agreement of 1968, the Liability Convention of 1973, and the Registration Convention of 1976 (734). Hobe (2007) explains that the Outer Space Treaty "explicitly and implicitly prohibits only the acquisition of territorial property rights", but extracting space resources is allowable. It is generally understood among space law authorities that extracting space resources is allowable, even by private companies for profit. However, international space law prohibits property rights over territories and outer space land. 
Hobe further explains that there is no mention of “the question of the extraction of natural resources which means that such use is allowed under the Outer Space Treaty” (2007: 211). He also points out that there is an unsettled question regarding the division of benefits from outer space resources in accordance with Article, paragraph 1 of the Outer Space Treaty. The Moon Agreement The Moon Agreement was signed on December 18, 1979, as part of the United Nations Charter, and it entered into force in 1984 after a five-state ratification consensus procedure agreed upon by the members of the United Nations Committee on the Peaceful Uses of Outer Space (COPUOS). As of September 2019, only 18 nations have signed or ratified the treaty. The other three outer space treaties saw a high level of international cooperation in terms of signature and ratification, but the Moon Treaty went further than them by defining the Common Heritage concept in more detail and by imposing specific obligations on the parties engaged in the exploration and/or exploitation of outer space. The Moon Treaty explicitly designates the Moon and its natural resources as part of the Common Heritage of Mankind. Article 11 establishes that lunar resources are "not subject to national appropriation by claim of sovereignty, by means of use or occupation, or by any other means". However, exploitation of resources is suggested to be allowed if it is "governed by an international regime" (Article 11.5), but the rules of such a regime have not yet been established. S. Neil Hosenball, the NASA General Counsel and chief US negotiator for the Moon Treaty, cautioned in 2018 that negotiation of the rules of the international regime should be delayed until the feasibility of exploitation of lunar resources has been established. The spacefaring nations' objection to the treaty is held to be the requirement that extracted resources (and the technology used to that end) must be shared with other nations. 
The similar regime in the United Nations Convention on the Law of the Sea is believed to impede the development of such industries on the seabed. The United States, the Russian Federation, and the People's Republic of China (PRC) have neither signed, acceded to, nor ratified the Moon Agreement. Legal regimes of some countries Luxembourg In February 2016, the Government of Luxembourg said that it would attempt to "jump-start an industrial sector to mine asteroid resources in space" by, among other things, creating a "legal framework" and regulatory incentives for companies involved in the industry. By June 2016, it announced that it would "invest more than in research, technology demonstration, and in the direct purchase of equity in companies relocating to Luxembourg". In 2017, it became the "first European country to pass a law conferring to companies the ownership of any resources they extract from space", and it remained active in advancing space resource public policy in 2018. In 2017, Japan, Portugal, and the UAE entered into cooperation agreements with Luxembourg for mining operations on celestial bodies. In 2018, the Luxembourg Space Agency was created; it provides private companies and organizations working on asteroid mining with financial support. United States Some nations are beginning to promulgate legal regimes for extraterrestrial resource extraction. For example, the United States "SPACE Act of 2015", facilitating private development of space resources consistent with US international treaty obligations, passed the US House of Representatives in July 2015 and the United States Senate in November 2015. On 25 November 2015, U.S. President Barack Obama signed H.R.2262 – the U.S. Commercial Space Launch Competitiveness Act into law. The law recognizes, in § 51303, the right of U.S. citizens to own space resources they obtain and encourages the commercial exploration and use of resources from asteroids. On 6 April 2020 U.S. 
President Donald Trump signed the Executive Order on Encouraging International Support for the Recovery and Use of Space Resources. According to the Order: Americans should have the right to engage in commercial exploration, recovery, and use of resources in outer space; the US does not view space as a "global commons"; and the US opposes the Moon Agreement. Environmental impact A positive impact of asteroid mining has been conjectured as being an enabler of transferring industrial activities into space, such as energy generation. A quantitative analysis of the potential environmental benefits of water and platinum mining in space has been developed, in which potentially large benefits could materialize, depending on the ratio of material mined in space to mass launched into space. Research missions to asteroids and comets Proposed or cancelled Near Earth Asteroid Prospector – concept for a small commercial spacecraft mission by the private company SpaceDev; the project ran into fundraising difficulties and was subsequently cancelled. VIPER rover – cancelled NASA mission to prospect for lunar resources Ongoing and planned Hayabusa2 – ongoing JAXA asteroid sample return mission (arrived at the target in 2018, returned sample in 2020) OSIRIS-REx – NASA asteroid sample return mission (launched on September 8, 2016, arrived at its target in 2018, returned sample on September 24, 2023) Fobos-Grunt 2 – proposed Roscosmos sample return mission to Phobos Completed First of successful missions by country: In fiction The first mention of asteroid mining in science fiction apparently came in Garrett P. Serviss' story Edison's Conquest of Mars, published in the New York Evening Journal in 1898. Several science-fiction video games include asteroid mining. 
Pachycephalosaurus
Pachycephalosaurus (; meaning "thick-headed lizard", from Greek pachys-/ "thickness", kephalon/ "head" and sauros/ "lizard") is a genus of pachycephalosaurid ornithischian dinosaur. The type species, P. wyomingensis, is the only known species, but some researchers argue that Stygimoloch might be a second species, P. spinifer, or a juvenile specimen of P. wyomingensis. It lived during the Maastrichtian age of the Late Cretaceous period in what is now western North America. Remains have been excavated in Montana, South Dakota, Wyoming, and Alberta. Measuring and weighing , the species is known mainly from a single skull, plus a few extremely thick skull roofs (at 22 cm or 9 in thick). More complete fossils were found in later years. Pachycephalosaurus was among the last species of non-avian dinosaurs on Earth before the Cretaceous–Paleogene extinction event. The genus Tylosteus has been synonymized with Pachycephalosaurus, as have the genera Stygimoloch and Dracorex in recent studies. Like other pachycephalosaurids, Pachycephalosaurus was a bipedal herbivore, possessing long, strong legs and somewhat small arms with five-fingered hands. Pachycephalosaurus is the largest-known pachycephalosaur, known for having an extremely thick, slightly domed skull roof; visually, the structure of the skull suggests a "battering ram" function in life, evolved for use as a defensive mechanism or in intra-species combat, similar to what is seen in today's bighorn sheep or muskoxen (with male animals routinely charging and head-butting each other for dominance). This hypothesis has been strongly disputed in recent years, however. History of discovery Remains attributable to Pachycephalosaurus may have been found as early as the 1850s. 
As determined by Donald Baird, in 1859 or 1860, Ferdinand Vandeveer Hayden, an early fossil collector in the American West, collected a bone fragment in the vicinity of the head of the Missouri River, from what is now known to be the Lance Formation of southeastern Montana. This specimen, ANSP 8568, was described by Joseph Leidy in 1872 as belonging to the dermal armor of a reptile or an armadillo-like animal. It became known as Tylosteus. Its actual nature was not revealed until Baird studied it again over a century later and identified it as a squamosal (bone from the back of the skull) of Pachycephalosaurus, including a set of bony knobs corresponding to those found on other specimens of Pachycephalosaurus. Because the name Tylosteus predates Pachycephalosaurus, according to the International Code of Zoological Nomenclature Tylosteus would normally be preferred. In 1985, Baird successfully petitioned to have Pachycephalosaurus used instead of Tylosteus because the latter name had not been used for over fifty years, was based on undiagnostic materials, and had poor geographic and stratigraphic information. This may not be the end of the story, however. Robert Sullivan suggested in 2006 that ANSP 8568 is more like the corresponding bone of Dracorex than that of Pachycephalosaurus. The issue is of uncertain importance, though, if Dracorex actually represents a juvenile Pachycephalosaurus, as has been recently proposed. In 1890, during the Bone Wars between Othniel Charles Marsh and Edward Drinker Cope, one of Marsh's collectors, John Bell Hatcher, collected a partial left squamosal (YPM VP 335) later referred to Stygimoloch spinifer near Lance Creek, Wyoming, in the Lance Formation. Marsh described the squamosal along with the dermal armor of Denversaurus as the body armor of Triceratops in 1892, believing that the squamosal was a spike akin to the plates on Stegosaurus. 
The squamosal spike was even featured in Charles Knight's painting of Cope's ceratopsid Agathaumas, likely based on Marsh's hypothesis. Marsh also named a species of the now-dubious ankylosaur Palaeoscincus in 1892 based on a single tooth (YPM 4810), also collected by Hatcher from the Lance. The tooth was named Palaeoscincus latus, but in 1990, Coombs found the tooth to be from a pachycephalosaurid, possibly even Pachycephalosaurus itself. Hatcher also collected several additional teeth and skull fragments while working for Marsh, though these have yet to be described. P. wyomingensis, the type and currently only valid species of Pachycephalosaurus, was named by Charles W. Gilmore in 1931. He coined it for the partial skull USNM 12031, from the Lance Formation of Niobrara County, Wyoming. Gilmore assigned his new species to Troodon as T. wyomingensis. At the time, paleontologists thought that Troodon, then known only from teeth, was the same as Stegoceras, which had similar teeth. Accordingly, what are now known as pachycephalosaurids were assigned to the family Troodontidae, a misconception not corrected until 1945, by Charles M. Sternberg. In 1943, Barnum Brown and Erich Maren Schlaikjer, with newer, more complete material, established the genus Pachycephalosaurus. They named two species: Pachycephalosaurus grangeri, the type species of their new genus, and Pachycephalosaurus reinheimeri. P. grangeri was based on AMNH 1696, a nearly complete skull from the Hell Creek Formation of Ekalaka, Carter County, Montana. P. reinheimeri was based on what is now DMNS 469, a dome and a few associated elements from the Lance Formation of Corson County, South Dakota. They also referred the older species "Troodon" wyomingensis to their new genus. The two newer species have been considered synonymous with P. wyomingensis since 1983. 
In 2015, some pachycephalosaurid material and a domed parietal attributable to Pachycephalosaurus were discovered in the Scollard Formation of Alberta, implying that the dinosaurs of this era were cosmopolitan and did not have discrete faunal provinces. Description The anatomy of Pachycephalosaurus itself is poorly known, as only skull remains have been described. Pachycephalosaurus is famous for having a large, bony dome on top of its skull, up to thick, which safely cushioned its brain. The rear of the dome was edged with bony knobs, and short bony spikes projected upward from the snout. However, the spikes were probably blunt, not sharp. The skull was short and possessed large, rounded eye sockets that faced forward, suggesting that the animal had binocular vision. Pachycephalosaurus had a small muzzle that ended in a pointed beak. The teeth were tiny, with leaf-shaped crowns. The head was supported by an S- or U-shaped neck. Younger individuals of Pachycephalosaurus may have had flatter skulls, with larger horns projecting from the back of the skull. As the animal grew, the horns shrank and rounded out while the dome grew. Pachycephalosaurus was bipedal and possibly the largest of all pachycephalosaurids. It has been estimated that Pachycephalosaurus was about long and weighed about . Based on other pachycephalosaurids, it probably had a fairly short, thick neck, short arms, a bulky body, long legs, and a heavy tail that was likely held rigid by ossified tendons. Classification Pachycephalosaurus gives its name to Pachycephalosauria, a clade of herbivorous ornithischian dinosaurs that lived during the Late Cretaceous period in North America and Asia. Pachycephalosaurs were part of Marginocephalia, and thus likely more closely related to the ceratopsians than to the ornithopods. Pachycephalosaurus is the most famous member of Pachycephalosauria, even if it is not the best-preserved member. 
The clade also includes Stenopelix, Wannanosaurus, Goyocephale, Stegoceras, Homalocephale, Tylocephale, Sphaerotholus, and Prenocephale. Within the tribe Pachycephalosaurini, Pachycephalosaurus is most closely related to Alaskacephale. Dracorex and Stygimoloch have also been synonymized with Pachycephalosaurus. In 2010, Gregory S. Paul proposed that, while Stygimoloch and Dracorex possibly represent different growth stages of Pachycephalosaurus, Stygimoloch might represent a distinct species, P. spinifer. Mark Witton and Thomas Holtz have regarded this as a plausible interpretation. A phylogenetic analysis from 2021 by Evans and colleagues accepted the validity of the genus Stygimoloch on the basis that it is found in later rock layers than Pachycephalosaurus, but agreed with the consensus that Dracorex represents an ontogimorph of either Stygimoloch or Pachycephalosaurus rather than a distinct taxon. However, David Evans himself noted in a Twitter post that he and his colleagues would also consider Stygimoloch as P. spinifer. Phylogenetic analyses by Evans and colleagues have been used to resolve the relationships within Pachycephalosauridae, consistently finding Pachycephalosaurus to be one of the most derived taxa, closer to Prenocephale and Sphaerotholus than to Stegoceras. The version of the analysis published by Woodruff and colleagues in 2023 is below. Paleobiology Growth Aside from Pachycephalosaurus itself, two other pachycephalosaurs have been described from the latest Cretaceous of the northwestern United States: Stygimoloch spinifer ("thorny Moloch of the Styx") and Dracorex hogwartsia ("dragon king of Hogwarts"). The former is known only from a juvenile skull with a reduced dome and large spikes, while the latter, also known only from a juvenile skull, had a seemingly flat head with short horns. Due to their distinctive head ornamentation, they were seen as separate species for a number of years. 
However, in 2007, they were proposed to be juvenile or female morphs of Pachycephalosaurus. At that year's meeting of the Society of Vertebrate Paleontology, Jack Horner of Montana State University presented evidence, from analysis of the skull of the Dracorex specimen, that it may be a juvenile form of Stygimoloch. In addition, he presented data indicating that both Stygimoloch and Dracorex may be juvenile forms of Pachycephalosaurus. Horner and M.B. Goodwin published their findings in 2009, showing that the spike and skull-dome bones of all three "species" exhibit extreme plasticity, and that both Dracorex and Stygimoloch are known only from juvenile specimens, while Pachycephalosaurus is known only from adult specimens. These observations, in addition to the fact that all three forms lived in the same time and place, led them to conclude that Dracorex and Stygimoloch were simply juvenile Pachycephalosaurus, which lost their spikes and grew domes as they aged. A 2010 study by Nick Longrich and colleagues also supported the hypothesis that all flat-skulled pachycephalosaur species, such as Goyocephale and Homalocephale, were juveniles of dome-headed adults. The discovery of baby skulls assigned to Pachycephalosaurus, described in 2016 from two different bone beds in the Hell Creek Formation, has been presented as further evidence for this hypothesis. The fossils, as described by David Evans, Mark Goodwin, and colleagues, are identical to all three supposed genera in the placement of the rugose knobs on their skulls, and the unique features of Stygimoloch and Dracorex are thus instead morphologically consistent features on a Pachycephalosaurus growth curve. It has been noted that morphological differences between Stygimoloch and Pachycephalosaurus may also partly be due to slight stratigraphic differences. 
The few Stygimoloch specimens that have reliable stratigraphic data were all collected from the upper part of the Hell Creek Formation, whereas Pachycephalosaurus morphs were all collected from the lower part. This has also led to suggestions that Stygimoloch might represent its own species, P. spinifer. In their 2021 redescription of Sinocephale bexelli, Evans and his colleagues treated Stygimoloch (but not Dracorex) as a separate taxon based on their phylogenetic analysis. However, Evans himself has noted that he and his colleagues support the idea of P. spinifer. Dome function It has been widely hypothesized for decades that Pachycephalosaurus and its relatives were the ancient, bipedal equivalents of bighorn sheep or musk oxen, with male individuals ramming each other headlong, straightening the head, neck, and body into a horizontal line to transmit stress during ramming. However, there have also been suggestions that the pachycephalosaurs could not have used their domes in this way. The primary argument raised against head-butting is that the skull roof may not have adequately sustained the impact associated with ramming, together with a lack of definitive evidence of scars or other damage on fossilized Pachycephalosaurus skulls; more recent analyses, however, have uncovered such damage (see below). Furthermore, the cervical and anterior dorsal vertebrae show that the neck was carried in an S- or U-shaped curve rather than a straight orientation, and might have been unfit for transmitting stress from direct head-butting. Lastly, the rounded shape of the skull would lessen the contact surface area during head-butting, resulting in glancing blows. Alternatively, Pachycephalosaurus and other pachycephalosaurids may have engaged in flank-butting during intraspecific combat. In this scenario, an individual may have stood roughly parallel to or faced a rival directly, using intimidation displays to cow its rival. 
If intimidation failed, the Pachycephalosaurus would bend its head downward and to the side, striking the rival on its flank. This hypothesis is supported by the relatively broad torso of most pachycephalosaurs, which would have protected vital organs from trauma. The flank-butting theory was first proposed by Sues in 1978 and expanded upon by Ken Carpenter in 1997. In 2012, a study showed that cranial pathologies in a P. wyomingensis specimen were likely due to agonistic behavior. It was also proposed that similar damage in other pachycephalosaur specimens (previously explained as taphonomic artifacts and bone absorptions) may instead have been due to such behavior. Peterson et al. (2013) studied cranial pathologies among the Pachycephalosauridae and found that 22% of all domes examined had lesions consistent with osteomyelitis, an infection of the bone resulting from penetrating trauma, or from trauma to the tissue overlying the skull that led to an infection of the bone tissue. This high rate of pathology lends more support to the hypothesis that pachycephalosaurid domes were employed in intra-specific combat. Pachycephalosaurus wyomingensis specimen BMR P2001.4.5 was observed to have 23 lesions on its frontal bone, and P. wyomingensis specimen DMNS 469 was observed to have five lesions. The frequency of trauma was comparable across the different genera in the pachycephalosaurid family, despite the fact that these genera vary with respect to the size and architecture of their domes and existed during varying geologic periods. These findings were in stark contrast with the results from analysis of the relatively flat-headed pachycephalosaurids, in which pathology was absent. This would support the hypothesis that those individuals represent either females or juveniles, where intra-specific combat behavior is not expected. 
Histological examination reveals that pachycephalosaurid domes are composed of a unique form of fibrolamellar bone that contains fibroblasts, which play a critical role in wound healing and are capable of rapidly depositing bone during remodeling. Peterson et al. (2013) concluded that, taken together, the frequency of lesion distribution and the bone structure of frontoparietal domes lend strong support to the hypothesis that pachycephalosaurids used their unique cranial structures for agonistic behavior. CT scan comparisons of the skulls of Stegoceras validum, Prenocephale prenes, and several head-striking artiodactyls have also supported pachycephalosaurids as being well equipped for head-butting. Micro-CT scans of a pachycephalosaurid specimen identified as cf. Foraminacephale brevis also support the idea that pachycephalosaurids engaged in head-butting. Diet Scientists do not yet know what these dinosaurs ate. Having very small, ridged teeth, they could not have chewed tough, fibrous plants such as flowering shrubs as effectively as other dinosaurs of the same period. It is assumed that pachycephalosaurs lived on a mixed diet of leaves, seeds, and fruits; the sharp, serrated teeth would have been very effective for shredding plants. It has also been suggested that the diet may have included some meat: the most complete fossil jaw shows serrated, blade-like front teeth reminiscent of those of carnivorous theropods. Paleoecology Nearly all Pachycephalosaurus fossils have been recovered from the Lance Formation and Hell Creek Formation of the northwestern United States. Pachycephalosaurus possibly coexisted alongside additional pachycephalosaur species of the genus Sphaerotholus, as well as Dracorex and Stygimoloch, though these last two genera may represent different growth stages of Pachycephalosaurus itself. 
Other dinosaurs that shared its time and place include Thescelosaurus, the hadrosaurid Edmontosaurus and a possible species of Parasaurolophus, ceratopsians like Triceratops, Torosaurus, Nedoceratops, Tatankaceratops, and Leptoceratops, the ankylosaurid Ankylosaurus, the nodosaurids Denversaurus and Edmontonia, and the theropods Acheroraptor, Dakotaraptor, Ornithomimus, Struthiomimus, Anzu, Leptorhynchos, Pectinodon, Paronychodon, Richardoestesia, and Tyrannosaurus.
Technetium-99m
Technetium-99m (99mTc) is a metastable nuclear isomer of technetium-99 (itself an isotope of technetium), symbolized as 99mTc, that is used in tens of millions of medical diagnostic procedures annually, making it the most commonly used medical radioisotope in the world. Technetium-99m is used as a radioactive tracer and can be detected in the body by medical equipment (gamma cameras). It is well suited to the role because it emits readily detectable gamma rays with a photon energy of 140 keV (these 8.8 pm photons are about the same wavelength as those emitted by conventional X-ray diagnostic equipment), and its half-life for gamma emission is 6.0058 hours (meaning 93.7% of it decays to 99Tc in 24 hours). The relatively "short" physical half-life of the isotope and its biological half-life of 1 day (in terms of human activity and metabolism) allow for scanning procedures which collect data rapidly but keep total patient radiation exposure low. The same characteristics make the isotope unsuitable for therapeutic use. Technetium-99m was discovered as a product of cyclotron bombardment of molybdenum. This procedure produced molybdenum-99, a radionuclide with a longer half-life (2.75 days), which decays to 99mTc. This longer decay time allows for 99Mo to be shipped to medical facilities, where 99mTc is extracted from the sample as it is produced. In turn, 99Mo is usually created commercially by fission of highly enriched uranium in a small number of research and material testing nuclear reactors in several countries. History Discovery In 1938, Emilio Segrè and Glenn T. Seaborg isolated for the first time the metastable isotope technetium-99m, after bombarding natural molybdenum with 8 MeV deuterons in the cyclotron of Ernest Orlando Lawrence's Radiation Laboratory. 
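Two of the figures quoted above follow directly from the half-life and the photon energy; a short check, using the standard value hc ≈ 1239.84 eV·nm:

```python
# Check of two 99mTc figures: the fraction decayed after 24 h for a
# 6.0058 h half-life, and the wavelength of a 140 keV gamma photon.

HALF_LIFE_H = 6.0058
fraction_remaining = 0.5 ** (24 / HALF_LIFE_H)
print(f"decayed after 24 h: {1 - fraction_remaining:.1%}")   # ~93.7%

# lambda = h*c / E, with h*c ~ 1239.84 eV*nm
wavelength_pm = 1239.84 / 140_000 * 1000   # nm -> pm
print(f"140 keV photon wavelength: {wavelength_pm:.2f} pm")  # ~8.86 pm
```

The computed 8.86 pm is the value the text rounds to 8.8 pm.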
In 1970, Seaborg recounted the discovery. Later, in 1940, Emilio Segrè and Chien-Shiung Wu published experimental results of an analysis of fission products of uranium-235, including molybdenum-99, and detected the presence of an isomer of element 43 with a 6-hour half-life, later labelled as technetium-99m. Early medical applications in the United States 99mTc remained a scientific curiosity until the 1950s, when Powell Richards realized the potential of technetium-99m as a medical radiotracer and promoted its use among the medical community. While Richards was in charge of radioisotope production at the Hot Lab Division of the Brookhaven National Laboratory, Walter Tucker and Margaret Greene were working on how to improve the separation purity of the short-lived eluted daughter product iodine-132 from its parent, tellurium-132 (half-life 3.2 days), produced in the Brookhaven Graphite Research Reactor. They detected a trace contaminant which proved to be 99mTc, which was coming from 99Mo and was following tellurium in the chemistry of the separation process for other fission products. Drawing on the similarities with the chemistry of the tellurium-iodine parent-daughter pair, Tucker and Greene developed the first technetium-99m generator in 1958. It was not until 1960 that Richards became the first to suggest the idea of using technetium as a medical tracer. The first US publication to report on medical scanning with 99mTc appeared in August 1963. Sorensen and Archambault demonstrated that intravenously injected carrier-free 99Mo selectively and efficiently concentrated in the liver, becoming an internal generator of 99mTc. After build-up of 99mTc, they could visualize the liver using the 140 keV gamma-ray emission. Worldwide expansion The production and medical use of 99mTc expanded rapidly across the world in the 1960s, benefiting from the development and continuous improvement of gamma cameras.
Americas Between 1963 and 1966, numerous scientific studies demonstrated the use of 99mTc as a radiotracer or diagnostic tool. As a consequence, the demand for 99mTc grew exponentially, and by 1966 Brookhaven National Laboratory was unable to cope with the demand. Production and distribution of 99mTc generators were transferred to private companies. "TechneKow-CS generator", the first commercial 99mTc generator, was produced by Nuclear Consultants, Inc. (St. Louis, Missouri) and Union Carbide Nuclear Corporation (Tuxedo, New York). From 1967 to 1984, 99Mo was produced for Mallinckrodt Nuclear Company at the Missouri University Research Reactor (MURR). Union Carbide actively developed a process, from 1968 to 1972 at the Cintichem facility (formerly the Union Carbide Research Center built in Sterling Forest in Tuxedo, New York), to produce and separate useful isotopes such as 99Mo from the mixed fission products that resulted from the irradiation of highly enriched uranium (HEU) targets in nuclear reactors. The Cintichem process originally used 93% highly enriched U-235 deposited as UO2 on the inside of a cylindrical target. At the end of the 1970s, fission-product radiation was extracted weekly from 20 to 30 reactor-irradiated HEU capsules, using the so-called "Cintichem [chemical isolation] process." The research facility, with its 1961 5-MW pool-type research reactor, was later sold to Hoffman-LaRoche and became Cintichem Inc. In 1980, Cintichem, Inc. began the production and isolation of 99Mo in its reactor, and became the sole U.S. producer of 99Mo during the 1980s. However, in 1989, Cintichem detected an underground leak of radioactive products that led to the reactor's shutdown and decommissioning, putting an end to the commercial production of 99Mo in the USA. The production of 99Mo started in Canada in the early 1970s and was shifted to the NRU reactor in the mid-1970s.
By 1978 the reactor provided technetium-99m in quantities large enough to be processed by AECL's radiochemical division, which was privatized in 1988 as Nordion, now MDS Nordion. In the 1990s a replacement for the aging NRU reactor for the production of radioisotopes was planned. The Multipurpose Applied Physics Lattice Experiment (MAPLE) was designed as a dedicated isotope-production facility. Initially, two identical MAPLE reactors were to be built at Chalk River Laboratories, each capable of supplying 100% of the world's medical isotope demand. However, problems with the MAPLE 1 reactor, most notably a positive power coefficient of reactivity, led to the cancellation of the project in 2008. The first commercial 99mTc generators were produced in Argentina in 1967, with 99Mo produced in the CNEA's RA-1 Enrico Fermi reactor. Besides its domestic market, CNEA supplies 99Mo to some South American countries. Oceania In 1967, the first 99mTc procedures were carried out in Auckland, New Zealand. 99Mo was initially supplied by Amersham, UK, then by the Australian Nuclear Science and Technology Organisation (ANSTO) in Lucas Heights, Australia. Europe In May 1963, Scheer and Maier-Borst were the first to introduce the use of 99mTc for medical applications. In 1968, Philips-Duphar (later Mallinckrodt, today Covidien) marketed the first technetium-99m generator produced in Europe, distributed from Petten, the Netherlands. Shortage Global shortages of technetium-99m emerged in the late 2000s because two aging nuclear reactors (NRU and HFR) that provided about two-thirds of the world's supply of molybdenum-99, which itself has a half-life of only 66 hours, were shut down repeatedly for extended maintenance periods. In May 2009, Atomic Energy of Canada Limited announced the detection of a small leak of heavy water in the NRU reactor, which remained out of service until completion of the repairs in August 2010.
After the observation of gas bubble jets released from one of the deformations of primary cooling water circuits in August 2008, the HFR reactor was stopped for a thorough safety investigation. NRG received in February 2009 a temporary license to operate HFR only when necessary for medical radioisotope production. HFR stopped for repairs at the beginning of 2010 and was restarted in September 2010. Two replacement Canadian reactors (see MAPLE Reactor) constructed in the 1990s were closed before beginning operation, for safety reasons. A construction permit for a new production facility to be built in Columbia, MO was issued in May 2018. Nuclear properties Technetium-99m is a metastable nuclear isomer, as indicated by the "m" after its mass number 99. This means it is a nuclide in an excited (metastable) state that lasts much longer than is typical. The nucleus will eventually relax (i.e., de-excite) to its ground state through the emission of gamma rays or internal conversion electrons. Both of these decay modes rearrange the nucleons without transmuting the technetium into another element. 99mTc decays mainly by gamma emission, slightly less than 88% of the time. (99mTc → 99Tc + γ) About 98.6% of these gamma decays result in 140.5 keV gamma rays and the remaining 1.4% are to gammas of a slightly higher energy at 142.6 keV. These are the radiations that are picked up by a gamma camera when 99mTc is used as a radioactive tracer for medical imaging. The remaining approximately 12% of 99mTc decays are by means of internal conversion, resulting in ejection of high speed internal conversion electrons in several sharp peaks (as is typical of electrons from this type of decay) also at about 140 keV (99mTc → 99Tc+ + e−). These conversion electrons will ionize the surrounding matter like beta radiation electrons would do, contributing along with the 140.5 keV and 142.6 keV gammas to the total deposited dose. 
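The branching figures above can be cross-checked with a short calculation (a Python sketch; the 88%, 98.6%, and 1.4% values are those quoted in the text):

```python
# Branching fractions quoted in the text (approximate values)
P_GAMMA = 0.88    # fraction of 99mTc decays proceeding by gamma emission
P_140_5 = 0.986   # of the gamma decays, fraction emitting a 140.5 keV photon
P_142_6 = 0.014   # remainder of the gamma decays, at 142.6 keV

# Photons emitted per 99mTc decay, by energy
photons_140_5 = P_GAMMA * P_140_5   # ~0.868
photons_142_6 = P_GAMMA * P_142_6   # ~0.012
internal_conversion = 1 - P_GAMMA   # ~0.12, ejected as conversion electrons

print(f"140.5 keV photons per decay: {photons_140_5:.3f}")
print(f"142.6 keV photons per decay: {photons_142_6:.3f}")
print(f"internal-conversion fraction: {internal_conversion:.2f}")
```

So roughly 87 photons at 140.5 keV are emitted per 100 decays; the rest of the decay energy goes into conversion electrons and the much rarer 142.6 keV line.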
Pure gamma emission is the desirable decay mode for medical imaging because other particles deposit more energy in the patient body (radiation dose) than in the camera. Metastable isomeric transition is the only nuclear decay mode that approaches pure gamma emission. 99mTc's half-life of 6.0058 hours is considerably longer (by 14 orders of magnitude, at least) than that of most nuclear isomers, though not unique. This is still a short half-life relative to many other known modes of radioactive decay, and it is in the middle of the range of half-lives for radiopharmaceuticals used for medical imaging. After gamma emission or internal conversion, the resulting ground-state technetium-99 then decays with a half-life of 211,000 years to stable ruthenium-99. This process emits soft beta radiation without a gamma. Such low radioactivity from the daughter product(s) is a desirable feature for radiopharmaceuticals. The decay chain can be summarized as: 99mTc → 99Tc (γ, 141 keV; half-life 6 h), followed by 99Tc → 99Ru (β−, 249 keV; half-life 211,000 y), ruthenium-99 being stable. Production Production of 99Mo in nuclear reactors Neutron irradiation of uranium-235 targets The parent nuclide of 99mTc, 99Mo, is mainly extracted for medical purposes from the fission products created in neutron-irradiated uranium-235 targets, the majority of which is produced in five nuclear research reactors around the world using highly enriched uranium (HEU) targets. Smaller amounts of 99Mo are produced from low-enriched uranium in at least three reactors. Neutron activation of 98Mo Production of 99Mo by neutron activation of natural molybdenum, or molybdenum enriched in 98Mo, is another, currently smaller, route of production. Production of 99mTc/99Mo in particle accelerators Production of "Instant" 99mTc The feasibility of 99mTc production with the 22-MeV-proton bombardment of a 100Mo target in medical cyclotrons was demonstrated in 1971.
The recent shortages of 99mTc reignited interest in the production of "instant" 99mTc by proton bombardment of isotopically enriched 100Mo targets (>99.5%) following the reaction 100Mo(p,2n)99mTc. Canada is commissioning such cyclotrons, designed by Advanced Cyclotron Systems, for 99mTc production at the University of Alberta and the Université de Sherbrooke, and is planning others at the University of British Columbia, TRIUMF, the University of Saskatchewan and Lakehead University. A particular drawback of cyclotron production via (p,2n) on 100Mo is the significant co-production of 99gTc. The preferential in-growth of this nuclide occurs because the reaction cross-section of the pathway leading to the ground state is larger, almost five times higher at the cross-section maximum than that of the metastable state at the same energy. Depending on the time required to process the target material and recover the 99mTc, the amount of 99mTc relative to 99gTc will continue to decrease, in turn reducing the specific activity of 99mTc available. It has been reported that in-growth of 99gTc, as well as the presence of other Tc isotopes, can negatively affect subsequent labelling and/or imaging; however, the use of high-purity 100Mo targets, specified proton beam energies, and appropriate times of use have been shown to be sufficient for yielding 99mTc from a cyclotron comparable to that from a commercial generator. Liquid metal molybdenum-containing targets have been proposed that would aid in streamlined processing, ensuring better production yields. A particular problem associated with the continued reuse of recycled, enriched 100Mo targets is the unavoidable transmutation of the target, as other Mo isotopes are generated during irradiation and cannot be easily removed post-processing. Indirect routes of production of 99Mo Other particle accelerator-based isotope production techniques have been investigated.
The supply disruptions of 99Mo in the late 2000s and the ageing of the producing nuclear reactors forced the industry to look into alternative methods of production. The use of cyclotrons or electron accelerators to produce 99Mo from 100Mo via (p,pn) or (γ,n) reactions, respectively, has been further investigated. The (n,2n) reaction on 100Mo yields a higher reaction cross-section for high-energy neutrons than that of (n,γ) on 98Mo with thermal neutrons. In particular, this method requires accelerators that generate fast neutron spectra, such as ones using D-T or other fusion-based reactions, or high-energy spallation or knock-out reactions. A disadvantage of these techniques is the necessity for enriched 100Mo targets, which are significantly more expensive than natural isotopic targets and typically require recycling of the material, which can be costly, time-consuming, and arduous. Technetium-99m generators Technetium-99m's short half-life of 6 hours makes storage impossible and would make transport very expensive. Instead, its parent nuclide 99Mo is supplied to hospitals after its extraction from the neutron-irradiated uranium targets and its purification in dedicated processing facilities. It is shipped by specialised radiopharmaceutical companies in the form of technetium-99m generators worldwide or directly distributed to the local market. The generators, colloquially known as moly cows, are devices designed to provide radiation shielding for transport and to minimize the extraction work done at the medical facility. A typical dose rate at 1 metre from the 99mTc generator is 20-50 μSv/h during transport. These generators' output declines with time and must be replaced weekly, since the half-life of 99Mo is still only 66 hours. Molybdenum-99 spontaneously decays to excited states of 99Tc through beta decay. Over 87% of the decays lead to the excited state of 99mTc. An electron and an electron antineutrino are emitted in the process (99Mo → 99mTc + e− + ν̄e).
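The weekly replacement cycle mentioned above follows directly from the 66-hour half-life of 99Mo. A minimal sketch (Python, using only the half-life quoted in the text):

```python
T_HALF_MO99_H = 66.0  # 99Mo half-life in hours (value quoted in the text)

def fraction_remaining(hours, t_half=T_HALF_MO99_H):
    """Fraction of a radionuclide left after `hours` of decay."""
    return 2.0 ** (-hours / t_half)

# A generator's 99mTc output tracks its remaining 99Mo:
# after one day ~78% is left; after a week only ~17%.
print(f"after 24 h: {fraction_remaining(24):.1%}")
print(f"after  7 d: {fraction_remaining(7 * 24):.1%}")
```

After a week the 99Mo inventory, and hence the elutable 99mTc activity, has dropped by roughly a factor of six, which is why generators are exchanged on a weekly schedule.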
The electrons are easily shielded for transport, and 99mTc generators are only minor radiation hazards, mostly due to secondary X-rays produced by the electrons (also known as bremsstrahlung). At the hospital, the 99mTc that forms through 99Mo decay is chemically extracted from the technetium-99m generator. Most commercial 99Mo/99mTc generators use column chromatography, in which 99Mo in the form of water-soluble molybdate, MoO42−, is adsorbed onto acid alumina (Al2O3). When the 99Mo decays, it forms pertechnetate, TcO4−, which, because of its single charge, is less tightly bound to the alumina. Pulling normal saline solution through the column of immobilized 99MoO42− elutes the soluble 99mTcO4−, resulting in a saline solution containing the 99mTc as the dissolved sodium salt of the pertechnetate. One technetium-99m generator, holding only a few micrograms of 99Mo, can potentially diagnose 10,000 patients because it will be producing 99mTc strongly for over a week. Preparation Technetium exits the generator in the form of the pertechnetate ion, TcO4−. The oxidation state of Tc in this compound is +7. This is directly suitable for medical applications only in bone scans (it is taken up by osteoblasts) and some thyroid scans (it is taken up in place of iodine by normal thyroid tissues). In other types of scans relying on 99mTc, a reducing agent is added to the pertechnetate solution to bring the oxidation state of the technetium down to +3 or +4, and a ligand is then added to form a coordination complex. The ligand is chosen to have an affinity for the specific organ to be targeted. For example, the exametazime complex of Tc in oxidation state +3 is able to cross the blood–brain barrier and flow through the vessels in the brain for cerebral blood flow imaging. Other ligands include sestamibi for myocardial perfusion imaging and mercaptoacetyltriglycine for the MAG3 scan to measure renal function.
Medical uses In 1970, Eckelman and Richards presented the first "kit" containing all the ingredients required to release the 99mTc, "milked" from the generator, in the chemical form to be administered to the patient. Technetium-99m is used in 20 million diagnostic nuclear medical procedures every year. Approximately 85% of diagnostic imaging procedures in nuclear medicine use this isotope as the radioactive tracer. Klaus Schwochau's book Technetium lists 31 radiopharmaceuticals based on 99mTc for imaging and functional studies of the brain, myocardium, thyroid, lungs, liver, gallbladder, kidneys, skeleton, blood, and tumors. A more recent review is also available. Depending on the procedure, the 99mTc is tagged to (bound with) a pharmaceutical that transports it to its required location. For example, when 99mTc is chemically bound to exametazime (HMPAO), the drug is able to cross the blood–brain barrier and flow through the vessels in the brain for cerebral blood-flow imaging. This combination is also used for labeling white blood cells (99mTc-labeled WBC) to visualize sites of infection. 99mTc sestamibi is used for myocardial perfusion imaging, which shows how well blood flows through the heart. Imaging to measure renal function is done by attaching 99mTc to mercaptoacetyltriglycine (MAG3); this procedure is known as a MAG3 scan. Technetium-99m (Tc-99m) can be readily detected in the body by medical equipment because it emits 140.5 keV gamma rays (about the same wavelength as those emitted by conventional X-ray diagnostic equipment), and its half-life for gamma emission is six hours (meaning 94% of it decays to 99Tc in 24 hours). In addition, it emits virtually no beta radiation, thus keeping the radiation dose low. Its decay product, 99Tc, has a relatively long half-life (211,000 years) and emits little radiation.
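The parenthetical figure above (about 94% decayed in 24 hours) follows directly from the half-life; a one-line check in Python, using the 6.0058 h value given earlier in the article:

```python
T_HALF_H = 6.0058  # 99mTc half-life in hours (value quoted in the article)

# Fraction decayed to ground-state 99Tc after 24 hours
decayed = 1 - 2 ** (-24 / T_HALF_H)
print(f"{decayed:.1%}")  # ~93.7%, which the text rounds to 94%
```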
The short physical half-life of 99mTc and its biological half-life of 1 day, together with its other favourable properties, allow scanning procedures to collect data rapidly and keep total patient radiation exposure low. Chemically, technetium-99m is selectively concentrated in the stomach, thyroid, and salivary glands, and excluded from cerebrospinal fluid; combining it with perchlorate abolishes this selectiveness. Radiation side-effects Diagnostic treatment involving technetium-99m results in radiation exposure to technicians, patients, and passers-by. Typical quantities of technetium administered for immunoscintigraphy tests, such as SPECT tests, are measured in millicuries (mCi) or megabecquerels (MBq) for adults. These doses result in radiation exposures to the patient of around 10 mSv (1000 mrem), the equivalent of about 500 chest X-ray exposures. This level of radiation exposure is estimated by the linear no-threshold model to carry a 1 in 1000 lifetime risk of the patient developing a solid cancer or leukemia. The risk is higher in younger patients, and lower in older ones. Unlike a chest X-ray, the radiation source is inside the patient and will be carried around for a few days, exposing others to second-hand radiation. A spouse who stays constantly by the side of the patient through this time might receive one thousandth of the patient's radiation dose this way. The short half-life of the isotope allows for scanning procedures that collect data rapidly. The isotope also has a very low energy level for a gamma emitter. Its ~140 keV of energy makes it safer for use because of the substantially reduced ionization compared with other gamma emitters. The energy of gammas from 99mTc is about the same as that of the radiation from a commercial diagnostic X-ray machine, although the number of gammas emitted results in radiation doses more comparable to X-ray studies like computed tomography. Technetium-99m has several features that make it safer than other possible isotopes.
Its gamma decay mode can be easily detected by a camera, allowing the use of smaller quantities. Because technetium-99m has a short half-life, its quick decay into the far less radioactive technetium-99 results in a relatively low total radiation dose to the patient per unit of initial activity after administration, as compared with other radioisotopes. In the form administered in these medical tests (usually pertechnetate), technetium-99m and technetium-99 are eliminated from the body within a few days. 3-D scanning technique: SPECT Single-photon emission computed tomography (SPECT) is a nuclear medicine imaging technique using gamma rays. It may be used with any gamma-emitting isotope, including 99mTc. When using technetium-99m, the radioisotope is administered to the patient and the escaping gamma rays are incident upon a moving gamma camera, which computes and processes the image. To acquire SPECT images, the gamma camera is rotated around the patient. Projections are acquired at defined points during the rotation, typically every three to six degrees. In most cases, a full 360° rotation is used to obtain an optimal reconstruction. The time taken to obtain each projection is also variable, but 15–20 seconds are typical. This gives a total scan time of 15–20 minutes. The technetium-99m radioisotope is used predominantly in bone and brain scans. For bone scans, the pertechnetate ion is used directly, as it is taken up by osteoblasts attempting to heal a skeletal injury, or (in some cases) as a reaction of these cells to a tumor (either primary or metastatic) in the bone. In brain scanning, 99mTc is attached to the chelating agent HMPAO to create technetium (99mTc) exametazime, an agent which localizes in the brain according to regional blood flow, making it useful for the detection of stroke and dementing illnesses that decrease regional brain flow and metabolism.
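The acquisition arithmetic above (projections every few degrees at 15–20 seconds each) can be sketched with a short calculation; the quoted 15–20 minute total corresponds roughly to 6° steps (an illustrative Python estimate, not a clinical protocol):

```python
def spect_scan_minutes(step_deg, seconds_per_projection, arc_deg=360):
    """Total acquisition time for a SPECT rotation, in minutes."""
    projections = arc_deg / step_deg
    return projections * seconds_per_projection / 60

# 6-degree steps over a full 360° rotation:
print(spect_scan_minutes(6, 15))  # 15.0 minutes
print(spect_scan_minutes(6, 20))  # 20.0 minutes
# Finer 3-degree sampling doubles the projection count:
print(spect_scan_minutes(3, 20))  # 40.0 minutes
```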
Most recently, technetium-99m scintigraphy has been combined with CT coregistration technology to produce SPECT/CT scans. These employ the same radioligands and have the same uses as SPECT scanning, but are able to provide even finer 3-D localization of high-uptake tissues, in cases where finer resolution is needed. An example is the sestamibi parathyroid scan, which is performed using the 99mTc radioligand sestamibi and can be done in either SPECT or SPECT/CT machines. Bone scan The nuclear medicine technique commonly called the bone scan usually uses 99mTc. It is not to be confused with the "bone density scan", DEXA, which is a low-exposure X-ray test measuring bone density to look for osteoporosis and other diseases where bones lose mass without rebuilding activity. The nuclear medicine technique is sensitive to areas of unusual bone-rebuilding activity, since the radiopharmaceutical is taken up by osteoblast cells, which build bone. The technique therefore is sensitive to fractures and bone reaction to bone tumors, including metastases. For a bone scan, the patient is injected with a small amount of radioactive material, such as 99mTc-medronic acid, and then scanned with a gamma camera. Medronic acid is a phosphate derivative which can exchange places with bone phosphate in regions of active bone growth, so anchoring the radioisotope to that specific region. To view small lesions, especially in the spine, the SPECT imaging technique may be required, but currently in the United States, most insurance companies require separate authorization for SPECT imaging. Myocardial perfusion imaging Myocardial perfusion imaging (MPI) is a form of functional cardiac imaging, used for the diagnosis of ischemic heart disease. The underlying principle is that, under conditions of stress, diseased myocardium receives less blood flow than normal myocardium. MPI is one of several types of cardiac stress test.
As a nuclear stress test, the average radiation exposure is 9.4 mSv, which when compared with a typical two-view chest X-ray (0.1 mSv) is equivalent to 94 chest X-rays. Several radiopharmaceuticals and radionuclides may be used for this, each giving different information. In the myocardial perfusion scans using 99mTc, the radiopharmaceuticals 99mTc-tetrofosmin (Myoview, GE Healthcare) or 99mTc-sestamibi (Cardiolite, Bristol-Myers Squibb) are used. Following this, myocardial stress is induced, either by exercise or pharmacologically with adenosine, dobutamine or dipyridamole (Persantine), which increase the heart rate, or with regadenoson (Lexiscan), a vasodilator. (Aminophylline can be used to reverse the effects of dipyridamole and regadenoson.) Scanning may then be performed with a conventional gamma camera, or with SPECT/CT. Cardiac ventriculography In cardiac ventriculography, a radionuclide, usually 99mTc, is injected, and the heart is imaged to evaluate the flow through it, in order to evaluate coronary artery disease, valvular heart disease, congenital heart diseases, cardiomyopathy, and other cardiac disorders. Functional brain imaging Usually the gamma-emitting tracer used in functional brain imaging is 99mTc-HMPAO (hexamethylpropylene amine oxime, exametazime). The similar 99mTc-EC tracer may also be used. These molecules are preferentially distributed to regions of high brain blood flow, and act to assess brain metabolism regionally, in an attempt to diagnose and differentiate the different causal pathologies of dementia. When used with the 3-D SPECT technique, they compete with brain FDG-PET scans and fMRI brain scans as techniques to map the regional metabolic rate of brain tissue.
Sentinel-node identification The radioactive properties of 99mTc can be used to identify the predominant lymph nodes draining a cancer, such as breast cancer or malignant melanoma. This is usually performed at the time of biopsy or resection. 99mTc-labelled filtered sulfur colloid or technetium (99mTc) tilmanocept is injected intradermally around the intended biopsy site. The general location of the sentinel node is determined with the use of a handheld scanner with a gamma-sensor probe that detects the technetium-99m-labeled tracer previously injected around the biopsy site. An injection of methylene blue or isosulfan blue is done at the same time to visibly dye any draining nodes blue. An incision is then made over the area of highest radionuclide accumulation, and the sentinel node is identified within the incision by inspection; the isosulfan blue dye will usually stain blue any lymph nodes draining from the area around the tumor. Immunoscintigraphy Immunoscintigraphy incorporates 99mTc into a monoclonal antibody, an immune system protein, capable of binding to cancer cells. A few hours after injection, medical equipment is used to detect the gamma rays emitted by the 99mTc; higher concentrations indicate where the tumor is. This technique is particularly useful for detecting hard-to-find cancers, such as those affecting the intestines. These modified antibodies are sold by the German company Hoechst (now part of Sanofi-Aventis) under the name Scintimun. Blood pool labeling When 99mTc is combined with a tin compound, it binds to red blood cells and can therefore be used to map circulatory system disorders. It is commonly used to detect gastrointestinal bleeding sites, as well as to measure ejection fraction, heart wall motion abnormalities, and abnormal shunting, and to perform ventriculography.
Pyrophosphate for heart damage A pyrophosphate ion with 99mTc adheres to calcium deposits in damaged heart muscle, making it useful to gauge damage after a heart attack. Sulfur colloid for spleen scan The sulfur colloid of 99mTc is scavenged by the spleen, making it possible to image the structure of the spleen. Meckel's diverticulum Pertechnetate is actively accumulated and secreted by the mucoid cells of the gastric mucosa; therefore, 99mTc-pertechnetate is injected into the body when looking for ectopic gastric tissue, as is found in a Meckel's diverticulum, with Meckel's scans. Pulmonary Carbon inhalation aerosol labeled with technetium-99m (Technegas) is indicated for the visualization of pulmonary ventilation and the evaluation of pulmonary embolism.
Physical sciences
Group 7
Chemistry
9104734
https://en.wikipedia.org/wiki/Surgeon
Surgeon
In medicine, a surgeon is a medical doctor who performs surgery. Although traditions have differed across times and places, a modern surgeon is a licensed physician who received the same medical training as other physicians before specializing in surgery. In some countries and jurisdictions, the title of 'surgeon' is restricted to maintain the integrity of the craft group in the medical profession. Specialists legally recognized as surgeons also include practitioners of podiatry, dentistry, and veterinary medicine. It is estimated that surgeons perform over 300 million surgical procedures globally each year. History The first person to document a surgery was the 6th century BC Indian physician-surgeon Sushruta. He specialized in cosmetic plastic surgery and even documented an open rhinoplasty procedure. His magnum opus Suśruta-saṃhitā is one of the most important surviving ancient treatises on medicine and is considered a foundational text of both Ayurveda and surgery. The treatise addresses all aspects of general medicine, but the translator G. D. Singhal dubbed Sushruta "the father of surgical intervention" on account of the extraordinarily accurate and detailed accounts of surgery to be found in the work. After the eventual decline of the Sushruta School of Medicine in India, surgery was largely ignored until the Islamic Golden Age surgeon Al-Zahrawi (936–1013) re-established surgery as an effective medical practice. He is considered the greatest medieval surgeon to have emerged from the Islamic world, and has also been described as the father of surgery. His greatest contribution to medicine is the Kitab al-Tasrif, a thirty-volume encyclopedia of medical practice. He was the first physician to describe an ectopic pregnancy, and the first to identify the hereditary nature of haemophilia.
His pioneering contributions to surgical procedures and instruments had an enormous impact on surgery, but it was not until the 18th century that surgery emerged as a distinct medical discipline in England. In Europe, surgery was mostly associated with barber-surgeons, who also used their hair-cutting tools to undertake surgical procedures, often on the battlefield and also for their employers. With advances in medicine and physiology, the professions of barbers and surgeons diverged; by the 19th century barber-surgeons had virtually disappeared, and surgeons were almost invariably qualified doctors who had specialized in surgery. Surgeon continued, however, to be used as the title for military medical officers until the end of the 19th century, and the title of Surgeon General continues to exist for both senior military medical officers and senior government public health officers. Titles in the Commonwealth In 1950, the Royal College of Surgeons of England (RCS) in London began to offer surgeons a formal status via RCS membership. The title Mister became a badge of honour, and today, in many Commonwealth countries, a qualified doctor who, after at least four years' training, obtains a surgical qualification (formerly Fellow of the Royal College of Surgeons, but now also Member of the Royal College of Surgeons or a number of other diplomas) is given the honour of being allowed to revert to calling themselves Mr, Miss, Mrs or Ms in the course of their professional practice, but this time the meaning is different. It is sometimes assumed that the change of title implies consultant status (and some mistakenly think non-surgical consultants are Mr too), but the length of postgraduate medical training outside North America is such that a qualified surgeon may be years away from obtaining such a post: many doctors previously obtained these qualifications in the senior house officer grade, and remained in that grade when they began sub-specialty training.
The distinction of Mr (etc.) is also used by surgeons in the Republic of Ireland, some states of Australia, Barbados, New Zealand, South Africa, Zimbabwe, and some other Commonwealth countries. In August 2021, the Royal Australasian College of Surgeons announced that it was advocating for this practice to be phased out and began encouraging the use of the gender-neutral title Dr or appropriate academic titles such as Professor. Military titles In many English-speaking countries the military title of surgeon is applied to any medical practitioner, due to the historical evolution of the term. The US Army Medical Corps retains various surgeon occupation codes within its officer pay grades for military personnel dedicated to performing surgery on wounded soldiers. Specialties Some physicians who are general practitioners or specialists in family medicine or emergency medicine may perform limited ranges of minor, common, or emergency surgery. Anesthesia often accompanies surgery, and anesthesiologists and nurse anesthetists may oversee this aspect of surgery. Surgeon's assistants, surgical nurses, and surgical technologists are trained professionals who support surgeons. In the United States, the Department of Labor description of a surgeon is "a physician who treats diseases, injuries, and deformities by invasive, minimally-invasive, or non-invasive surgical methods, such as using instruments, appliances, or by manual manipulation". Around the world, the array of 'surgical' pathology that a surgeon manages does not always require surgical methods. For example, surgeons treat diverticulitis conservatively using antibiotics and bowel rest.
In some cases of small bowel obstruction, particularly where a patient has had previous abdominal surgery, the surgeon treats the patient with fluid resuscitation and nasogastric decompression of the stomach, which often resolves the intestinal obstruction in cases where adhesions are the aetiology. The same is true for other craft groups in surgery.

Pioneer surgeons

Christiaan Barnard (cardiac surgery, first heart transplantation)
Alfred Blalock (first modern-day successful open heart surgery, in 1944)
Nina Starr Braunwald (first female cardiac surgeon)
Dorothy Lavinia Brown (first female African-American surgeon)
Victor Chang (Australian pioneer of heart transplantation)
Harvey Cushing (pioneer, and often considered the father of, modern neurosurgery)
Eleanor Davies-Colley (surgeon and founder of the South London Hospital for Women and Children)
Michael DeBakey (educator and innovator in the field of cardiac surgery)
René Favaloro (first surgeon to perform bypass surgery)
Svyatoslav Fyodorov (creator of radial keratotomy)
Harold Gillies (pioneer of plastic surgery)
Jessie Gray (first female chief of surgery)
William Stewart Halsted (initiated surgical residency training in the U.S., pioneer in many fields)
Michael R. Harrison (pioneer of fetal surgery)
Sir Victor Horsley (neurosurgery)
John Hunter (Scottish, viewed as the father of modern surgery, performed hundreds of dissections, served as a model for Dr. Jekyll)
Gavriil Ilizarov (inventor of the Ilizarov apparatus for lengthening limb bones and of the surgical method named after him, Ilizarov surgery)
Charles Kelman (invented phacoemulsification, the technique of modern cataract surgery)
Lars Leksell (neurosurgery, inventor of radiosurgery)
C. Walton Lillehei (labeled the "father of modern-day open heart surgery")
Joseph Lister (pioneer of antiseptic surgery; Listerine named in his honour)
B. K. Misra (first neurosurgeon in the world to perform image-guided surgery for aneurysms, first in South Asia to perform stereotactic radiosurgery, first in India to perform awake craniotomy and laparoscopic spine surgery)
Ioannis Pallikaris (Greek surgeon, performed the first LASIK procedure on a human eye, developed Epi-LASIK)
Fidel Pagés (pioneer of epidural anesthesia)
Wilder Penfield (neurosurgery)
Gholam A. Peyman (inventor of LASIK)
Nikolay Pirogov (founder of field surgery)
Jennie Smillie Robertson (first female surgeon in Canada)
Valery Shumakov (pioneer of artificial organ implantation)
Maria Siemionow (pioneer of near-total face transplant surgery)
Thomas E. Starzl (pioneer of the development of liver transplantation)
Sushruta (the first to document an operation of open rhinoplasty)
Paul Tessier (French pioneer of craniofacial surgery)
Mary Edwards Walker (first female surgeon in the United States)
Gazi Yasargil (Turkish neurosurgeon, founder of microneurosurgery)
al-Zahrawi (regarded as one of the greatest medieval surgeons and a father of surgery)

Organizations and fellowships

ACFAS
FACS
FRACDS
FRACS
FRCS
FRCS (Canada)
FRCS (Edinburgh)
FRCSI (Ireland)
MRCS