339747
https://en.wikipedia.org/wiki/Gutta-percha
Gutta-percha
Gutta-percha is a tree of the genus Palaquium in the family Sapotaceae, which is primarily used to create a high-quality latex of the same name. The material is rigid, naturally biologically inert, resilient, electrically nonconductive, and thermoplastic, most commonly sourced from Palaquium gutta; it is a polymer of isoprene that forms a rubber-like elastomer. The word "gutta-percha" comes from the plant's name in Malay: getah translates as 'sticky gum' and perca is the name of a less-sought-after gutta tree. The Western term is therefore likely a derivative amalgamation of the original native names.

Description
Palaquium gutta trees are tall and up to in trunk diameter. The leaves are evergreen, alternate or spirally arranged, simple, entire, long, glossy green above, and often yellow or glaucous below. The flowers are produced in small clusters along the stems, each flower with a white corolla with four to seven (mostly six) acute lobes. The fruit is an ovoid berry containing one to four seeds; in many species, the fruit is edible. In Australia, gutta-percha is a common name specifically used for the euphorbiaceous tree Excoecaria parvifolia, which yields an aromatic, heavy, dark-brown timber.

Chemistry
Chemically, gutta-percha is a polyterpene, a polymer of isoprene (polyisoprene), specifically trans-1,4-polyisoprene. The cis structure of polyisoprene is the common latex elastomer. While latex rubbers are amorphous in molecular structure, gutta-percha (the trans structure) crystallizes, leading to a more rigid material. It exists in alpha and beta forms, with the alpha form being brittle at room temperature.

Uses

Historic
Long before gutta-percha was introduced into the Western world, it was used in a less-processed form by the natives of the Malay Archipelago for making knife handles, walking sticks, and other purposes. The first European to study this material was John Tradescant, who collected it in the Far East in 1656. He named this material "Mazer wood". William Montgomerie, a medical officer in imperial service, introduced gutta-percha into practical use in the West. He was the first to appreciate the potential of this material in medicine, and he was awarded the gold medal by the Royal Society of Arts, London, in 1843. Scientifically classified in 1843, it was found to be a useful natural thermoplastic. In 1851, of gutta-percha was imported into Britain. During the second half of the 19th century, gutta-percha was used for many domestic and industrial purposes, and it became a household word. It was particularly important for the manufacture of underwater telegraph cables. Compared to rubber, it does not degrade in seawater, is not damaged by marine life, and maintains good electrical insulation. These properties, along with its mouldability and flexibility, made it ideal for the purpose, with no other material to match it in the 19th century. The use in electrical cables generated huge demand, which led to unsustainable harvesting and a collapse of supply.

Electrical
Gutta-percha latex is biologically inert, resilient, and a good electrical insulator with a high dielectric strength. Michael Faraday discovered its value as an insulator soon after the material's introduction to Britain in 1843. Allowing the latex to evaporate and coagulate in the sun produced a material which could be made flexible again with hot water, but which did not become brittle, unlike rubber prior to the discovery of vulcanization.
By 1845, telegraph wires insulated with gutta-percha were being manufactured in the UK. It served as the insulating material for early undersea telegraph cables, including the first transatlantic telegraph cable. The material was a major constituent of Chatterton's compound, used as an insulating sealant for telegraph and other electrical cables. The dielectric constant of dried gutta-percha ranges from 2.56 to 3.01. Resistivity of dried gutta-percha ranges from to . Since about 1940, polyethylene has supplanted gutta-percha as an electrical insulator.

Other
In the mid-19th century, gutta-percha was used to make furniture, notably by the Gutta Percha Company, established in 1847. Several of these ornate, revival-style pieces were shown at the 1851 Great Exhibition in Hyde Park, London. The company also made a range of utensils. The "guttie" golf ball (which had a solid gutta-percha core) revolutionized the game. Gutta-percha was used to make "mourning" jewelry, because it was dark in color and could be easily molded into beads or other shapes. Pistol hand grips and rifle shoulder pads were also made from gutta-percha, since it was hard and durable, though it fell into disuse when synthetic plastics such as Bakelite became available. Gutta-percha was used in canes and walking sticks. In 1856, United States Representative Preston Brooks used a cane made of gutta-percha as a weapon in his attack on Senator Charles Sumner. In the 1860s, gutta-percha was used to reinforce the soles of football players' boots before it was banned by The Football Association in the first codified set of rules in 1863. Gutta-percha was briefly used in bookbinding until the advent of vulcanization. The wood of many species is also valuable.

Today

Art
Gutta-percha is used as a resist in silk painting, including some newer forms of batik.

Dentistry
The same bioinertness that made it suitable for marine cables also means it does not readily react within the human body. It is used in a variety of surgical devices and during root canal therapy. It is the predominant material used to obturate, or fill, the empty space inside the root of a tooth after it has undergone endodontic therapy. Its physical and chemical properties, including its inertness and biocompatibility, melting point, ductility, and malleability, make it important in endodontics, e.g., as gutta-percha points. Zinc oxide is added to reduce brittleness and improve plasticity. Barium sulfate is added to provide radiopacity so that its presence and location can be verified in dental X-ray images.

Substitutes
Gutta-percha remained an industrial staple well into the 20th century, when it was gradually replaced with superior synthetic materials, such as Bakelite. A similar and cheaper natural material called balatá was often used in gutta-percha's place. The two materials are almost identical, and balatá is often called gutta-balatá.
Technology
Materials
null
339838
https://en.wikipedia.org/wiki/Molecular%20genetics
Molecular genetics
Molecular genetics is a branch of biology that addresses how differences in the structure or expression of DNA molecules manifest as variation among organisms. Molecular genetics often applies an "investigative approach" to determine the structure and/or function of genes in an organism's genome using genetic screens. The field of study is based on the merging of several sub-fields in biology: classical Mendelian inheritance, cellular biology, molecular biology, biochemistry, and biotechnology. It integrates these disciplines to explore topics like genetic inheritance, gene regulation and expression, and the molecular mechanisms behind various life processes. A key goal of molecular genetics is to identify and study genetic mutations. Researchers search for mutations in a gene or induce mutations in a gene to link a gene sequence to a specific phenotype. Molecular genetics is therefore a powerful methodology for linking mutations to genetic conditions, which may aid the search for treatments of various genetic diseases.

History
The discovery of DNA as the blueprint for life and breakthroughs in molecular genetics research came from the combined works of many scientists. In 1869, chemist Johann Friedrich Miescher, who was researching the composition of white blood cells, discovered and isolated from the cell nucleus a new molecule that he named nuclein. This was the first discovery of the molecule DNA, later determined to be the molecular basis of life. He determined it was composed of hydrogen, oxygen, nitrogen and phosphorus. Biochemist Albrecht Kossel identified nuclein as a nucleic acid and provided its name, deoxyribonucleic acid (DNA). He continued to build on that by isolating the basic building blocks of DNA and RNA, the nucleotide bases: adenine, guanine, thymine, cytosine, and uracil. His work on nucleotides earned him a Nobel Prize in Physiology or Medicine. In the mid-1800s, Gregor Mendel, who became known as one of the fathers of genetics, made great contributions to the field of genetics through his various experiments with pea plants, in which he was able to discover principles of inheritance such as recessive and dominant traits, without knowing what genes were composed of. In the late 19th century, anatomist Walther Flemming discovered what we now know as chromosomes and the separation process they undergo during mitosis. His work, together with that of Theodor Boveri, gave rise to the chromosomal theory of inheritance, which helped explain some of the patterns Mendel had observed much earlier. For molecular genetics to develop as a discipline, several scientific discoveries were necessary. The discovery of DNA as a means to transfer the genetic code of life from one cell to another and between generations was essential for identifying the molecule responsible for heredity. Molecular genetics arose initially from studies involving genetic transformation in bacteria. In 1944, Avery, MacLeod and McCarty isolated DNA from a virulent strain of S. pneumoniae, and using just this DNA were able to convert a harmless strain to virulence. They called the uptake, incorporation and expression of DNA by bacteria "transformation". This finding suggested that DNA is the genetic material of bacteria. Bacterial transformation is often induced by conditions of stress, and the function of transformation appears to be repair of genomic damage. In 1950, Erwin Chargaff derived rules that offered evidence of DNA being the genetic material of life.
These were "1) that the base composition of DNA varies between species and 2) in natural DNA molecules, the amount of adenine (A) is equal to the amount of thymine (T), and the amount of guanine (G) is equal to the amount of cytosine (C)." These rules, known as Chargaff's rules, helped to understand of molecular genetics. In 1953 Francis Crick and James Watson, building upon the X-ray crystallography work done by Rosalind Franklin and Maurice Wilkins, were able to derive the 3-D double helix structure of DNA. The phage group was an informal network of biologists centered on Max Delbrück that contributed substantially to molecular genetics and the origins of molecular biology during the period from about 1945 to 1970. The phage group took its name from bacteriophages, the bacteria-infecting viruses that the group used as experimental model organisms. Studies by molecular geneticists affiliated with this group contributed to understanding how gene-encoded proteins function in DNA replication, DNA repair and DNA recombination, and on how viruses are assembled from protein and nucleic acid components (molecular morphogenesis). Furthermore, the role of chain terminating codons was elucidated. One noteworthy study was performed by Sydney Brenner and collaborators using "amber" mutants defective in the gene encoding the major head protein of bacteriophage T4. This study demonstrated the co-linearity of the gene with its encoded polypeptide, thus providing strong evidence for the "sequence hypothesis" that the amino acid sequence of a protein is specified by the nucleotide sequence of the gene determining the protein.  The isolation of a restriction endonuclease in E. coli by Arber and Linn in 1969 opened the field of genetic engineering. Restriction enzymes were used to linearize DNA for separation by electrophoresis and Southern blotting allowed for the identification of specific DNA segments via hybridization probes. In 1971, Berg utilized restriction enzymes to create the first recombinant DNA molecule and first recombinant DNA plasmid.  In 1972, Cohen and Boyer created the first recombinant DNA organism by inserting recombinant DNA plasmids into E. coli, now known as bacterial transformation, and paved the way for molecular cloning.  The development of DNA sequencing techniques in the late 1970s, first by Maxam and Gilbert, and then by Frederick Sanger, was pivotal to molecular genetic research and enabled scientists to begin conducting genetic screens to relate genotypic sequences to phenotypes. Polymerase chain reaction (PCR) using Taq polymerase, invented by Mullis in 1985, enabled scientists to create millions of copies of a specific DNA sequence that could be used for transformation or manipulated using agarose gel separation. A decade later, the first whole genome was sequenced (Haemophilus influenzae), followed by the eventual sequencing of the human genome via the Human Genome Project in 2001. The culmination of all of those discoveries was a new field called genomics that links the molecular structure of a gene to the protein or RNA encoded by that segment of DNA and the functional expression of that protein within an organism. Today, through the application of molecular genetic techniques, genomics is being studied in many model organisms and data is being collected in computer databases like NCBI and Ensembl. The computer analysis and comparison of genes within and between different species is called bioinformatics, and links genetic mutations on an evolutionary scale. 
Central dogma
The central dogma plays a key role in the study of molecular genetics. The central dogma states that DNA replicates itself, DNA is transcribed into RNA, and RNA is translated into proteins. Along with the central dogma, the genetic code is used in understanding how RNA is translated into proteins. Replication of DNA and transcription from DNA to mRNA occur in the nucleus, while translation from RNA to proteins occurs in the ribosome. The genetic code is made of four interchangeable parts of the DNA molecule, called "bases": adenine, cytosine, uracil (in RNA; thymine in DNA), and guanine. It is redundant, meaning that multiple combinations of these bases (which are read in triplets) produce the same amino acid. Proteomics and genomics are fields in biology that come out of the study of molecular genetics and the central dogma.

Structure of DNA
An organism's genome is made up of its entire set of DNA and is responsible for its genetic traits, function and development. The composition of DNA itself is an essential component of the field of molecular genetics; it is the basis of how DNA is able to store genetic information, pass it on, and be in a format that can be read and translated. DNA is a double-stranded molecule, with each strand oriented in an antiparallel fashion. Nucleotides are the building blocks of DNA, each composed of a sugar molecule, a phosphate group and one of four nitrogenous bases: adenine, guanine, cytosine, and thymine. A single strand of DNA is held together by covalent bonds, while the two antiparallel strands are held together by hydrogen bonds between the nucleotide bases. Adenine binds with thymine, and cytosine binds with guanine. It is the sequence of these four bases that forms the genetic code for all biological life and contains the information for all the proteins the organism will be able to synthesize. Its unique structure allows DNA to store and pass on biological information across generations during cell division. At cell division, cells must be able to copy their genome and pass it on to daughter cells. This is possible due to the double-stranded structure of DNA: because one strand is complementary to its partner strand, each strand can act as a template for the formation of a new complementary strand. This is why the process of DNA replication is known as a semiconservative process.

Techniques

Forward genetics
Forward genetics is a molecular genetics technique used to identify genes or genetic mutations that produce a certain phenotype. In a genetic screen, random mutations are generated with mutagens (chemicals or radiation) or transposons, and individuals are screened for the specific phenotype. Often, a secondary assay in the form of a selection may follow mutagenesis where the desired phenotype is difficult to observe, for example in bacteria or cell cultures. The cells may be transformed using a gene for antibiotic resistance or a fluorescent reporter so that the mutants with the desired phenotype are selected from the non-mutants. Mutants exhibiting the phenotype of interest are isolated and a complementation test may be performed to determine if the phenotype results from more than one gene. The mutant genes are then characterized as dominant (resulting in a gain of function), recessive (showing a loss of function), or epistatic (the mutant gene masks the phenotype of another gene). Finally, the location and specific nature of the mutation is mapped via sequencing.
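As a concrete illustration of the central dogma section above, here is a minimal Python sketch of the DNA to mRNA to protein information flow. The template strand is hypothetical and the codon table is deliberately truncated for brevity; a real table assigns all 64 codons.

```python
# Deliberately truncated codon table; a full table covers all 64 codons.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "UUC": "Phe",   # UUU and UUC both give Phe:
    "GGU": "Gly", "GGC": "Gly", "GGA": "Gly",   # the code is redundant
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def transcribe(template: str) -> str:
    """Template DNA strand -> mRNA: complementary bases, with U in place of T."""
    pair = {"A": "U", "T": "A", "C": "G", "G": "C"}
    return "".join(pair[base] for base in template)

def translate(mrna: str) -> list[str]:
    """Read the mRNA in triplets (codons) until a stop codon is reached."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE[mrna[i:i + 3]]
        if residue == "STOP":
            break
        protein.append(residue)
    return protein

template = "TACAAACCAATT"      # hypothetical template strand
mrna = transcribe(template)    # -> "AUGUUUGGUUAA"
print(translate(mrna))         # -> ['Met', 'Phe', 'Gly']
```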
Forward genetics is an unbiased approach and often leads to many unanticipated discoveries, but may be costly and time consuming. Model organisms like the nematode worm Caenorhabditis elegans, the fruit fly Drosophila melanogaster, and the zebrafish Danio rerio have been used successfully to study phenotypes resulting from gene mutations.

Reverse genetics
Reverse genetics is the term for molecular genetics techniques used to determine the phenotype resulting from an intentional mutation in a gene of interest. The phenotype is used to deduce the function of the un-mutated version of the gene. Mutations may be random or intentional changes to the gene of interest. Mutations may be a missense mutation caused by nucleotide substitution, a nucleotide addition or deletion to induce a frameshift mutation, or a complete addition/deletion of a gene or gene segment. The deletion of a particular gene creates a gene knockout, where the gene is not expressed and a loss of function results (e.g. knockout mice). Missense mutations may cause total loss of function or result in partial loss of function, known as a knockdown. Knockdown may also be achieved by RNA interference (RNAi). Alternatively, genes may be substituted into an organism's genome (also known as a transgene) to create a gene knock-in and result in a gain of function by the host. Although these techniques have some inherent bias regarding the decision to link a phenotype to a particular function, reverse genetics is much faster than forward genetics because the gene of interest is already known.

Molecular genetic tools
Molecular genetics is a scientific approach that utilizes the fundamentals of genetics as a tool to better understand the molecular basis of disease and biological processes in organisms. Below are some tools readily employed by researchers in the field.

Microsatellites
Microsatellites, or simple sequence repeats (SSRs), are short repeating segments of DNA, with repeat units of up to 6 nucleotides, at a particular location in the genome that are used as genetic markers. Researchers can analyze these microsatellites in techniques such as DNA fingerprinting and paternity testing, since these repeats are highly unique to individuals and families. They can also be used in constructing genetic maps and studying genetic linkage to locate the gene or mutation responsible for a specific trait or disease. Microsatellites can also be applied in population genetics to study comparisons between groups.

Genome-wide association studies
Genome-wide association studies (GWAS) are a technique that relies on single nucleotide polymorphisms (SNPs) to study genetic variations in populations that can be associated with a particular disease. The Human Genome Project mapped the entire human genome and has made this approach more readily available and cost effective for researchers to implement. To conduct a GWAS, researchers use two groups: one group that has the disease being studied, and a control group that does not. DNA samples are obtained from participants, and their genomes can then be read by laboratory equipment and quickly surveyed to compare participants and look for SNPs that can potentially be associated with the disease. This technique allows researchers to pinpoint genes and locations of interest in the human genome that they can then study further to identify the cause of the disease.
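At its statistical core, a GWAS compares allele counts between the two groups at each SNP. Below is a minimal sketch of such a single-SNP association test using SciPy's chi-square test of independence; all counts are invented for illustration.

```python
from scipy.stats import chi2_contingency

# Hypothetical allele counts at a single SNP (all numbers invented):
# rows: cases / controls; columns: risk allele / alternative allele.
table = [[220, 180],   # cases
         [150, 250]]   # controls

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.2e}")
# A real GWAS repeats a test like this at up to millions of SNPs, so a
# multiple-testing-corrected significance threshold (e.g. 5e-8) is used.
```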
Karyotyping
Karyotyping allows researchers to analyze chromosomes during metaphase of mitosis, when they are in a condensed state. Chromosomes are stained and visualized through a microscope to look for any chromosomal abnormalities. This technique can be used to detect congenital genetic disorders such as Down syndrome, identify the sex of embryos, and diagnose some cancers that are caused by chromosome mutations such as translocations.

Modern applications

Genetic engineering
Genetic engineering is an emerging field of science in which researchers are able to leverage molecular genetic technology to modify the DNA of organisms and create genetically modified and enhanced organisms for industrial, agricultural and medical purposes. This can be done through genome editing techniques, which can involve modifying base pairings in a DNA sequence, or adding and deleting certain regions of DNA.

Gene editing
Gene editing allows scientists to alter or edit an organism's DNA. One way to do this is through the CRISPR/Cas9 technique, which was adapted from a naturally occurring genome defense system in bacteria. This technique relies on the protein Cas9, which allows scientists to make a cut in strands of DNA at a specific location, and it uses a specialized RNA guide sequence to ensure the cut is made in the proper location in the genome. Scientists then use the cell's DNA repair pathways to induce changes in the genome; this technique has wide implications for disease treatment.

Personalized medicine
Molecular genetics has wide implications for medical advancement, and understanding the molecular basis of a disease allows the opportunity for more effective diagnostics and therapies. One of the goals of the field is personalized medicine, where an individual's genetics can help determine the cause of a disease and tailor its cure, potentially allowing for more individualized and more effective treatment approaches. For example, certain genetic variations could make individuals more receptive to a particular drug, while others could carry a higher risk of adverse reactions to treatment. This information would allow researchers and clinicians to make the most informed decisions about treatment efficacy for patients, rather than relying on the standard trial-and-error approach.

Forensic genetics
Forensic genetics plays an essential role in criminal investigations through the use of various molecular genetic techniques. One common technique is DNA fingerprinting, which is done using a combination of molecular genetic techniques like polymerase chain reaction (PCR) and gel electrophoresis. PCR is a technique that allows a target DNA sequence to be amplified, meaning even a tiny quantity of DNA from a crime scene can be extracted and replicated many times to provide a sufficient amount of material for analysis. Gel electrophoresis allows the DNA fragments to be separated based on size; the pattern that is derived is known as a DNA fingerprint and is unique to each individual. This combination of molecular genetic techniques allows a DNA sample to be extracted, amplified, analyzed and compared with others, and is a standard technique used in forensics.
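A minimal sketch of the final comparison step is shown below. In modern practice the patterns are typically summarized as repeat counts at standardized STR (microsatellite) loci; the locus names here are common forensic markers, but all repeat counts are invented for illustration.

```python
# Hypothetical STR profiles: locus -> repeat counts on the two chromosome copies.
# Locus names follow common forensic markers; the numbers are invented.
crime_scene = {"D8S1179": (12, 14), "D21S11": (29, 30), "TH01": (7, 9)}
suspect     = {"D8S1179": (12, 14), "D21S11": (29, 30), "TH01": (7, 9)}

def profiles_match(a: dict, b: dict) -> bool:
    """Two profiles match if every typed locus has the same genotype."""
    return a.keys() == b.keys() and all(
        sorted(a[locus]) == sorted(b[locus]) for locus in a
    )

print(profiles_match(crime_scene, suspect))  # True
```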
Biology and health sciences
Genetics
Biology
339887
https://en.wikipedia.org/wiki/Theridiidae
Theridiidae
Theridiidae, also known as the tangle-web spiders, cobweb spiders and comb-footed spiders, is a large family of araneomorph spiders first described by Carl Jakob Sundevall in 1833. This diverse, globally distributed family includes over 3,000 species in 124 genera, and is the most common arthropod found in human dwellings throughout the world. Theridiid spiders are both entelegyne, meaning that the females have a genital plate, and ecribellate, meaning that they spin sticky capture silk instead of woolly silk. They have a comb of serrated bristles (setae) on the tarsus of the fourth leg. The family includes some model organisms for research, including the medically important widow spiders. Widow spiders are important in studies characterizing their venom and its clinical manifestations, but they are also used in research on spider silk and sexual biology, including sexual cannibalism. Anelosimus spiders are also model organisms, used for the study of sociality because it has evolved frequently within the genus, allowing comparative studies across species, and because the genus contains species varying from solitary to permanently social. These spiders are also a promising model for the study of inbreeding, because all permanently social species are highly inbred. The Hawaiian Theridion grallator is used as a model to understand the selective forces and the genetic basis of color polymorphism within species. T. grallator is known as the "happyface" spider, as certain morphs have a pattern resembling a smiley face or a grinning clown face on their yellow body.

Webs
Theridiids often build tangle space webs, hence the common name, but the family exhibits a large diversity of web forms. Many trap ants and other ground-dwelling insects using elastic, sticky silk trap lines leading to the soil surface. Webs remain in place for extended periods and are expanded and repaired, but no regular pattern of web replacement has been observed. The well-studied kleptoparasitic members of Argyrodinae (Argyrodes, Faiditus, and Neospintharus) live in the webs of larger spiders and pilfer small prey caught by their host's web. They eat prey killed by the host spider, consume silk from the host web, and sometimes attack and eat the host itself. Theridiid gumfoot-webs consist of frame lines that anchor them to the surroundings and of support threads, which possess viscid silk. These can have either a central retreat (Achaearanea-type) or a peripheral retreat (Latrodectus-type). Building gum-foot lines is a unique, stereotyped behaviour, and is likely homologous for Theridiidae and its sister family Nesticidae. Among webs without gumfooted lines, some contain viscid silk (Theridion-type) and some are sheet-like and contain no viscid silk (Coleosoma-type). However, there are many undescribed web forms.

Genera
The largest genus is Theridion, with over 600 species, but it is not monophyletic. Parasteatoda, previously Achaearanea, is another large genus that includes the North American common house spider.
, the World Spider Catalog accepts the following genera: Achaearanea Strand, 1929 – Africa, Asia, Australia, South America, Central America Achaearyopa Barrion & Litsinger, 1995 – Philippines Achaeridion Wunderlich, 2008 – Turkey Allothymoites Ono, 2007 – China, Japan Ameridion Wunderlich, 1995 – Central America, Caribbean, Mexico, South America Anatea Berland, 1927 – Australia Anatolidion Wunderlich, 2008 – Africa, Europe, Turkey Anelosimus Simon, 1891 – Asia, Africa, North America, South America, Oceania, Central America, Caribbean Argyrodella Saaristo, 2006 – Seychelles Argyrodes Simon, 1864 – Africa, Asia, Oceania, North America, South America, Jamaica Ariamnes Thorell, 1869 – Costa Rica, South America, Asia, Africa, Oceania, Mexico, Cuba Asagena Sundevall, 1833 – North America, Asia, Europe, Algeria Asygyna Agnarsson, 2006 – Madagascar Audifia Keyserling, 1884 – Guinea-Bissau, Congo, Brazil Bardala Saaristo, 2006 – Seychelles Borneoridion Deeleman & Wunderlich, 2011 – Indonesia Brunepisinus Yoshida & Koh, 2011 – Indonesia Cabello Levi, 1964 – Venezuela Cameronidion Wunderlich, 2011 – Malaysia Campanicola Yoshida, 2015 – Asia Canalidion Wunderlich, 2008 – Russia Carniella Thaler & Steinberger, 1988 – Europe, Angola, Asia Cephalobares O. Pickard-Cambridge, 1871 – Sri Lanka, China Cerocida Simon, 1894 – Brazil, Venezuela, Guyana Chikunia Yoshida, 2009 – Asia Chorizopella Lawrence, 1947 – South Africa Chrosiothes Simon, 1894 – North America, South America, Central America, Caribbean, Asia Chrysso O. Pickard-Cambridge, 1882 – North America, South America, Central America, Asia, Trinidad, Europe Coleosoma O. Pickard-Cambridge, 1882 – United States, South America, Seychelles, Asia, New Zealand Coscinida Simon, 1895 – Asia, Africa Craspedisia Simon, 1894 – Brazil Crustulina Menge, 1868 – Ukraine, United States, Africa, Oceania, Asia Cryptachaea Archer, 1946 – South America, North America, Oceania, Central America, Asia, Trinidad, Belgium Cyllognatha L. Koch, 1872 – Samoa, Australia, India Deelemanella Yoshida, 2003 – Indonesia Dipoena Thorell, 1869 – North America, Oceania, Asia, Central America, South America, Caribbean, Africa, Europe Dipoenata Wunderlich, 1988 – Panama, South America, Malta Dipoenura Simon, 1909 – Asia, Sierra Leone Echinotheridion Levi, 1963 – South America Emertonella Bryant, 1945 – North America, Asia, Papua New Guinea Enoplognatha Pavesi, 1880 – Asia, Europe, Australia, Africa, North America, South America Episinus Walckenaer, 1809 – Asia, South America, Europe, North America, New Zealand, Central America, Africa, Caribbean Euryopis Menge, 1868 – Asia, North America, South America, Jamaica, Europe, Oceania, Africa, Panama Eurypoena Wunderlich, 1992 – Canary Is. Exalbidion Wunderlich, 1995 – Central America, South America, Mexico Faiditus Keyserling, 1884 – South America, North America, Central America, Caribbean, Asia Gmogala Keyserling, 1890 – Papua New Guinea, Australia Grancanaridion Wunderlich, 2011 – Canary Is. 
Guaraniella Baert, 1984 – Brazil, Paraguay Hadrotarsus Thorell, 1881 – Oceania, Belgium, Taiwan Helvibis Keyserling, 1884 – South America, Panama, Trinidad Helvidia Thorell, 1890 – Indonesia Hentziectypus Archer, 1946 – Caribbean, Panama, North America, South America Heterotheridion Wunderlich, 2008 – Turkey, Russia, China Hetschkia Keyserling, 1886 – Brazil Histagonia Simon, 1895 – South Africa Icona Forster, 1955 – New Zealand Jamaitidion Wunderlich, 1995 – Jamaica Janula Strand, 1932 – Asia, South America, Australia, Panama, Trinidad Keijiella Yoshida, 2016 – Asia Kochiura Archer, 1950 – Chile, Turkey, Brazil Landoppo Barrion & Litsinger, 1995 – Philippines Lasaeola Simon, 1881 – Europe, North America, Panama, South America, Asia Latrodectus Walckenaer, 1805 – South America, North America, Asia, Europe, Oceania, Africa Macaridion Wunderlich, 1992 – Europe Magnopholcomma Wunderlich, 2008 – Australia Meotipa Simon, 1894 – Asia, Papua New Guinea Molione Thorell, 1892 – Asia Moneta O. Pickard-Cambridge, 1871 – Oceania, Asia, Seychelles Montanidion Wunderlich, 2011 – Malaysia Nanume Saaristo, 2006 – Seychelles Neopisinus Marques, Buckup & Rodrigues, 2011 – Panama, Caribbean, South America, North America Neospintharus Exline, 1950 – North America, Asia, South America, Central America Neottiura Menge, 1868 – Asia, Europe, Algeria Nesopholcomma Ono, 2010 – Japan Nesticodes Archer, 1950 – Asia, New Zealand Nihonhimea Yoshida, 2016 – Asia, Seychelles, Oceania, Mexico Nipponidion Yoshida, 2001 – Japan Nojimaia Yoshida, 2009 – China, Japan Ohlertidion Wunderlich, 2008 – Greenland, Russia Okumaella Yoshida, 2009 – Japan Paidiscura Archer, 1950 – Europe, Algeria, Asia Parasteatoda Archer, 1946 – Asia, Oceania, Cuba, North America, Argentina, Seychelles Paratheridula Levi, 1957 – United States, Chile Pholcomma Thorell, 1869 – Oceania, North America, Asia, South America Phoroncidia Westwood, 1835 – Asia, Africa, North America, Caribbean, South America, Oceania, Europe, Costa Rica Phycosoma O. Pickard-Cambridge, 1879 – North America, Asia, Africa, Jamaica, Panama, Brazil, New Zealand Phylloneta Archer, 1950 – Asia, United States, Spain Platnickina Koçak & Kemal, 2008 – North America, Asia, Africa Proboscidula Miller, 1970 – Angola, Rwanda Propostira Simon, 1894 – India, Sri Lanka Pycnoepisinus Wunderlich, 2008 – Kenya Rhomphaea L. Koch, 1872 – Asia, Africa, South America, Oceania, North America, Europe, Central America, Saint Vincent and the Grenadines Robertus O. Pickard-Cambridge, 1879 – Europe, North America, Asia, Congo Ruborridion Wunderlich, 2011 – India Rugathodes Archer, 1950 – Asia, North America Sardinidion Wunderlich, 1995 – Africa, Europe Selkirkiella Berland, 1924 – Chile, Argentina Sesato Saaristo, 2006 – Seychelles Seycellesa Koçak & Kemal, 2008 – Seychelles Simitidion Wunderlich, 1992 – Europe, Asia, Canada Spheropistha Yaginuma, 1957 – Japan, China Spinembolia Saaristo, 2006 – Seychelles Spintharus Hentz, 1850 – Pakistan, Caribbean, Mexico, Brazil Steatoda Sundevall, 1833 – Oceania, North America, Asia, Europe, South America, Africa Stemmops O. 
Pickard-Cambridge, 1894 – South America, North America, Central America, Caribbean, Asia Stoda Saaristo, 2006 – Seychelles Styposis Simon, 1894 – United States, South America, Central America, Congo Takayus Yoshida, 2001 – Asia Tamanidion Wunderlich, 2011 – Malaysia Tekellina Levi, 1957 – United States, Brazil, Asia Theonoe Simon, 1881 – Tanzania, Europe, North America Theridion Walckenaer, 1805 – Asia, North America, Central America, Europe, South America, Africa, Oceania, Caribbean Theridula Emerton, 1882 – Spain, Africa, North America, Central America, Asia, South America Thwaitesia O. Pickard-Cambridge, 1881 – Panama, South America, Africa, Asia, Oceania, Trinidad Thymoites Keyserling, 1884 – South America, Central America, Asia, North America, Caribbean, Greenland, Tanzania Tidarren Chamberlin & Ivie, 1934 – Africa, Yemen, North America, Argentina, Costa Rica Tomoxena Simon, 1895 – Indonesia, India Wamba O. Pickard-Cambridge, 1896 – North America, South America, Panama Wirada Keyserling, 1886 – Mexico, South America Yaginumena Yoshida, 2002 – Asia Yoroa Baert, 1984 – Papua New Guinea, Australia Yunohamella Yoshida, 2007 – Asia, Europe Zercidium Benoit, 1977 – St. Helena About 35 extinct genera have also been placed in the family. The oldest known stem-group member of the family is Cretotheridion, from the Cenomanian-aged Burmese amber of Myanmar.
Biology and health sciences
Spiders
Animals
340167
https://en.wikipedia.org/wiki/Giant%20anteater
Giant anteater
The giant anteater (Myrmecophaga tridactyla) is an insectivorous mammal native to Central and South America. It is one of four living species of anteaters, of which it is the largest member. The only extant member of the genus Myrmecophaga, it is classified with sloths in the order Pilosa. This species is mostly terrestrial, in contrast to other living anteaters and sloths, which are arboreal or semiarboreal. The giant anteater is in length, with weights of for males and for females. It is recognizable by its elongated snout, bushy tail, long fore claws, and distinctively colored pelage. The giant anteater is found in multiple habitats, including grassland and rainforest. It forages in open areas and rests in more forested habitats. It feeds primarily on ants and termites, using its fore claws to dig them up and its long, sticky tongue to collect them. Though giant anteaters live in overlapping home ranges, they are mostly solitary except during mother-offspring relationships, aggressive interactions between males, and when mating. Mother anteaters carry their offspring on their backs until weaning them. The giant anteater is listed as vulnerable by the International Union for Conservation of Nature. It has been extirpated from many parts of its former range. Threats to its survival include habitat destruction, fire, and poaching for fur and bushmeat, although some anteaters inhabit protected areas. With its distinctive appearance and habits, the anteater has been featured in pre-Columbian myths and folktales, as well as modern popular culture.

Taxonomy
The giant anteater received its binomial name from Carl Linnaeus in 1758. Its generic name, Myrmecophaga, and specific name, tridactyla, are both Greek, meaning "anteater" and "three fingers", respectively. Myrmecophaga jubata was used as a synonym. Three subspecies have been suggested: M. t. tridactyla (Venezuela and the Guianas south to northern Argentina), M. t. centralis (Central America to northwestern Colombia and northern Ecuador), and M. t. artata (northeastern Colombia and northwestern Venezuela). The giant anteater is grouped with the semiarboreal northern and southern tamanduas in the family Myrmecophagidae. Together with the family Cyclopedidae, whose only extant member is the arboreal silky anteater, the two families comprise the suborder Vermilingua. Anteaters and sloths belong to the order Pilosa and share the superorder Xenarthra (cladogram below) with the Cingulata (whose only extant members are armadillos). The two orders of Xenarthra split 66 million years ago (Mya) during the Late Cretaceous epoch. Anteaters and sloths diverged around 55 Mya, between the Paleocene and Eocene epochs. The lineages of Cyclopes and other extant anteaters split around 40 Mya in the Oligocene epoch, while the last common ancestor of Myrmecophaga and Tamandua existed 10 Mya in the Late Miocene subepoch. Through most of their evolutionary history, anteaters were confined to South America, which was formerly an island continent. Following the formation of the Isthmus of Panama about 3 Mya, anteaters of all three extant genera invaded Central America as part of the Great American Interchange. The fossil record for anteaters is generally sparse. Known fossils include the Pliocene genus Palaeomyrmidon, a close relative to the silky anteater; Protamandua, from the Miocene, which is closer to the giant anteater and the tamanduas; and Neotamandua, which is believed to have close affinities to Myrmecophaga.
Protamandua was larger than the silky anteater but smaller than a tamandua, while Neotamandua was larger still, falling somewhere between a tamandua and a giant anteater. Protamandua did not appear to be specialized for walking or climbing, but it may have had a prehensile tail. Neotamandua, though, is unlikely to have had a prehensile tail, and its feet were similar in form to those of both the tamanduas and the giant anteater. The species Neotamandua borealis was suggested to be an ancestor of the latter. Another member of the genus Myrmecophaga has been recovered from the Montehermosan Monte Hermoso Formation in Argentina and was described by Kraglievitch in 1934 as Nunezia caroloameghinoi. The species was reclassified as Myrmecophaga caroloameghinoi by S. E. Hirschfeld in 1976. The giant anteater is the most terrestrial of the living anteater species; specialization for life on the ground appears to be a recent trait in anteater evolution. The transition to life on the ground could have been aided by the expansion of open habitats such as savanna in South America and the abundance of native colonial insects, such as termites, that provided a larger potential food source. Both the giant anteater and the southern tamandua are well represented in the fossil record of the late Pleistocene and early Holocene.

Characteristics
The giant anteater can be identified by its large size, long, narrow muzzle, and long bushy tail. It has a total body length of . Males weigh and females weigh , making the giant anteater the biggest extant species in its suborder. The head of the giant anteater, at long, is particularly elongated, even when compared to other anteaters. Its cylindrical snout takes up most of its head. Its eyes, ears and mouth are relatively small. It has poor eyesight, but a powerful sense of smell, 40 times that of a human. While there is some difference in size and shape between the sexes, males being larger and more robust, telling them apart from a distance can be difficult. The male's genitals are located within its body, and upon closer examination, its urogenital opening is smaller and farther from the anus than the female's. The female's two mammary glands are located between the front legs. Even for an anteater, the neck is especially thick compared to the back of the head, and a small hump protrudes behind the neck. The coat is mostly greyish, brown or black with mottled white. The front legs are white with black-ringed wrists and hands, and the hind legs are dark. From the throat to the shoulders runs a thick black mark with white outlines and sharp tips. The body ends in a brown tail. The coat hairs are long, especially on the tail, which makes the appendage look larger than it actually is. An erect mane stretches along the back. The bold pattern was thought to be disruptive camouflage, but a 2009 study suggests it is warning coloration. The giant anteater has broad ribs. It has five toes on each foot. Three toes on the front feet have claws, which are particularly large on the third digits. It walks on its front knuckles, similarly to gorillas and chimpanzees. This allows the giant anteater to walk without scraping its claws on the ground. The middle digits, which support most of its weight, have long metacarpophalangeal joints and bent interphalangeal joints. Unlike the front feet, the hind feet have short claws on all five toes and are plantigrade.
As a "hook-and-pull" digger, the giant anteater has a large supraspinous fossa which gives the teres major more leverage—increasing the front limbs' pulling power—and the triceps muscle helps control the thickened middle digit. The giant anteater has a low body temperature for a mammal, about , a few degrees lower than a typical mammalian temperature of . Xenarthrans in general tend to have lower metabolic rates than most other mammals, a trend thought to correlate with their dietary specializations and low mobility. Feeding anatomy The giant anteater has no teeth and is capable of very limited jaw movement. It relies on the rotation of the two halves of its lower jaw, held together by a ligament connecting the rami, to open and close its mouth. This is accomplished by its chewing muscles, which are relatively underdeveloped. Jaw depression creates an oral opening large enough for the slender tongue to flick out. It has a length of around and is more triangular in the back but becomes more rounded towards the front and ends in a rounded tip. The tongue has backward-curving papillae and is extremely moist due to the large salivary glands. The tongue can only move forwards and backwards due to the tiny mouth and shape of the snout. During feeding, the animal relies on the direction of its head for aim. When fully extended, the tongue reaches , and can move in and out around 160 times per minute (nearly three times per second). A unique sternoglossus muscle, a combination of the sternohyoid and the hyoglossus, anchors the tongue directly to the sternum. The hyoid apparatus is large, V-shaped and flexible, and supports the tongue as it moves. The buccinator muscles loosen and tighten, allowing food in and preventing it from falling out. When retracted, the tongue is held in the oropharynx, preventing it from blocking respiration. The anteater rubs its tongue against its palate to smash the insects for swallowing. Unlike other mammals, giant anteaters swallow almost constantly when feeding. The giant anteater's stomach, similar to a bird's gizzard, has hardened folds to crush food, assisted by some sand and soil. The giant anteater cannot produce stomach acid of its own, but digests using the formic acid of its prey. Distribution and status The giant anteater is native to Central and South America; its known range stretches from Honduras to Bolivia and northern Argentina, and fossil remains have been found as far north as northwestern Sonora, Mexico. It is largely absent from the Andes and has been fully extirpated in Uruguay, Belize, El Salvador, and Guatemala, as well as in parts of Costa Rica, Brazil, Argentina, and Paraguay. The species can live in both tropical rainforests and arid shrublands, provided enough prey is present to sustain it. The species is listed as vulnerable by the International Union for Conservation of Nature, due to the number of regional extirpations, and under Appendix II by CITES, tightly restricting international trade in specimens. By 2014, the total population declined more than 30 percent "over the last three generations". In 1994, some 340 giant anteaters died due to wildfires at Emas National Park in Brazil. The animal is particularly vulnerable to fires as its coat can easily catch ablaze and it is too slow to escape. Human-induced threats include collision with vehicles, attacks by dogs and destruction of habitat. One study of anteater mortality along roads found that they are likely to be struck on linear roads near native plants. 
A 2018 study in Brazil found that (1) roads were more likely to be detrimental to anteaters because of habitat fragmentation rather than vehicle accidents, (2) 18–20% of satisfactory anteater habitat did not reach minimum patch size, (3) 0.1–1% of its range had dangerously high road density, (4) 32–36% of the anteater's distribution represented critical areas for its survival, and (5) more conservation opportunities existed in the north of the country. A 2020 study in the Brazilian cerrado found that road mortality can cut population growth by 50 percent at the local level. The giant anteater is commonly hunted in Bolivia, both as a trophy and for food. The animal's thick, leathery hide is used to make horse-riding equipment in the Chaco. In Venezuela, it is slain for its claws. Giant anteaters are also killed for their perceived danger, particularly during threat displays. The biggest ecological strengths of the species are its wide range and adaptability. The Amazon, the Pantanal and the cerrado have various protected areas where the anteater finds refuge. In Argentina, some local governments list it as a national heritage species, affording it official protection.

Behaviour and ecology
Despite its iconic status, the giant anteater is little studied in the wild, and research has been limited to certain areas. The species may use multiple habitats. A 2007 study of giant anteaters in the Brazilian Pantanal found that the animals move and forage in open areas and rest in forest; the latter provides shade when the temperature rises and retains heat when the temperature drops. Anteaters may travel an average of per day. Giant anteaters can be either diurnal or nocturnal. A 2006 study in the Pantanal found them to be mostly nocturnal when it was warm, becoming more active in daylight hours as the temperature dropped. Diurnal giant anteaters have been observed at Serra da Canastra. Nocturnality in anteaters may be a response to human disturbance. Giant anteaters prefer dense brush to sleep in, but when it gets cooler, they may use tall grass. When they need to rest, they carve a shallow cavity in the ground. The animal sleeps curled up with its bushy tail over its body, both to keep warm and to camouflage it from predators. One anteater was recorded sleeping flat on its side with the tail unfolded one morning, possibly to allow its body to absorb the sun's rays for warmth. Giant anteaters sometimes enter water to bathe and can even swim across wide rivers. They are also able to climb and have been recorded ascending both termite mounds and trees while foraging. One individual was observed attempting to climb a tree by rearing up and grabbing onto a branch above it.

Spacing
Giant anteater home ranges vary in size depending on the location, ranging from as small as in Serra da Canastra National Park, Brazil, to as large as in Iberá Natural Reserve, Argentina. Individuals mostly live alone, aside from young who stay with their mothers. Anteaters keep in contact via secretions from their anal glands and tree markings. They appear to be able to recognize each other's saliva by scent. Females are more tolerant of each other than males are, and thus are more likely to be found closer together. Males are more likely to engage in agonistic behaviors, which start with the combatants approaching and circling each other while uttering a "harrr" noise. This can escalate into chasing and actual fighting. Combat includes wrestling and slashing with the claws. Fighting anteaters may emit roars or bellows.
Males are possibly territorial.

Foraging
This animal is an insectivore, feeding mostly on ants or termites. In areas that experience regular flooding, like the Pantanal and the Venezuelan-Colombian Llanos, anteaters mainly feed on ants, because termites are less available. Conversely, anteaters at Emas National Park eat mainly termites, which are numerous in the grassland habitat. At Serra da Canastra, during the wet season (October to March) anteaters eat mainly ants, while during the dry season (May to September) they switch to termites. Anteaters track prey by scent. After finding a nest, the animal tears it open with its claws and inserts its long, sticky tongue to collect its prey (which includes eggs, larvae and adult insects). An anteater attacks up to 200 nests in one day, feeding at each for as long as a minute, and consumes a total of around 35,000 insects. The anteater may be driven away from a nest by the chemical or biting attacks of soldiers. Termites may rely on their fortified mounds for protection or use underground or wide-spreading tunnels to escape. Other prey include the larvae of beetles and western honey bees. Anteaters may target termite mounds with bee hives. Captive anteaters are fed mixtures of milk and eggs as well as mealworms and ground beef. To drink, an anteater may dig for water when none is available at the surface, creating waterholes for other animals.

Reproduction and parenting
Giant anteaters mate all year. A male trails an estrous female, who partially raises her tail. Courting pairs are known to share the same insect nest during feeding. Mating involves the female lying on her side and the male hunching over her. A couple may stay together for up to three days and mate multiple times during that period. Giant anteaters have a 170–190-day gestation period, which ends with the birth of a single pup. There is some evidence that the species can experience delayed implantation. Females give birth standing upright. Pups are born weighing , with eyes closed for the first six days. The mother carries her dependent young on her back. The pup camouflages itself against its mother by aligning its black and white band with hers. The mother grooms and nurses her young, which communicates with her using sharp whistles. After three months, grooming declines and the young starts to eat more solid food. Both grooming and nursing bouts end at 10 months, which is also when the young leaves its mother. Giant anteaters are sexually mature in 2.5–4 years.

Mortality
Giant anteaters may live around 15 years in the wild, but can live twice as long in captivity. The adult giant anteater has few predators; adults are hunted only by jaguars and pumas. They typically flee from danger by galloping but, if cornered, will rear up on their hind legs and attack with the claws. The front claws of the giant anteater are formidable weapons, capable of killing a jaguar. The giant anteater is a host of the acanthocephalan intestinal parasites Gigantorhynchus echinodiscus and Moniliformis monoechinus.

Interactions with humans

Attacks
Although they are usually not a threat to humans, giant anteaters can inflict severe wounds with their front claws. Between 2010 and 2012, two hunters were killed by giant anteaters in Brazil; in both cases, the attacks appeared to be defensive behaviors. In April 2007, an anteater at the Florencio Varela Zoo slashed and killed a zookeeper with its front claws.
In culture
In the mythology and folklore of the indigenous peoples of the Amazon Basin, the giant anteater is depicted as both a trickster and a comical figure due to its appearance. In one Shipibo tale, an anteater stole a jaguar's coat after challenging it to a diving contest, leaving the jaguar with the anteater's own pelt. In a Yarabara myth, the evil ogre Ucara is punished by the sun and turned into an anteater, so that with his long snout and small mouth he can no longer speak. The Kayapo people wear masks of various animals and spirits, including the anteater, during naming and initiation ceremonies. They believe that women who touch anteater masks or men who fall while wearing them will die or become disabled. During the Spanish colonization of the Americas, the giant anteater was among the native fauna taken to Europe for display. It was popularly thought that there were only female anteaters and that they reproduced with their noses, a misconception corrected by the naturalist Félix de Azara. In the 20th century, Salvador Dalí wrote imaginatively that the giant anteater "reaches sizes bigger than the horse, possesses enormous ferocity, has exceptional muscle power, is a terrifying animal." Dalí depicted an anteater in the style of The Great Masturbator. It was used as a bookplate for André Breton, who compared the temptations a man experiences in life to what "the tongue of the anteater must offer to the ant." The 1940 Max Fleischer cartoon Ants in the Plants features a colony of ants fighting off a villainous anteater. It may have been a commentary on France's Maginot Line during the Phoney War. An anteater is also a character in the comic strip B.C. This character was the inspiration for Peter the Anteater, the University of California, Irvine team mascot. In the Stephen King miniseries Kingdom Hospital, the character Antubis appears in the form of an anteater-like creature with razor-sharp teeth.
Biology and health sciences
Xenarthra
Animals
340240
https://en.wikipedia.org/wiki/Partition%20of%20a%20set
Partition of a set
In mathematics, a partition of a set is a grouping of its elements into non-empty subsets, in such a way that every element is included in exactly one subset. Every equivalence relation on a set defines a partition of this set, and every partition defines an equivalence relation. A set equipped with an equivalence relation or a partition is sometimes called a setoid, typically in type theory and proof theory.

Definition and notation
A partition of a set X is a set of non-empty subsets of X such that every element x in X is in exactly one of these subsets (i.e., the subsets are nonempty mutually disjoint sets). Equivalently, a family of sets P is a partition of X if and only if all of the following conditions hold: The family P does not contain the empty set (that is, ∅ ∉ P). The union of the sets in P is equal to X (that is, ⋃P = X); the sets in P are said to exhaust or cover X. The sets in P are pairwise disjoint (that is, A ∩ B = ∅ whenever A and B are distinct sets in P).
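For finite sets, these conditions translate directly into a computable check. The following is a minimal Python sketch; the function name and example sets are illustrative, not from the article.

```python
from itertools import combinations

def is_partition(family: list[set], X: set) -> bool:
    """Check the defining conditions of a partition of a finite set X.

    The family is passed as a list because Python sets cannot contain
    other (mutable) sets.
    """
    no_empty = all(part != set() for part in family)                     # ∅ ∉ P
    covers = set().union(*family) == X                                   # ∪P = X
    disjoint = all(a.isdisjoint(b) for a, b in combinations(family, 2))  # pairwise disjoint
    return no_empty and covers and disjoint

X = {1, 2, 3, 4, 5}
print(is_partition([{1, 2}, {3}, {4, 5}], X))     # True
print(is_partition([{1, 2}, {2, 3}, {4, 5}], X))  # False: 2 lies in two parts
print(is_partition([set(), X], X))                # False: contains the empty set
```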
Mathematics
Set theory
null
340421
https://en.wikipedia.org/wiki/Tagetes
Tagetes
Tagetes () is a genus of 50 species of annual or perennial, mostly herbaceous plants in the family Asteraceae. They are among several groups of plants known in English as marigolds. The genus Tagetes was described by Carl Linnaeus in 1753. Originally called cempōhualxōchitl by the Nahua peoples, these plants are native to Central and Southern Mexico and several other Latin American countries. Some species have become naturalized around the world. One species, T. minuta, is considered a noxious invasive plant in some areas.

Description
Tagetes species vary in size from 0.1 to 2.2 m tall. Most species have pinnate green leaves. Blooms naturally occur in golden, orange, yellow, and white colors, often with maroon highlights. Floral heads are typically 4–6 cm in diameter (occasionally as small as 1 cm), generally with both ray florets and disc florets. In horticulture, they tend to be planted as annuals, although the perennial species are gaining popularity. Like all marigolds, they have a fibrous root system. Depending on the species, Tagetes species grow well in almost any sort of soil. Most horticultural selections grow best in soil with good drainage, and some cultivars are known to have good tolerance to drought.

Nomenclature
The Latin Tagētes derives from Tages, a figure in Etruscan mythology born from the plowed earth. The name likely refers to the ease with which plants of this genus come up each year, either from the seeds produced in the previous year or from stems which regrow from the stump already in place. The common name in English, marigold, is derived from Mary's gold, in honor of the Virgin Mary, a name first applied to a similar plant native to Europe, Calendula officinalis. The most commonly cultivated varieties of Tagetes are known variously as African marigolds (usually referring to cultivars and hybrids of Tagetes erecta) or French marigolds (usually referring to hybrids and cultivars of Tagetes patula, many of which were developed in France). The so-called signet marigolds are hybrids derived mostly from Tagetes tenuifolia.

Cultivation and uses
Depending on the species, marigold foliage has a musky, pungent scent, though some varieties have been bred to be scentless. Due to antibacterial thiophenes exuded by the roots, Tagetes should not be planted near any legume crop. Some of the perennial species are deer-, rabbit-, rodent- and javelina or peccary-resistant. T. minuta (khakibush or huacatay), originally from South America, has been used as a source of essential oil for the perfume industry, known as tagette or "marigold oil", and as a flavourant in the food and tobacco industries. It is commonly cultivated in South Africa, where the species is also a useful pioneer plant in the reclamation of disturbed land. The florets of Tagetes erecta are rich in the orange-yellow carotenoid lutein and are used as a food colour (INS number E161b) in the European Union for foods such as pasta, vegetable oil, margarine, mayonnaise, salad dressing, baked goods, confectionery, dairy products, ice cream, yogurt, citrus juice and mustard. In the United States, however, the powders and extracts are only approved as colorants in animal feed. Marigolds are recorded as a food plant for some Lepidoptera caterpillars, including the dot moth, and a nectar source for other butterflies and bumblebees. They are often part of butterfly gardening plantings. In the wild, many species are pollinated by beetles.
Cultural significance Tagetes lucida The species Tagetes lucida, known as pericón, is used to prepare a sweetish, anise-flavored medicinal tea in Mexico. It is also used as a culinary herb in many warm climates, as a substitute for tarragon, and offered in the nursery as "Texas tarragon" or "Mexican mint marigold". Tagetes minuta Tagetes minuta, native to southern South America, is a tall, upright marigold plant with small flowers used as a culinary herb in Peru, Ecuador, and parts of Chile and Bolivia, where it is called by the Incan term huacatay. The paste is used to make the popular potato dish called ocopa. Having both "green" and "yellow/orange" notes, the taste and odor of fresh T. minuta is like a mixture of sweet basil, tarragon, mint and citrus. It is also used as a medicinal tea for gastrointestinal complaints and specifically against nematodes. Tagetes erecta Tagetes erecta is widely used in Day of the Dead celebrations in Mexico. Tagetes – various species In Bangladesh, India and other South Asian countries, marigold is used for ornamentation in functions like the turmeric ceremony, weddings, Pohela Falgun and other celebrations. During the colonial period the native varieties of these flowers were replaced by American species like T. erecta, T. patula and T. tenuifolia. The marigold is also widely cultivated in India and Thailand, particularly the species T. erecta, T. patula and T. tenuifolia. It is sold in markets year-round for daily rituals. Vast quantities of marigolds are used in garlands and decoration for weddings, festivals, and religious events. Marigold cultivation is extensive in the Indian states of Telangana, Andhra Pradesh, Tamil Nadu, West Bengal, Karnataka and Uttar Pradesh (for the Vijayadashami and Diwali markets). In Ukraine, chornobryvtsi (T. erecta, T. patula and the signet marigold, T. tenuifolia) are regarded as one of the national symbols, and are often mentioned in songs, poems and tales. Species Accepted species Gallery
Biology and health sciences
Asterales
null
340602
https://en.wikipedia.org/wiki/Zinnia
Zinnia
Zinnia is a genus of plants of the tribe Heliantheae within the family Asteraceae. They are native to scrub and dry grassland in an area stretching from the Southwestern United States to South America, with a centre of diversity in Mexico. Members of the genus are notable for their solitary, long-stemmed flowers that come in a variety of bright colors. The genus name honors the German scientist Johann Gottfried Zinn (1727–1759). Description Zinnias are annuals, shrubs, and sub-shrubs native primarily to North America, with a few species in South America. Most species have upright stems but some have a lax habit with spreading stems that mound over the surface of the ground. They typically range in height from 10 to 100 cm (4 to 40 in). The leaves are opposite and usually stalkless (sessile), with a shape ranging from linear to ovate, and a color ranging from pale to medium green. Zinnias' composite flowers consist of ray florets that surround disk florets, which may be a different color than the ray florets and mature from the periphery inward. The flowers have a range of appearances, from a single row of petals to a dome shape. Zinnias may be white, chartreuse, yellow, orange, red, purple, or lilac. Cultivation Zinnias are easy to grow and can produce heavy, brightly colored blooms. Their petals can take different forms: a single row with a visible center (single-flowered zinnia), numerous rows with a center that is not visible (double-flowered zinnia), or somewhere in between, with numerous rows but visible centers (semi-double-flowered zinnia). Their flowers can also take several shapes. Zinnias are annuals usually grown in situ from seed, as they dislike being transplanted. Much like daisies, zinnias prefer to have full sunlight and adequate water. In the preferred conditions they will grow quickly but are sensitive to frost and therefore will die after the first frost of autumn. Zinnias benefit from deadheading to encourage further blooming. Species Accepted species Zinnia acerosa – Arizona, New Mexico, Texas, and Utah in the United States; Coahuila, Durango, Michoacán, Nuevo León, San Luis Potosí, Sonora, and Zacatecas in Mexico. Zinnia americana – Chiapas, Guerrero, Honduras, Jalisco, Michoacán, México State, Nayarit, Nicaragua, Oaxaca, and Veracruz. Zinnia angustifolia – Chihuahua, Durango, Jalisco, San Luis Potosí, and Sinaloa. Zinnia anomala – Texas; Coahuila, and Nuevo León. Zinnia bicolor – Chihuahua, Durango, Guanajuato, Jalisco, Nayarit, and Sinaloa. Zinnia citrea – Chihuahua, Coahuila, and San Luis Potosí. Zinnia elegans – from Jalisco to Paraguay; naturalized in parts of United States. Zinnia flavicoma – Guerrero, Jalisco, Michoacán, and Oaxaca. Zinnia grandiflora – Arizona, Colorado, Kansas, New Mexico, Oklahoma, and Texas; Chihuahua, Coahuila, Nuevo León, Sonora, and Tamaulipas. Zinnia haageana – Guanajuato, Jalisco, México State, Michoacán, and Oaxaca. Zinnia juniperifolia – Coahuila, Nuevo León, and Tamaulipas. Zinnia maritima – Colima, Guerrero, Jalisco, Nayarit, and Sinaloa. Zinnia microglossa – Guanajuato and Jalisco. Zinnia oligantha – Coahuila. Zinnia palmeri – Colima, Jalisco. Zinnia pauciflora Phil. Zinnia peruviana – widespread from Chihuahua to Paraguay including Galápagos and West Indies; naturalized in parts of China, South Africa, and the United States. Zinnia pumila A.Gray Zinnia purpusii – Chiapas, Colima, Guerrero, Jalisco, and Puebla. Zinnia tenuis – Chihuahua. Zinnia venusta – Guerrero.
Zinnia zinnioides (Kunth) Olorode & Torres Formerly included See Glossocardia and Philactis. Zinnia bidens – Glossocardia bidens Zinnia liebmannii – Philactis zinnioides Zinnia elegans, also known as Zinnia violacea, is the most familiar species, originally from the warm regions of Mexico; it is a warm–hot climate plant. Its leaves are lance-shaped and sandpapery in texture, and height ranges from 15 cm to 1 meter. Zinnia angustifolia is another Mexican species. It has a low bushy plant habit, linear foliage, and more delicate flowers than Z. elegans – usually single, and in shades of yellow, orange or white. It is also more resistant to powdery mildew than Z. elegans, and hybrids between the two species have been raised which impart this resistance to plants intermediate in appearance between the two. The 'Profusion' cultivars, with both single and double-flowered components, are among the most well-known of this hybrid group. Zinnias are favored by butterflies as well as hummingbirds, and many gardeners add zinnias specifically to attract them. Uses Zinnias are popular garden flowers because they come in a wide range of flower colors and shapes, can withstand hot summer temperatures, and are easy to grow from seeds. They bloom all summer long. They are grown in fertile, humus-rich, and well-drained soil, in an area with full sun. They will reseed themselves each year. Over 100 cultivars have been produced since selective breeding started in the 19th century. Zinnia peruviana was introduced to Europe in the early 1700s. Around 1790 Z. elegans (Zinnia violacea) was introduced. Those plants had a single row of ray florets, which were violet. In 1829, scarlet flowering plants were available under the name "Coccinea". Double flowering types were available in 1858, coming from India, and they were in a range of colors, including shades of reds, rose, purple, orange, buff, and rose striped. In time, they came to represent thinking of absent friends in the language of flowers. A number of species of zinnia are popular flowering plants, and interspecific hybrids are becoming more common. Their varied habits allow for uses in several parts of a garden, and their tendency to attract butterflies and hummingbirds is seen as desirable. Commercially available seeds and plants are derived from open pollinated or F1 crosses, and the first commercial F1 hybrid dates from 1960. Some zinnias are edible, though often reported to have a bitter taste best suited to garnish. Cultivation in microgravity Experimentation aboard the International Space Station has demonstrated the capability of zinnias to blossom in a weightless environment, an example of plants in space. Companion plants In the Americas their ability to attract hummingbirds is also seen as useful as a defense against whiteflies, and therefore zinnias are a desirable companion plant, benefiting plants that are inter-cropped with them. Gallery
Biology and health sciences
Asterales
Plants
340651
https://en.wikipedia.org/wiki/Eurasian%20plate
Eurasian plate
The Eurasian plate is a tectonic plate that includes most of Eurasia (a landmass consisting of the traditional continents of Asia and Europe), with the notable exceptions of the Arabian Peninsula, the Indian subcontinent, and the area east of the Chersky Range in eastern Siberia. It also includes oceanic crust extending westward to the Mid-Atlantic Ridge and northward to the Gakkel Ridge. Boundaries The western edge is a triple junction plate boundary with the North American plate and Nubian plate at the seismically active Azores triple junction, extending northward along the Mid-Atlantic Ridge towards Iceland. Ridges like the Mid-Atlantic Ridge form at divergent plate boundaries. They are located deep underwater and are very difficult to study; scientists know less about ocean ridges than they do about the planets of the solar system. There is another triple junction where the Eurasian plate meets the Anatolian sub-plate and the Arabian plate. The Anatolian sub-plate is currently being squeezed by the collision of the Eurasian plate with the Arabian plate in the East Anatolian Fault Zone. The boundary between the North American plate and the Eurasian plate in the area around Japan has been described as "shifty". Different maps have been drawn for it based on recent tectonics, seismicity and earthquake focal mechanisms. The simplest plate geometry draws the boundary from the Nansen Ridge through a broad zone of deformation in North Asia to the Sea of Okhotsk, then south through Sakhalin Island and Hokkaido to the triple junction in the Japan Trench. But this simple view has been successfully challenged by more recent research. During the 1970s, Japan was thought to be located on the Eurasian plate at a quadruple junction with the North American plate, when the eastern boundary of the North American plate was drawn through southern Hokkaido. New research in the 1990s supported the view that the Okhotsk microplate is independent of the North American plate and shares a boundary with the Amurian microplate, sometimes described as "a division within the Eurasian plate", with an unknown western boundary. All volcanic eruptions in Iceland, such as the 1973 eruption of Eldfell, the 1783 eruption of Laki and the 2010 eruption of Eyjafjallajökull, are caused by the North American and the Eurasian plates moving apart, which is a result of divergent plate boundary forces. The convergent boundary between the Eurasian plate and the Indian plate formed the Himalayas. The geodynamics of Central Asia is dominated by the interaction between the Eurasian plate and the Indian plate. In this area, many sub-plates or crust blocks have been recognized, which form the Central Asian and the East Asian transit zones.
Physical sciences
Tectonic plates
Earth science
340757
https://en.wikipedia.org/wiki/Internal%20energy
Internal energy
The internal energy of a thermodynamic system is the energy of the system as a state function, measured as the quantity of energy necessary to bring the system from its standard internal state to its present internal state of interest, accounting for the gains and losses of energy due to changes in its internal state, including such quantities as magnetization. It excludes the kinetic energy of motion of the system as a whole and the potential energy of position of the system as a whole, with respect to its surroundings and external force fields. It includes the thermal energy, i.e., the constituent particles' kinetic energies of motion relative to the motion of the system as a whole. The internal energy of an isolated system cannot change, as expressed in the law of conservation of energy, a foundation of the first law of thermodynamics. The notion was introduced to describe systems characterized by temperature variations, temperature being added to the set of state parameters alongside the position variables known in mechanics (and their conjugate generalized force parameters), in a similar way to the potential energy of the conservative fields of force, gravitational and electrostatic. The concept was introduced by Rudolf Clausius. Internal energy changes equal the algebraic sum of the heat transferred and the work done. In systems without temperature changes, potential energy changes equal the work done by/on the system. The internal energy cannot be measured absolutely. Thermodynamics concerns changes in the internal energy, not its absolute value. The processes that change the internal energy are transfers, into or out of the system, of substance, or of energy, as heat, or by thermodynamic work. These processes are measured by changes in the system's properties, such as temperature, entropy, volume, electric polarization, and molar constitution. The internal energy depends only on the internal state of the system and not on the particular choice from many possible processes by which energy may pass into or out of the system. It is a state variable, a thermodynamic potential, and an extensive property. Thermodynamics defines internal energy macroscopically, for the body as a whole. In statistical mechanics, the internal energy of a body can be analyzed microscopically in terms of the kinetic energies of microscopic motion of the system's particles from translations, rotations, and vibrations, and of the potential energies associated with microscopic forces, including chemical bonds. The unit of energy in the International System of Units (SI) is the joule (J). The internal energy relative to the mass with unit J/kg is the specific internal energy. The corresponding quantity relative to the amount of substance with unit J/mol is the molar internal energy. Cardinal functions The internal energy of a system depends on its entropy S, its volume V and its number of massive particles: U = U(S, V, {N_j}). It expresses the thermodynamics of a system in the energy representation. As a function of state, its arguments are exclusively extensive variables of state. Alongside the internal energy, the other cardinal function of state of a thermodynamic system is its entropy, as a function S = S(U, V, {N_j}) of the same list of extensive variables of state, except that the entropy, S, is replaced in the list by the internal energy, U. It expresses the entropy representation. Each cardinal function is a monotonic function of each of its natural or canonical variables.
Each provides its characteristic or fundamental equation, for example U = U(S, V, {N_j}), that by itself contains all thermodynamic information about the system. The fundamental equations for the two cardinal functions can in principle be interconverted by solving, for example, U = U(S, V, {N_j}) for S, to get S = S(U, V, {N_j}). In contrast, Legendre transformations are necessary to derive fundamental equations for other thermodynamic potentials and Massieu functions. The entropy as a function only of extensive state variables is the one and only cardinal function of state for the generation of Massieu functions. It is not itself customarily designated a 'Massieu function', though rationally it might be thought of as such, corresponding to the term 'thermodynamic potential', which includes the internal energy. For real and practical systems, explicit expressions of the fundamental equations are almost always unavailable, but the functional relations exist in principle. Formal, in principle, manipulations of them are valuable for the understanding of thermodynamics. Description and definition The internal energy of a given state of the system is determined relative to that of a standard state of the system, by adding up the macroscopic transfers of energy that accompany a change of state from the reference state to the given state: ΔU = Σ_i E_i, where ΔU denotes the difference between the internal energy of the given state and that of the reference state, and the E_i are the various energies transferred to the system in the steps from the reference state to the given state. It is the energy needed to create the given state of the system from the reference state. From a non-relativistic microscopic point of view, it may be divided into microscopic potential energy, U_pot, and microscopic kinetic energy, U_kin, components: U = U_pot + U_kin. The microscopic kinetic energy of a system arises as the sum of the motions of all the system's particles with respect to the center-of-mass frame, whether it be the motion of atoms, molecules, atomic nuclei, electrons, or other particles. The microscopic potential energy algebraic summative components are those of the chemical and nuclear particle bonds, and the physical force fields within the system, such as due to internal induced electric or magnetic dipole moment, as well as the energy of deformation of solids (stress-strain). Usually, the split into microscopic kinetic and potential energies is outside the scope of macroscopic thermodynamics. Internal energy does not include the energy due to motion or location of a system as a whole. That is to say, it excludes any kinetic or potential energy the body may have because of its motion or location in external gravitational, electrostatic, or electromagnetic fields. It does, however, include the contribution of such a field to the energy due to the coupling of the internal degrees of freedom of the system with the field. In such a case, the field is included in the thermodynamic description of the object in the form of an additional external parameter. For practical considerations in thermodynamics or engineering, it is rarely necessary, convenient, or even possible to consider all energies belonging to the total intrinsic energy of a sample system, such as the energy given by the equivalence of mass. Typically, descriptions only include components relevant to the system under study. Indeed, in most systems under consideration, especially through thermodynamics, it is impossible to calculate the total internal energy. Therefore, a convenient null reference point may be chosen for the internal energy.
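The bookkeeping ΔU = Σ_i E_i can be made concrete with a short sketch. The following Python snippet is purely illustrative (the function name and the sign convention, positive for energy entering the system, are assumptions for the example):

def internal_energy_change(transfers_joules):
    """Delta U as the algebraic sum of the energy transfers E_i that take
    the system from the reference state to the given state; positive values
    are energy into the system, negative values are energy out."""
    return sum(transfers_joules)

# Example: 500 J received as heat and 200 J expended as work on the
# surroundings raise the internal energy by 300 J.
print(internal_energy_change([500.0, -200.0]))  # 300.0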
The internal energy is an extensive property: it depends on the size of the system, or on the amount of substance it contains. At any temperature greater than absolute zero, microscopic potential energy and kinetic energy are constantly converted into one another, but the sum remains constant in an isolated system. In the classical picture of thermodynamics, kinetic energy vanishes at zero temperature and the internal energy is purely potential energy. However, quantum mechanics has demonstrated that even at zero temperature particles maintain a residual energy of motion, the zero point energy. A system at absolute zero is merely in its quantum-mechanical ground state, the lowest energy state available. At absolute zero a system of given composition has attained its minimum attainable entropy. The microscopic kinetic energy portion of the internal energy gives rise to the temperature of the system. Statistical mechanics relates the pseudo-random kinetic energy of individual particles to the mean kinetic energy of the entire ensemble of particles comprising a system. Furthermore, it relates the mean microscopic kinetic energy to the macroscopically observed empirical property that is expressed as temperature of the system. While temperature is an intensive measure, this energy expresses the concept as an extensive property of the system, often referred to as the thermal energy. The scaling property between temperature and thermal energy is the entropy change of the system. Statistical mechanics considers any system to be statistically distributed across an ensemble of microstates. In a system that is in thermodynamic contact equilibrium with a heat reservoir, each microstate i has an energy E_i and is associated with a probability p_i. The internal energy is the mean value of the system's total energy, i.e., the sum of all microstate energies, each weighted by its probability of occurrence: U = Σ_i p_i E_i. This is the statistical expression of the law of conservation of energy. Internal energy changes Thermodynamics is chiefly concerned with the changes in internal energy ΔU. For a closed system, with mass transfer excluded, the changes in internal energy are due to heat transfer Q and due to thermodynamic work W done by the system on its surroundings. Accordingly, the internal energy change for a process may be written ΔU = Q − W. When a closed system receives energy as heat, this energy increases the internal energy. It is distributed between microscopic kinetic and microscopic potential energies. In general, thermodynamics does not trace this distribution. In an ideal gas all of the extra energy results in a temperature increase, as it is stored solely as microscopic kinetic energy; such heating is said to be sensible. A second mechanism of change in the internal energy of a closed system is the work it does on its surroundings. Such work may be simply mechanical, as when the system expands to drive a piston, or, for example, when the system changes its electric polarization so as to drive a change in the electric field in the surroundings. If the system is not closed, the third mechanism that can increase the internal energy is transfer of substance into the system. This increase cannot be split into heat and work components.
If the system is so set up physically that heat transfer and work that it does are by pathways separate from and independent of matter transfer, then the transfers of energy add to change the internal energy: ΔU = Q − W + ΔU_matter. If a system undergoes certain phase transformations while being heated, such as melting and vaporization, it may be observed that the temperature of the system does not change until the entire sample has completed the transformation. The energy introduced into the system while the temperature does not change is called latent energy or latent heat, in contrast to sensible heat, which is associated with temperature change. Internal energy of the ideal gas Thermodynamics often uses the concept of the ideal gas for teaching purposes, and as an approximation for working systems. The ideal gas consists of particles considered as point objects that interact only by elastic collisions and fill a volume such that their mean free path between collisions is much larger than their diameter. Such systems approximate monatomic gases such as helium and other noble gases. For an ideal gas the kinetic energy consists only of the translational energy of the individual atoms. Monatomic particles do not possess rotational or vibrational degrees of freedom, and are not electronically excited to higher energies except at very high temperatures. Therefore, the internal energy of an ideal gas depends solely on its temperature (and the number of gas particles): U = U(T, N). It is not dependent on other thermodynamic quantities such as pressure or density. The internal energy of an ideal gas is proportional to its amount of substance n (number of moles) and to its temperature T: U = c_V n T, where c_V is the isochoric (at constant volume) molar heat capacity of the gas; c_V is constant for an ideal gas. The internal energy of any gas (ideal or not) may be written as a function of the three extensive properties S, V, n (entropy, volume, number of moles). In the case of the ideal gas it takes the following form: U(S, V, n) = const · e^(S/(c_V n)) · V^(−R/c_V) · n^((R + c_V)/c_V), where const is an arbitrary positive constant and R is the universal gas constant. It is easily seen that U is a linearly homogeneous function of the three variables (that is, it is extensive in these variables), and that it is weakly convex. Knowing temperature and pressure to be the derivatives T = (∂U/∂S)_{V,n} and p = −(∂U/∂V)_{S,n}, the ideal gas law pV = nRT immediately follows. Internal energy of a closed thermodynamic system The above summation of all components of change in internal energy assumes that a positive energy denotes heat added to the system or the negative of work done by the system on its surroundings. This relationship may be expressed in infinitesimal terms using the differentials of each term, though only the internal energy is an exact differential. For a closed system, with transfers only as heat and work, the change in the internal energy is dU = δQ − δW, expressing the first law of thermodynamics. It may be expressed in terms of other thermodynamic parameters. Each term is composed of an intensive variable (a generalized force) and its conjugate infinitesimal extensive variable (a generalized displacement). For example, the mechanical work done by the system may be related to the pressure p and volume change dV. The pressure is the intensive generalized force, while the volume change is the extensive generalized displacement: δW = p dV. This defines the direction of work, δW, to be energy transfer from the working system to the surroundings, indicated by a positive term.
Taking the direction of heat transfer to be into the working fluid and assuming a reversible process, the heat is δQ = T dS, where T denotes the temperature and S denotes the entropy. The change in internal energy becomes dU = T dS − p dV. Changes due to temperature and volume The expression relating changes in internal energy to changes in temperature and volume is dU = C_V dT + [T (∂p/∂T)_V − p] dV. This is useful if the equation of state is known. In the case of an ideal gas, we can derive that dU = C_V dT, i.e. the internal energy of an ideal gas can be written as a function that depends only on the temperature. To see this, start from the expression above relating changes in internal energy to changes in temperature and volume. The equation of state is the ideal gas law pV = nRT. Solve for pressure: p = nRT/V. Substitute into the internal energy expression: dU = C_V dT + [T (∂p/∂T)_V − nRT/V] dV. Take the derivative of pressure with respect to temperature: (∂p/∂T)_V = nR/V. Replace: dU = C_V dT + [nRT/V − nRT/V] dV. And simplify: dU = C_V dT. To express dU in terms of dT and dV, the term dS = (∂S/∂T)_V dT + (∂S/∂V)_T dV is substituted in the fundamental thermodynamic relation dU = T dS − p dV. This gives dU = T (∂S/∂T)_V dT + [T (∂S/∂V)_T − p] dV. The term T (∂S/∂T)_V is the heat capacity at constant volume C_V. The partial derivative of p with respect to T can be evaluated if the equation of state is known. From the fundamental thermodynamic relation, it follows that the differential of the Helmholtz free energy A = U − TS is given by dA = −S dT − p dV. The symmetry of second derivatives of A with respect to T and V yields the Maxwell relation (∂S/∂V)_T = (∂p/∂T)_V. This gives the expression above. Changes due to temperature and pressure When considering fluids or solids, an expression in terms of the temperature and pressure is usually more useful: dU = (C_p − α p V) dT + (β_T p − α T) V dP, where it is assumed that the heat capacity at constant pressure is related to the heat capacity at constant volume according to C_p = C_V + V T α²/β_T. The partial derivative of the pressure with respect to temperature at constant volume can be expressed in terms of the coefficient of thermal expansion α = (1/V)(∂V/∂T)_P and the isothermal compressibility β_T = −(1/V)(∂V/∂P)_T by writing dV = (∂V/∂T)_P dT + (∂V/∂P)_T dP = V (α dT − β_T dP), equating dV to zero, and solving for the ratio dP/dT. This gives (∂P/∂T)_V = α/β_T. Substituting these relations into the expression for dU in terms of dT and dV gives the above expression. Changes due to volume at constant temperature The internal pressure is defined as a partial derivative of the internal energy with respect to the volume at constant temperature: π_T = (∂U/∂V)_T. Internal energy of multi-component systems In addition to including the entropy and volume terms in the internal energy, a system is often described also in terms of the number of particles or chemical species it contains: U = U(S, V, N_1, …, N_n), where the N_j are the molar amounts of constituents of type j in the system. The internal energy is an extensive function of the extensive variables S, V, and the amounts N_j, so the internal energy may be written as a linearly homogeneous function of first degree: U(λS, λV, λN_1, …, λN_n) = λ U(S, V, N_1, …, N_n), where λ is a factor describing the growth of the system. The differential internal energy may be written as dU = T dS − p dV + Σ_i μ_i dN_i, which shows (or defines) temperature T to be the partial derivative of U with respect to entropy S and pressure p to be the negative of the similar derivative with respect to volume V, and where the coefficients μ_i are the chemical potentials for the components of type i in the system. The chemical potentials are defined as the partial derivatives of the internal energy with respect to the variations in composition: μ_i = (∂U/∂N_i)_{S,V,N_j≠i}. As conjugate variables to the composition N_i, the chemical potentials are intensive properties, intrinsically characteristic of the qualitative nature of the system, and not proportional to its extent.
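The vanishing of the volume term for the ideal gas, derived above, can also be checked symbolically. A minimal sketch using the sympy library (the variable names are illustrative):

import sympy as sp

T, V, n, R = sp.symbols('T V n R', positive=True)

# Ideal-gas equation of state solved for pressure: p = nRT/V.
p = n * R * T / V

# Coefficient of dV in dU = C_V dT + [T (dp/dT)_V - p] dV.
volume_term = T * sp.diff(p, T) - p

print(sp.simplify(volume_term))  # prints 0: U of an ideal gas depends on T only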
Under conditions of constant T and p, because of the extensive nature of U and its independent variables, using Euler's homogeneous function theorem, the differential dU may be integrated and yields an expression for the internal energy: U = TS − pV + Σ_i μ_i N_i. The sum over the composition of the system is the Gibbs free energy: G = Σ_i μ_i N_i, which arises from changing the composition of the system at constant temperature and pressure. For a single component system, the chemical potential equals the Gibbs energy per amount of substance, i.e. particles or moles according to the original definition of the unit for μ. Internal energy in an elastic medium For an elastic medium the potential energy component of the internal energy has an elastic nature expressed in terms of the stress σ_ij and strain ε_ij involved in elastic processes. In Einstein notation for tensors, with summation over repeated indices, for unit volume, the infinitesimal statement is dU = T dS + σ_ij dε_ij. Euler's theorem yields for the internal energy: U = TS + (1/2) σ_ij ε_ij. For a linearly elastic material, the stress is related to the strain by σ_ij = C_ijkl ε_kl, where the C_ijkl are the components of the 4th-rank elastic constant tensor of the medium. Elastic deformations, such as sound, passing through a body, or other forms of macroscopic internal agitation or turbulent motion create states when the system is not in thermodynamic equilibrium. While such energies of motion continue, they contribute to the total energy of the system; thermodynamic internal energy pertains only when such motions have ceased. History James Joule studied the relationship between heat, work, and temperature. He observed that friction in a liquid, such as caused by its agitation with work by a paddle wheel, caused an increase in its temperature, which he described as producing a quantity of heat. Expressed in modern units, he found that c. 4186 joules of energy were needed to raise the temperature of one kilogram of water by one degree Celsius.
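Joule's figure translates directly into the familiar sensible-heat formula Q = m c ΔT. A minimal illustrative sketch (the function name is ours, and the specific heat is the rounded value quoted above):

# Heat needed to warm water, using Joule's value for its specific heat.
C_WATER = 4186.0  # J/(kg * degree Celsius), the rounded figure quoted above

def heating_energy(mass_kg, delta_temp_c):
    """Return the heat Q in joules to raise mass_kg of water by delta_temp_c."""
    return mass_kg * C_WATER * delta_temp_c

# Example: warming 1.5 kg of water by 60 degrees Celsius.
print(heating_energy(1.5, 60.0))  # 376740.0 J, about 3.8e5 J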
Physical sciences
Thermodynamics
null
340982
https://en.wikipedia.org/wiki/Petrology
Petrology
Petrology is the branch of geology that studies rocks, their mineralogy, composition, texture, structure and the conditions under which they form. Petrology has three subdivisions: igneous, metamorphic, and sedimentary petrology. Igneous and metamorphic petrology are commonly taught together because both make heavy use of chemistry, chemical methods, and phase diagrams. Sedimentary petrology is commonly taught together with stratigraphy because it deals with the processes that form sedimentary rock. Modern sedimentary petrology is making increasing use of chemistry. Background Lithology was once approximately synonymous with petrography, but in current usage, lithology focuses on macroscopic hand-sample or outcrop-scale description of rocks while petrography is the speciality that deals with microscopic details. In the petroleum industry, lithology, or more specifically mud logging, is the graphic representation of geological formations being drilled through and drawn on a log called a mud log. As the cuttings are circulated out of the borehole, they are sampled, examined (typically under a 10× microscope) and tested chemically when needed. Methodology Petrology utilizes the fields of mineralogy, petrography, optical mineralogy, and chemical analysis to describe the composition and texture of rocks. Petrologists also include the principles of geochemistry and geophysics through the study of geochemical trends and cycles and the use of thermodynamic data and experiments in order to better understand the origins of rocks. Branches There are three branches of petrology, corresponding to the three types of rocks: igneous, metamorphic, and sedimentary, and another dealing with experimental techniques: Igneous petrology focuses on the composition and texture of igneous rocks (rocks such as granite or basalt which have crystallized from molten rock or magma). Igneous rocks include volcanic and plutonic rocks. Sedimentary petrology focuses on the composition and texture of sedimentary rocks (rocks such as sandstone, shale, or limestone which consist of pieces or particles derived from other rocks or biological or chemical deposits, and are usually bound together in a matrix of finer material). Metamorphic petrology focuses on the composition and texture of metamorphic rocks (rocks such as slate, marble, gneiss, or schist) which have undergone chemical, mineralogical or textural changes due to the effects of pressure, temperature, or both). The original rock, prior to change (called the protolith), may be of any sort. Experimental petrology employs high-pressure, high-temperature apparatus to investigate the geochemistry and phase relations of natural or synthetic materials at elevated pressures and temperatures. Experiments are particularly useful for investigating rocks of the lower crust and upper mantle that rarely survive the journey to the surface in pristine condition. They are also one of the prime sources of information about completely inaccessible rocks, such as those in the Earth's lower mantle and in the mantles of the other terrestrial planets and the Moon. The work of experimental petrologists has laid a foundation on which modern understanding of igneous and metamorphic processes has been built.
Physical sciences
Petrology
null
341046
https://en.wikipedia.org/wiki/Green%20algae
Green algae
The green algae (singular: green alga) are a group of chlorophyll-containing autotrophic eukaryotes consisting of the phylum Prasinodermophyta and its unnamed sister group that contains the Chlorophyta and Charophyta/Streptophyta. The land plants (Embryophytes) have emerged deep within the charophytes as a sister of the Zygnematophyceae. Since the realization that the Embryophytes emerged within the green algae, some authors are starting to include them. The completed clade that includes both green algae and embryophytes is monophyletic and is referred to as the clade Viridiplantae and as the kingdom Plantae. The green algae include unicellular and colonial flagellates, most with two flagella per cell, as well as various colonial, coccoid (spherical), and filamentous forms, and macroscopic, multicellular seaweeds. There are about 22,000 species of green algae, many of which live most of their lives as single cells, while other species form coenobia (colonies), long filaments, or highly differentiated macroscopic seaweeds. A few other organisms rely on green algae to conduct photosynthesis for them. The chloroplasts in dinoflagellates of the genus Lepidodinium, euglenids and chlorarachniophytes were acquired from ingested endosymbiont green algae, and the latter retain a nucleomorph (vestigial nucleus). Green algae are also found symbiotically in the ciliate Paramecium, and in Hydra viridissima and in flatworms. Some species of green algae, particularly of genera Trebouxia of the class Trebouxiophyceae and Trentepohlia (class Ulvophyceae), can be found in symbiotic associations with fungi to form lichens. In general the fungal species that partner in lichens cannot live on their own, while the algal species is often found living in nature without the fungus. Trentepohlia is a filamentous green alga that can live independently on humid soil, rocks or tree bark or form the photosymbiont in lichens of the family Graphidaceae. Also the macroalga Prasiola calophylla (Trebouxiophyceae) is terrestrial, and Prasiola crispa, which lives in the supralittoral zone, is terrestrial and, in the Antarctic, can form large carpets on humid soil, especially near bird colonies. Cellular structure Green algae have chloroplasts that contain chlorophyll a and b, giving them a bright green colour, as well as the accessory pigments beta carotene (red-orange) and xanthophylls (yellow) in stacked thylakoids. The cell walls of green algae usually contain cellulose, and they store carbohydrate in the form of starch. All green algae have mitochondria with flat cristae. When present, paired flagella are used to move the cell. They are anchored by a cross-shaped system of microtubules and fibrous strands. Flagella are only present in the motile male gametes of charophytes, bryophytes, pteridophytes, cycads and Ginkgo, but are absent from the gametes of Pinophyta and flowering plants. Members of the class Chlorophyceae undergo closed mitosis in the most common form of cell division among the green algae, which occurs via a phycoplast. By contrast, charophyte green algae and land plants (embryophytes) undergo open mitosis without centrioles. Instead, a 'raft' of microtubules, the phragmoplast, is formed from the mitotic spindle and cell division involves the use of this phragmoplast in the production of a cell plate.
Origins Photosynthetic eukaryotes originated following a primary endosymbiotic event, where a heterotrophic eukaryotic cell engulfed a photosynthetic cyanobacterium-like prokaryote that became stably integrated and eventually evolved into a membrane-bound organelle: the plastid. This primary endosymbiosis event gave rise to three autotrophic clades with primary plastids: the (green) plants (with chloroplasts), the red algae (with rhodoplasts), and the glaucophytes (with muroplasts). Evolution and classification Green algae are often classified with their embryophyte descendants in the green plant clade Viridiplantae (or Chlorobionta). Viridiplantae, together with red algae and glaucophyte algae, form the supergroup Primoplantae, also known as Archaeplastida or Plantae sensu lato. The ancestral green alga was a unicellular flagellate. The Viridiplantae diverged into two clades. The Chlorophyta include the early diverging prasinophyte lineages and the core Chlorophyta, which contain the majority of described species of green algae. The Streptophyta include charophytes and land plants. Below is a consensus reconstruction of green algal relationships, mainly based on molecular data. The Mesostigmatophyceae, Chlorokybophyceae and Spirotaenia are more conventionally treated as basal Streptophytes. The algae of this paraphyletic group "Charophyta" were previously included in Chlorophyta, so green algae and Chlorophyta in this definition were synonyms. As the green algae clades get further resolved, the embryophytes, which are a deep charophyte branch, are included in "algae", "green algae" and "Charophytes", or these terms are replaced by cladistic terminology such as Archaeplastida, Plantae/Viridiplantae, and streptophytes, respectively. Reproduction Green algae are a group of photosynthetic, eukaryotic organisms that include species with haplobiontic and diplobiontic life cycles. The diplobiontic species, such as Ulva, follow a reproductive cycle called alternation of generations in which two multicellular forms, haploid and diploid, alternate, and these may or may not be isomorphic (having the same morphology). In haplobiontic species only the haploid generation, the gametophyte, is multicellular. The fertilized egg cell, the diploid zygote, undergoes meiosis, giving rise to haploid cells which will become new gametophytes. The diplobiontic forms, which evolved from haplobiontic ancestors, have both a multicellular haploid generation and a multicellular diploid generation. Here the zygote divides repeatedly by mitosis and grows into a multicellular diploid sporophyte. The sporophyte produces haploid spores by meiosis that germinate to produce a multicellular gametophyte. All land plants have a diplobiontic common ancestor, and diplobiontic forms have also evolved independently within Ulvophyceae more than once (as has also occurred in the red and brown algae). Diplobiontic green algae include isomorphic and heteromorphic forms. In isomorphic algae, the morphology is identical in the haploid and diploid generations. In heteromorphic algae, the morphology and size are different in the gametophyte and sporophyte. Reproduction varies from fusion of identical cells (isogamy) to fertilization of a large non-motile cell by a smaller motile one (oogamy). However, these traits show some variation, most notably among the basal green algae called prasinophytes. Haploid algal cells (containing only one copy of their DNA) can fuse with other haploid cells to form diploid zygotes.
When filamentous algae do this, they form bridges between cells, and leave empty cell walls behind that can be easily distinguished under the light microscope. This process is called conjugation and occurs for example in Spirogyra. Sex pheromone Sex pheromone production is likely a common feature of green algae, although only studied in detail in a few model organisms. Volvox is a genus of chlorophytes. Different species form spherical colonies of up to 50,000 cells. One well-studied species, Volvox carteri (2,000 – 6,000 cells) occupies temporary pools of water that tend to dry out in the heat of late summer. As their environment dries out, asexual V. carteri quickly die. However, they are able to escape death by switching, shortly before drying is complete, to the sexual phase of their life cycle that leads to production of dormant desiccation-resistant zygotes. Sexual development is initiated by a glycoprotein pheromone (Hallmann et al., 1998). This pheromone is one of the most potent known biological effector molecules. It can trigger sexual development at concentrations as low as 10⁻¹⁶ M. Kirk and Kirk showed that sex-inducing pheromone production can be triggered experimentally in somatic cells by heat shock. Thus heat shock may be a condition that ordinarily triggers sex-inducing pheromone in nature. The Closterium peracerosum-strigosum-littorale (C. psl) complex is a unicellular, isogamous charophycean alga group that is the closest unicellular relative to land plants. Heterothallic strains of different mating type can conjugate to form zygospores. Sex pheromones termed protoplast-release inducing proteins (glycopolypeptides) produced by mating-type (-) and mating-type (+) cells facilitate this process. Physiology The green algae, including the characean algae, have served as model experimental organisms to understand the mechanisms of the ionic and water permeability of membranes, osmoregulation, turgor regulation, salt tolerance, cytoplasmic streaming, and the generation of action potentials.
Biology and health sciences
Green algae
null
341265
https://en.wikipedia.org/wiki/Jungle
Jungle
A jungle is land covered with dense forest and tangled vegetation, usually in tropical climates. Application of the term has varied greatly during the past century. Etymology The word jungle originates from the Sanskrit word jaṅgala, meaning rough and arid. It came into the English language in the 18th century via the Hindustani word for forest (Hindi/Urdu: jangal). Jāṅgala has also been variously transcribed in English as jangal, jangla, jungal, and juṅgala. It has been suggested that an Anglo-Indian interpretation led to its connotation as a dense "tangled thicket". The term is prevalent in many languages of the Indian subcontinent, and the Iranian Plateau, where it is commonly used to refer to the plant growth replacing primeval forest or to the unkempt tropical vegetation that takes over abandoned areas. Wildlife Because jungles occur on all inhabited landmasses and may incorporate numerous vegetation and land types in different climatic zones, the wildlife of jungles cannot be straightforwardly defined. Varying usage As dense and tangled vegetation One of the most common meanings of jungle is land overgrown with tangled vegetation at ground level, especially in the tropics. Typically such vegetation is sufficiently dense to hinder movement by humans, requiring that travellers cut their way through. This definition draws a distinction between rainforest and jungle, since the understorey of rainforests is typically open, with little ground-level vegetation due to a lack of sunlight, and hence relatively easy to traverse. Jungles may exist within, or at the borders of, tropical forests in areas where the woodland has been opened through natural disturbance such as hurricanes, or through human activity such as logging. The successional vegetation that springs up following such disturbance is dense and tangled and is a "typical" jungle. Jungle also typically forms along rainforest margins such as stream banks, once again due to the greater available light at ground level. Monsoon forests and mangroves are commonly referred to as jungles of this type. Having a more open canopy than rainforests, monsoon forests typically have dense understoreys with numerous lianas and shrubs making movement difficult, while the prop roots and low canopies of mangroves produce similar difficulties. As moist forest Because European explorers initially travelled through tropical forests largely by river, the dense tangled vegetation lining the stream banks gave a misleading impression that such jungle conditions existed throughout the entire forest; as a result, it was wrongly assumed that the whole forest was impenetrable jungle. This in turn appears to have given rise to the second popular usage of jungle as virtually any humid tropical forest. Jungle in this context is particularly associated with tropical rain forest, but may extend to cloud forest, temperate rainforest, and mangroves with no reference to the vegetation structure or the ease of travel. The terms "tropical forest" and "rainforest" have largely replaced "jungle" as the descriptor of humid tropical forests, a linguistic transition that has occurred since the 1970s. "Rainforest" itself did not appear in English dictionaries prior to the 1970s. The word "jungle" accounted for over 80% of the terms used to refer to tropical forests in print media prior to the 1970s; since then it has been steadily replaced by "rainforest", although "jungle" still remains in common use when referring to tropical rainforests.
As metaphor As a metaphor, jungle often refers to situations that are unruly or lawless, or where the only law is perceived to be "survival of the fittest". This reflects the view of "city people" that forests are such places. Upton Sinclair gave the title The Jungle (1906) to his famous book about the life of workers at the Chicago Stockyards, portraying the workers as being mercilessly exploited with no legal or other lawful recourse. The term "The Law of the Jungle" is also used in a similar context, drawn from Rudyard Kipling's The Jungle Book (1894)—though in the society of jungle animals portrayed in that book and obviously meant as a metaphor for human society, that phrase referred to an intricate code of laws which Kipling describes in detail, and not at all to a lawless chaos. The word "jungle" carries connotations of untamed and uncontrollable nature and isolation from civilisation, along with the emotions that evokes: threat, confusion, powerlessness, disorientation and immobilisation. The change from "jungle" to "rainforest" as the preferred term for describing tropical forests has been a response to an increasing perception of these forests as fragile and spiritual places, a viewpoint not in keeping with the darker connotations of "jungle". Cultural scholars, especially post-colonial critics, often analyse the jungle within the concept of hierarchical domination and the demands Western cultures often place on other cultures to conform to their standards of civilisation. For example: Edward Said notes that the Tarzan depicted by Johnny Weissmuller was a resident of the jungle representing the savage, untamed and wild, yet still a white master of it; and in his essay "An Image of Africa" about Heart of Darkness Nigerian novelist and theorist Chinua Achebe notes how the jungle and Africa become the source of temptation for white European characters like Marlowe and Kurtz. Former Israeli Prime Minister Ehud Barak compared Israel to "a villa in the jungle", a comparison which has often been quoted in Israeli political debates. Barak's critics on the left side of Israeli politics strongly criticised the comparison.
Physical sciences
Forests
null
341287
https://en.wikipedia.org/wiki/Smoke%20detector
Smoke detector
A smoke detector is a device that senses smoke, typically as an indicator of fire. Smoke detectors/alarms are usually housed in plastic enclosures, typically shaped like a disk, though shape and size vary. Smoke can be detected either optically (photoelectric) or by physical process (ionization). Detectors may use one or both sensing methods. Sensitive detectors can be used to detect and deter smoking in banned areas. Smoke detectors in large commercial and industrial buildings are usually connected to a central fire alarm system. Household smoke detectors, also known as smoke alarms, generally issue an audible or visual alarm from the detector itself or several detectors if there are multiple devices interconnected. Household smoke detectors range from individual battery-powered units to several interlinked units with battery backup. With interlinked units, if any unit detects smoke, alarms will trigger at all of the units, even if household power has gone out. Residential smoke alarms are usually powered by a 9-volt battery or by mains electricity. Some smoke alarms use a combination of the two, usually with a battery as a backup power source in the event of an outage. Commercial smoke detectors issue a signal to a fire alarm control panel as part of a fire alarm system. Usually, an individual commercial smoke detector unit does not issue an alarm; some, however, have built-in sounders. The risk of dying in a residential fire is cut in half in houses with working smoke detectors. The US National Fire Protection Association reports 0.53 deaths per 100 fires in homes with working smoke detectors compared to 1.18 deaths without (2009–2013). History The first automatic electric fire alarm was patented in 1890 by Francis Robbins Upton, an associate of Thomas Edison. In 1902, George Andrew Darby patented the first European electrical heat detector in Birmingham, England. In the late 1930s, Swiss physicist Walter Jaeger attempted to invent a sensor for poison gas. He expected the gas entering the sensor to bind to ionized air molecules and thereby alter an electric current in a circuit of the instrument. However, his device did not achieve its purpose as small concentrations of gas did not affect the sensor's conductivity. Frustrated, Jaeger lit a cigarette and was surprised to notice that a meter on the instrument had registered a drop in current. Unlike poison gas, the smoke particles from his cigarette were able to alter the circuit's current. Jaeger's experiment was one of the developments that paved the way for the modern smoke detector. In 1939, Swiss physicist Ernst Meili devised an ionization chamber device capable of detecting combustible gases in mines. He also invented a cold cathode tube that could amplify the small signal generated by the detection mechanism so that it was strong enough to activate an alarm. In 1951, ionization smoke detectors were first sold in the United States. In the following years, they were used only in major commercial and industrial facilities due to their large size and high cost. In 1955, simple "fire detectors" for homes were developed, which detected high temperatures. In 1963, the United States Atomic Energy Commission (USAEC) granted the first license to distribute smoke detectors that used radioactive material. In 1965, the first low-cost smoke detector for domestic use was developed by Duane D. Pearsall and Stanley Bennett Peterson.
It was an individual, replaceable, battery-powered unit that could be easily installed. The "SmokeGard 700" was beehive-shaped, fire-resistant, and made of steel. The company began mass-producing these units in 1975. Studies in the 1960s determined that smoke detectors respond to fires much faster than heat detectors. The first single-station smoke detector was invented in 1970 and was brought out the next year. It was an ionization detector powered by a single 9-volt battery, and it sold at a rate of a few hundred thousand units per year. Several developments in smoke detector technology occurred between 1971 and 1976, including the replacement of cold-cathode tubes with solid-state electronics. This greatly reduced the detectors' cost and size and made it possible to monitor battery life. The previous alarm horns which required special batteries were replaced with horns that were more energy-efficient and allowed the use of widely available batteries. These detectors could also function with smaller amounts of radioactive source material, and the sensing chamber and smoke detector enclosure were redesigned to make the operation more effective. The rechargeable batteries were often replaced by a pair of AA batteries along with a plastic shell encasing the detector. The photoelectric (optical) smoke detector was invented by Donald Steele and Robert Emmark from Electro Signal Lab and patented in 1972. In 1995, the 10-year-lithium-battery-powered smoke alarm was introduced. Design Smoke can be detected using a photoelectric sensor or an ionization process. Fire without smoke can be detected by sensing carbon dioxide. Incomplete burning can be detected by sensing carbon monoxide. Photoelectric A photoelectric, or optical smoke detector, contains a source of infrared, visible, or ultraviolet light—typically an incandescent light bulb or light-emitting diode (LED)—a lens, and a photoelectric receiver—typically a photodiode. In spot-type detectors, all of these components are arranged inside a chamber where air, which may contain smoke from a nearby fire, flows. In large open areas such as atria and auditoriums, optical beam or projected-beam smoke detectors are used instead of a chamber within the unit: a wall-mounted unit emits a beam of infrared or ultraviolet light which is either received and processed by a separate device or reflected to the receiver by a reflector. In some types, particularly optical beam types, the light emitted by the light source passes through the air being tested and reaches the photosensor. The received light intensity will be reduced due to scattering from particulates of smoke, air-borne dust, or other substances; the circuitry detects the light intensity and generates the alarm if it is below a specified threshold, potentially due to smoke. In other types, typically chamber types, the light is not directed at the sensor, which is not illuminated in the absence of particles. If the air in the chamber contains particles (smoke or dust), the light is scattered and some of it reaches the sensor, triggering the alarm. According to the National Fire Protection Association (NFPA), "photoelectric smoke detection is generally more responsive to fires that begin with a long period of smoldering".
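As a rough illustration of the chamber-type behaviour described above (scattered light reaching an otherwise dark sensor), the following Python sketch is purely illustrative; the threshold value and names are assumptions for the example, not any manufacturer's algorithm:

# Illustrative threshold logic for a scattering-type photoelectric detector.
# In clear air the photodiode sees almost no light; smoke particles scatter
# light from the source toward the sensor, raising the reading.
SCATTER_ALARM_THRESHOLD = 0.25  # arbitrary normalized level, for illustration

def scattering_alarm(photodiode_reading):
    """Return True when scattered light exceeds the alarm threshold."""
    return photodiode_reading >= SCATTER_ALARM_THRESHOLD

readings = [0.01, 0.02, 0.18, 0.31]  # readings rise as smoke density grows
print([scattering_alarm(r) for r in readings])  # [False, False, False, True]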
Studies by Texas A&M and the NFPA, cited by the City of Palo Alto, California, state: "Photoelectric alarms react slower to rapidly growing fires than ionization alarms, but laboratory and field tests have shown that photoelectric smoke alarms provide adequate warning for all types of fires and have been shown to be far less likely to be deactivated by occupants." Although photoelectric alarms are highly effective at detecting smoldering fires and do provide adequate protection from flaming fires, fire safety experts and the NFPA recommend installing what are called combination alarms, which are alarms that either detect both heat and smoke or use both the ionization and photoelectric smoke sensing methods. Some combination alarms may also include a carbon monoxide detection capability. The type and sensitivity of light source and photoelectric sensor and type of smoke chamber differ between manufacturers. Ionization An ionization smoke detector uses a radioisotope, typically americium-241, to ionize air; a difference due to smoke is detected and an alarm is generated. Ionization detectors are more sensitive to the flaming stage of fires than optical detectors, while optical detectors are more sensitive to fires in the early smouldering stage. The smoke detector has two ionization chambers, one open to the air, and a reference chamber which does not allow the entry of particles. The radioactive source emits alpha particles into both chambers, which ionizes some air molecules. There is a potential difference (voltage) between pairs of electrodes in the chambers; the electrical charge on the ions allows an electric current to flow. The currents in both chambers should be the same as they are equally affected by air pressure, temperature, and the ageing of the source. If any smoke particles enter the open chamber, some of the ions will attach to the particles and not be available to carry the current in that chamber. An electronic circuit detects that a current difference has developed between the open and sealed chambers, and sounds the alarm. The circuitry also monitors the battery used to supply or back up power. It sounds an intermittent warning when it nears exhaustion. A user-operated test button simulates an imbalance between the ionization chambers and sounds the alarm if and only if the power supply, electronics, and alarm device are functional. The current drawn by an ionization smoke detector is low enough for a small battery used as a sole or backup power supply to be able to provide power for years without the need for external wiring. Ionization smoke detectors are usually less expensive to manufacture than optical detectors. Ionization detectors may be more prone than photoelectric detectors to false alarms triggered by non-hazardous events, and are much slower to respond to typical house fires. Radiation Americium-241 is an alpha emitter with a half-life of 432.6 years. Alpha particle radiation, as opposed to beta (electron) and gamma (electromagnetic) radiation, is used for two reasons: the alpha particles can ionize enough air to make a detectable current; and they have low penetrative power, meaning they will be stopped, safely, by the air or the plastic shell of the smoke detector. During alpha decay, americium-241 also emits gamma radiation, but it is low-energy and therefore not considered a significant contributor to human exposure. The amount of elemental americium-241 in ionization smoke detectors is small enough to be exempt from the regulations applied to larger deployments.
A smoke detector contains only a tiny quantity of the radioactive element americium-241, about 0.3 μg of the isotope. This provides sufficient ion current to detect smoke while producing a very low level of radiation outside the device. Some Russian-made smoke detectors, most notably the RID-6m and IDF-1m models, contain a small amount of plutonium (18 MBq) rather than the typical americium source, in the form of reactor-grade plutonium mixed with titanium dioxide on a cylindrical alumina surface. The amount of americium-241 contained in ionizing smoke detectors does not represent a significant radiological hazard. If the americium is left in the ionization chamber of the alarm, the radiological risk is insignificant, because the chamber acts as a shield to the alpha radiation. A person would have to open the sealed chamber and ingest or inhale the americium for the dose to be comparable to natural background radiation. The radiation risk of exposure to an ionizing smoke detector operating normally is much smaller than natural background radiation. Disposal Disposal regulations and recommendations for ionization smoke detectors vary from region to region. The government of New South Wales, Australia, considers it safe to discard up to 10 ionization smoke detectors in a batch with domestic rubbish. The U.S. EPA considers ionizing smoke detectors safe to dispose of with household trash. Alternatively, smoke detectors can be returned to the manufacturer. Performance differences Photoelectric detectors and ionization detectors differ in their performance depending on the type of smoke generated by a fire. A presentation by Siemens and the Canadian Fire Alarm Association reports that the ionization detector is the best at detecting incipient-stage fires with invisibly small particles, fast-flaming fires with smaller 0.01–0.4 micron particles, and dark or black smoke, while more modern photoelectric detectors are best at detecting slow-smouldering fires with larger 0.4–10.0 micron particles, and light-coloured white/grey smoke. Photoelectric smoke detectors respond faster to a fire in its early, smoldering stage; the smoke from the smoldering stage of a fire is typically made up of large combustion particles between 0.3 and 10.0 μm. Ionization smoke detectors respond faster (typically 30–60 seconds) in the flaming stage of a fire; the smoke from the flaming stage is typically made up of microscopic combustion particles between 0.01 and 0.3 μm. Ionization detectors are also weaker in high-airflow environments.
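The particle-size bands quoted above can be summarized in a toy lookup; this is only a mnemonic for the cited figures, since real detectors respond to scattered light or ion-current changes, not to measured particle sizes:

```python
# A mnemonic for the particle-size bands cited above (toy illustration only;
# real detectors sense light scattering or ion current, not particle size).

def faster_detector(particle_size_um: float) -> str:
    if 0.01 <= particle_size_um < 0.3:
        return "ionization (flaming-stage particles)"
    if 0.3 <= particle_size_um <= 10.0:
        return "photoelectric (smoldering-stage particles)"
    return "outside the bands quoted above"

print(faster_detector(0.1))  # ionization (flaming-stage particles)
print(faster_detector(2.0))  # photoelectric (smoldering-stage particles)
```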
Some European countries, including France, and some US states and municipalities have banned the use of domestic ionization smoke alarms because of concerns that they are not reliable enough compared to other technologies; where an ionizing smoke detector has been the only detector, fires in the early stages have not always been effectively detected. In June 2006, the Australasian Fire & Emergency Service Authorities Council (AFAC), the peak representative body for all Australian and New Zealand fire departments, published an official report, 'Position on Smoke Alarms in Residential Accommodation'. Clause 3.0 states, "Ionization smoke alarms may not operate in time to alert occupants to escape from a smoldering fire." In August 2008, the International Association of Fire Fighters (IAFF) passed a resolution recommending the use of photoelectric smoke alarms, saying that changing to photoelectric alarms "will drastically reduce the loss of life among citizens and firefighters." In May 2011, the Fire Protection Association of Australia's (FPAA) official position on smoke alarms stated, "The Fire Protection Association of Australia considers that all residential buildings should be fitted with photoelectric smoke alarms..." In December 2011, the Volunteer Firefighter's Association of Australia published a World Fire Safety Foundation report, "Ionization Smoke Alarms are DEADLY", citing research outlining substantial performance differences between ionization and photoelectric technology. In November 2013, the Ohio Fire Chiefs' Association (OFCA) published a position paper supporting the use of photoelectric technology in Ohioan residences. The OFCA's position states, "In the interest of public safety and to protect the public from the deadly effects of smoke and fire, the Ohio Fire Chiefs' Association endorses the use of photoelectric smoke alarms in both new construction and when replacing old smoke alarms or purchasing new alarms." In June 2014, tests by the Northeastern Ohio Fire Prevention Association (NEOFPA) on residential smoke alarms were broadcast on ABC's Good Morning America program. The NEOFPA tests showed ionization smoke alarms failing to activate in the early, smoldering stage of a fire; the combination ionization/photoelectric alarms failed to activate for an average of over 20 minutes after the stand-alone photoelectric smoke alarms. This vindicated the June 2006 official position of the Australasian Fire & Emergency Service Authorities Council (AFAC) and the October 2008 official position of the International Association of Fire Fighters (IAFF): both the AFAC and the IAFF recommend photoelectric smoke alarms, but not combination ionization/photoelectric smoke alarms. According to fire tests conformant to EN 54, the carbon dioxide cloud from an open fire can usually be detected before the smoke particulates. Due to the varying levels of detection capability between detector types, manufacturers have designed multi-criteria devices which cross-reference the separate signals, both to rule out false alarms and to improve response times to real fires. Obscuration is a unit of measurement that has become the standard way of specifying smoke detectors' sensitivity. Obscuration is the effect smoke has in reducing light intensity, expressed in percent absorption per unit length; higher concentrations of smoke result in higher obscuration levels (a worked example is given below, after the discussion of gas-based detection). Carbon monoxide and carbon dioxide detection Carbon monoxide sensors detect potentially fatal concentrations of carbon monoxide, which may build up due to faulty ventilation where there are combustion appliances such as gas heaters and cookers, even though there is no uncontrolled fire outside the appliance. High levels of carbon dioxide (CO2) may also indicate a fire, and can be detected by a carbon dioxide sensor. Such sensors are often used to measure levels of CO2 which may be undesirable and harmful, but not indicative of a fire; this type of sensor can also be used to detect and warn of the much higher levels of CO2 generated by a fire. Some manufacturers say that detectors based on CO2 levels are the fastest fire indicators. Unlike ionization and optical detectors, they can also detect fires that do not generate smoke, such as those fueled by alcohol or gasoline, and they are not susceptible to false alarms due to particles, making them particularly suitable for use in dusty and dirty environments.
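As promised above, here is a worked example of the obscuration measure; the formula below is one standard way of converting a measured intensity ratio over a known path length into percent obscuration per metre, and the numbers are made up for illustration:

```python
# Working with the obscuration definition above: if a beam of initial
# intensity I0 arrives with intensity I after passing through d metres of
# smoke, the per-metre obscuration is the per-metre fractional light loss.

def obscuration_per_metre(i: float, i0: float, d: float) -> float:
    """Percent obscuration per metre: 100 * (1 - (I/I0) ** (1/d))."""
    return 100.0 * (1.0 - (i / i0) ** (1.0 / d))

# Example: 10% total light loss measured over a 5 m beam path.
print(f"{obscuration_per_metre(0.9, 1.0, 5.0):.2f} %/m")  # ~2.09 %/m
```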
Residential Smoke alarm systems used in a home or residential environment are typically smaller and less expensive than commercial units. The system may include one or more individual standalone units, or multiple interconnected units, and they typically generate a loud acoustic warning signal as their only action. Several detectors (whether standalone or interconnected) are normally used in the rooms of a dwelling. There are inexpensive smoke alarms that may be interconnected so that any detector triggers all alarms; they are powered by mains electricity, with disposable or rechargeable battery backup, and may be interconnected by wires or wirelessly. They are required in new installations in some jurisdictions. Several smoke detection methods are used and documented in industry specifications published by Underwriters Laboratories. Alerting methods include: audible tones, typically between 2,900 and 3,500 Hz depending on brand and model, at around 95 dB at 3 ft (loudness can vary between brands and models); spoken voice alerts; visual strobe lights with 177 candela output; emergency lights for illumination; and tactile stimulation (e.g. a bed or pillow shaker), although no standards existed as of 2008 for tactile stimulation alarm devices. Some models have a hush or temporary silence feature that allows silencing, typically by pressing a button on the housing, without removing the battery. This is especially useful in locations where false alarms can be relatively common (e.g. near a kitchen), or in situations where users might otherwise remove the battery permanently to avoid the annoyance of false alarms, preventing the alarm from detecting a fire should one break out. While current technology is very effective at detecting smoke and fire conditions, the deaf and hard-of-hearing community has raised concerns about the effectiveness of the alerting function in awakening sleeping individuals in certain high-risk groups. People in high-risk groups, such as the elderly, those with hearing loss, and those who are intoxicated, may be less likely to be woken by conventional sound-based alarms. Between 2005 and 2007, research sponsored by the United States National Fire Protection Association (NFPA) focused on understanding the cause of the higher number of deaths in such high-risk groups, although initial research into the effectiveness of the various alerting methods is sparse. Research findings suggest that a low-frequency (520 Hz) square-wave output is significantly more effective at awakening high-risk individuals, and wireless smoke and carbon monoxide detectors linked to alert mechanisms such as vibrating pillow pads for the hearing impaired, strobes, and remote warning handsets are more effective at waking people with serious hearing loss than other alarms.
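The 520 Hz square-wave signal discussed above is easy to reproduce for comparison with a conventional roughly 3 kHz tone; this standard-library-only sketch writes a plain continuous tone to a WAV file (the amplitude, duration, and file name are arbitrary choices, and no alarm cadence pattern is applied):

```python
# Generate a 3-second, 520 Hz square-wave tone as a 16-bit mono WAV file,
# using only the Python standard library.
import math
import struct
import wave

RATE, FREQ, SECONDS, AMP = 44100, 520, 3, 12000

frames = bytearray()
for n in range(RATE * SECONDS):
    # Square wave: take the sign of the corresponding sine wave.
    sample = AMP if math.sin(2 * math.pi * FREQ * n / RATE) >= 0 else -AMP
    frames += struct.pack("<h", sample)

with wave.open("tone_520hz.wav", "w") as w:
    w.setnchannels(1)    # mono
    w.setsampwidth(2)    # 16-bit samples
    w.setframerate(RATE)
    w.writeframes(bytes(frames))
```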
Batteries Batteries are used either as sole or as backup power for residential smoke detectors. Mains-operated detectors have disposable or rechargeable backup batteries; others run only on 9-volt disposable batteries. When the battery is exhausted, a battery-only smoke detector becomes inactive; most smoke detectors chirp repeatedly when the battery is low. It has been found that battery-powered smoke detectors in many houses have dead batteries; in the UK, it has been estimated that over 30% of smoke alarms have dead or removed batteries. In response, public information campaigns have been created to remind people to change smoke detector batteries regularly. In Australia, for example, a public information campaign suggests that smoke alarm batteries should be replaced on April Fools' Day every year. In regions using daylight saving time, campaigns may suggest that people change their batteries when they change their clocks or on a birthday. Some mains-powered detectors are fitted with a non-rechargeable lithium backup battery with a life of typically ten years, after which it is recommended that the detector be replaced. User-replaceable disposable 9-volt lithium batteries, which last at least twice as long as alkaline batteries, are also available for smoke detectors. The US National Fire Protection Association (NFPA) recommends that homeowners replace smoke detector batteries at least once per year, and as soon as they start chirping (a signal that the battery is low). Batteries should also be replaced when or if they fail a test, which the NFPA recommends carrying out at least once per month by pressing the "test" button on the alarm. Reliability A 2004 NIST report concluded that "Smoke alarms of either the ionization type or the photoelectric type consistently provided time for occupants to escape from most residential fires," and, "Consistent with prior findings, ionization type alarms provided somewhat better response to flaming fires than photoelectric alarms (57 to 62 seconds faster response), and photoelectric alarms provided (often) considerably faster response to smoldering fires than ionization type alarms (47 to 53 minutes faster response)." Regular cleaning can prevent false alarms caused by the build-up of dust and insects, particularly on optical-type alarms, as they are more susceptible to these factors; a vacuum cleaner can be used to clean domestic smoke detectors and remove this detrimental dust. Optical detectors are less susceptible to false alarms in locations such as near a kitchen producing cooking fumes. On the night of May 31, 2001, Bill Hackert and his daughter Christine of Rotterdam, New York, died when their house caught fire and a First Alert brand ionization smoke detector failed to sound. The cause of the fire was a frayed electrical cord behind a couch that smoldered for hours before engulfing the house in flames and smoke. The ionization smoke detector was found to be defectively designed, and in 2006 a jury in the United States District Court for the Northern District of New York decided that First Alert and its then parent company, BRK Brands, were liable for millions of dollars in damages. Installation and placement In the United States, most state and local laws regarding the required number and placement of smoke detectors are based upon standards established in NFPA 72, National Fire Alarm and Signaling Code. Laws governing the installation of smoke detectors vary depending on the locality. However, some rules and guidelines for existing homes are relatively consistent throughout the developed world. For example, Canada and Australia require a building to have a working smoke detector on every level. The United States NFPA code, cited earlier, requires smoke detectors on every habitable level and in the vicinity of all bedrooms. Habitable levels include attics that are tall enough to allow access. Many other countries have comparable requirements. In new construction, minimum requirements are typically more stringent. For example, all smoke detectors must be hooked directly to the electrical wiring, be interconnected, and have a battery backup. In addition, typically, smoke detectors are required either inside or outside every bedroom, depending on local codes.
Smoke detectors on the outside will detect fires more quickly, assuming the fire does not begin in the bedroom, but the sound of the alarm will be reduced and may not wake some people. Some areas also require smoke detectors in stairways, main hallways and garages. A dozen or more detectors may be connected via wiring or wirelessly such that if one detects smoke, the alarms will sound on all the detectors in the network, improving the likelihood that occupants will be alerted even if smoke is detected far from their location. Wired interconnection is more practical in new construction than for existing buildings. In the UK, the installation of smoke alarms in new builds must comply with British Standard BS 5839: Pt.6: 2004, which recommends that a new-build property consisting of no more than 3 floors (less than 200 square metres per floor) should be fitted with a Grade D, LD2 system. Building Regulations in England, Wales and Scotland recommend that BS 5839: Pt.6 should be followed, but as a minimum a Grade D, LD3 system should be installed. Building Regulations in Northern Ireland require a Grade D, LD2 system to be installed, with smoke alarms fitted in the escape routes and the main living room and a heat alarm in the kitchen; this standard also requires all detectors to have a mains supply and a battery backup. Commercial Commercial smoke detectors are either conventional or addressable, and are connected to security alarm or fire alarm systems controlled by fire alarm control panels (FACP). These are the most common type of detector and are usually significantly more expensive than single-station battery-operated residential smoke alarms. They are used in most commercial and industrial facilities and other places such as ships and trains, but are also part of some security alarm systems in homes. These detectors do not need to have built-in alarms, as alarm systems can be controlled by the connected FACP, which will set off relevant alarms and can also implement complex functions such as a staged evacuation. Conventional The word "conventional" is informal shorthand used to distinguish the signalling method of older interconnected systems from that of newer addressable systems. So-called "conventional detectors" are smoke detectors used in older interconnected systems and, in their way of working, resemble electrical switches. These detectors are connected in parallel to the signaling path, so that the current flow is monitored to indicate a closure of the circuit path by any connected detector when smoke or other similar environmental stimuli sufficiently influence any detector. The resulting increase in current flow (or a dead short) is interpreted and processed by the control unit as a confirmation of the presence of smoke, and a fire alarm signal is generated. In a conventional system, smoke detectors are typically wired together in each zone, and a single fire alarm control panel usually monitors several zones, which can be arranged to correspond to different areas of a building. In the event of a fire, the control panel can identify which zone or zones contain the detector or detectors in alarm, but it cannot identify which individual detector or detectors are in a state of alarm. Addressable An addressable system gives each detector an individual number, or address. Addressable systems allow the exact location of an alarm to be plotted on the FACP while allowing several detectors to be connected to the same zone.
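A toy model can make the conventional/addressable contrast concrete; the classes below are hypothetical illustrations, not a real FACP protocol:

```python
# Toy model of the contrast described above: a conventional zone only knows
# that *some* detector closed the circuit, while an addressable loop polls
# each device by address. Hypothetical classes for illustration only.

class ConventionalZone:
    def __init__(self, name, detectors):
        self.name, self.detectors = name, detectors  # detectors: list of bools (in alarm?)

    def status(self):
        # The panel sees one current value per zone: the alarm location is anonymous.
        return f"{self.name}: ALARM somewhere in zone" if any(self.detectors) else f"{self.name}: normal"

class AddressableLoop:
    def __init__(self, devices):
        self.devices = devices  # mapping: address -> bool (in alarm?)

    def status(self):
        alarms = [addr for addr, in_alarm in self.devices.items() if in_alarm]
        return f"ALARM at address(es) {alarms}" if alarms else "normal"

print(ConventionalZone("Zone 3", [False, True, False]).status())      # anonymous alarm
print(AddressableLoop({101: False, 102: True, 103: False}).status())  # pinpointed alarm
```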
In certain systems, a graphical representation of the building is provided on the screen of the FACP, showing the locations of all of the detectors in the building; in others, the address and location of the detector or detectors in alarm are simply indicated. Addressable systems are usually more expensive than conventional non-addressable systems, and offer extra options, including custom sensitivity levels for individual detectors (sometimes called Day/Night mode) and contamination monitoring from the FACP, which allows a wide range of faults in the detectors' sensing capability to be identified. Detectors usually become contaminated as a result of the build-up of atmospheric particulates circulated through them by the heating and air-conditioning systems in buildings; other causes include carpentry, sanding, painting, and smoke in the event of a fire. Panels can also be interconnected to monitor a large number of detectors in multiple buildings. This is most commonly used in hospitals, universities, resorts and other large centres or institutions. Standards EN54 European standards Fire detection products in the European Union are covered by the European Standard EN 54 Fire Detection and Fire Alarm Systems, a mandatory standard for every product delivered and installed in any country in the European Union (EU). EN 54 part 7 is the standard for smoke detectors. European standards are developed to allow the free movement of goods within the EU, and EN 54 is widely recognized around the world. The EN 54 certification of each device must be issued annually. Coverage under EN 54 is specified separately for smoke detectors (EN54-7) and heat detectors (EN54-5) in terms of the protected surface area (SA), the maximum surface coverage (Smax, in square metres), and the maximum radius (Rmax, in metres). Standard coverage is 60 square metres for a smoke detector and 20 square metres for a heat detector; the mounting height above the ground is also an important issue for correct protection. An additional (harmonised) standard, EN 14604, also exists, which tends to be the standard usually cited at the domestic point of sale. This standard expands on the EN 54 recommendations for domestic smoke alarms and specifies requirements, test methods, performance criteria, and manufacturer's instructions. It also includes additional requirements for smoke alarms which are suitable for use in leisure accommodation vehicles. However, much of EN 14604 is voluntary. A study published in 2014 assessed six areas of compliance and found that 33% of devices claiming to meet this standard did not do so in one or more of the specifics. The study also found 19% of the products to have a problem with actual fire detection. Australia and United States In the United States, the first standard for home smoke alarms was established in 1967. In 1969, the U.S. Atomic Energy Commission (USAEC) allowed homeowners to use smoke detectors without a license. The Life Safety Code (NFPA 101), passed by the US National Fire Protection Association (NFPA) in 1976, first required smoke alarms in homes. Smoke alarm sensitivity requirements in UL 217 were modified in 1985 to reduce susceptibility to nuisance alarms. In 1988, the BOCA, ICBO, and SBCCI model building codes began requiring smoke alarms to be interconnected and located in all sleeping rooms.
In 1989, NFPA 74 first required smoke alarms to be interconnected in every new home construction, and in 1993, NFPA 72 first required that smoke alarms be installed in all bedrooms. The NFPA began requiring the replacement of smoke detectors after ten years in 1999. In 1999, Underwriters Laboratories (UL) changed smoke alarm labeling requirements so that all smoke alarms must have a manufactured date written in plain English. In June 2013, a World Fire Safety Foundation report titled 'Can Australian and U.S. Smoke Alarm Standards be Trusted?' was published in the official magazine of the Australian Volunteer Firefighter Association. The report brings into question the validity of testing criteria used by American and Australian government agencies when undertaking scientific testing of ionization smoke alarms. Legislation In June 2010, the City of Albany, California, enacted photoelectric-only legislation after a unanimous decision by the Albany City Council; several other Californian and Ohioan cities enacted similar legislation shortly afterwards. In November 2011, the Northern Territory enacted Australia's first residential photoelectric legislation, mandating the use of photoelectric smoke alarms in all new Northern Territory homes. From January 1, 2017, the Australian state of Queensland mandated that all smoke alarms in new dwellings (or where a dwelling is substantially renovated) must be photoelectric and must not also contain an ionization sensor. They were also required to be hardwired to the mains power supply with a secondary power source (i.e. battery) and interconnected with every other smoke alarm in the dwelling, so that all would be activated together. From that date, all replacement smoke alarms must be photoelectric; from January 1, 2022, all dwellings sold, leased, or where a lease is renewed must comply as for new dwellings; and from January 1, 2027, all dwellings must comply as for new dwellings. In June 2013, in an Australian Parliamentary speech, the question was asked, "Are ionization smoke alarms defective?" The question followed data from the Australian Government's scientific testing agency, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), revealing serious performance problems with ionization technology in the early, smoldering stage of a fire, a rise in litigation involving ionization smoke alarms, and increasing legislation mandating the installation of photoelectric smoke alarms. The speech cited a May 2013 World Fire Safety Foundation report, published in the Australian Volunteer Firefighter Association's magazine, titled 'Can Australian and U.S. Smoke Alarm Standards be Trusted?', and concluded with a request for one of the world's largest ionization smoke alarm manufacturers and the CSIRO to disclose the level of visible smoke required to trigger the manufacturer's ionization smoke alarms under CSIRO scientific testing. The US state of California has banned the sale of smoke detectors with replaceable batteries. Privacy concerns regarding smart smoke detectors Smart smoke detectors, like other Internet of things devices, can collect and transmit a significant amount of data. This can include data about when and where the device is used, the frequency of alarms, and even audio and video data if the device includes a microphone or camera. This data can potentially reveal sensitive information about a user's habits, routines, and lifestyle. Since smart smoke detectors are connected to the internet, they are vulnerable to hacking.
An unauthorized person could potentially access the device and the data it collects. In extreme cases, if the device includes a camera or microphone, a hacker could use it to spy on the home's inhabitants. Many smart device manufacturers share user data with third parties, often for advertising or data analysis purposes. This can be a significant privacy concern if the data includes sensitive or personally identifiable information. Some manufacturers may also cooperate with law enforcement agencies, potentially providing them with access to users' data without their knowledge or consent. Many users have taken steps to protect their privacy when using smart smoke detectors. This can include using strong, unique passwords for their devices, disabling unnecessary features, and regularly updating device software to protect against security vulnerabilities. Some users may also choose to use traditional smoke detectors that do not connect to the internet, to completely avoid these privacy concerns.
Technology
Fire protection
null
341442
https://en.wikipedia.org/wiki/Cantor%27s%20theorem
Cantor's theorem
In mathematical set theory, Cantor's theorem is a fundamental result which states that, for any set A, the set of all subsets of A, known as the power set of A and written P(A), has a strictly greater cardinality than A itself. For finite sets, Cantor's theorem can be seen to be true by simple enumeration of the number of subsets. Counting the empty set as a subset, a set with n elements has a total of 2^n subsets, and the theorem holds because 2^n > n for all non-negative integers n. Much more significant is Cantor's discovery of an argument that is applicable to any set, and shows that the theorem holds for infinite sets also. As a consequence, the cardinality of the real numbers, which is the same as that of the power set of the integers, is strictly larger than the cardinality of the integers; see Cardinality of the continuum for details. The theorem is named for Georg Cantor, who first stated and proved it at the end of the 19th century. Cantor's theorem had immediate and important consequences for the philosophy of mathematics. For instance, by iteratively taking the power set of an infinite set and applying Cantor's theorem, we obtain an endless hierarchy of infinite cardinals, each strictly larger than the one before it. Consequently, the theorem implies that there is no largest cardinal number (colloquially, "there's no largest infinity"). Proof Cantor's argument is elegant and remarkably simple. The complete proof is presented below, with detailed explanations to follow. By definition of cardinality, we have card(X) < card(Y) for any two sets X and Y if and only if there is an injective function but no bijective function from X to Y. It suffices to show that there is no surjection from A to P(A). This is the heart of Cantor's theorem: there is no surjective function from any set A to its power set. To establish this, it is enough to show that no function f (that maps elements in A to subsets of A) can reach every possible subset, i.e., we just need to demonstrate the existence of a subset of A that is not equal to f(x) for any x ∈ A. Recalling that each f(x) is a subset of A, such a subset is given by the following construction, sometimes called the Cantor diagonal set of f: B = {x ∈ A : x ∉ f(x)}. This means, by definition, that for all x ∈ A, x ∈ B if and only if x ∉ f(x). For all x the sets B and f(x) cannot be equal, because B was constructed from elements of A whose images under f did not include themselves. For all x ∈ A, either x ∈ f(x) or x ∉ f(x). If x ∈ f(x), then f(x) cannot equal B, because x ∈ f(x) by assumption and x ∉ B by definition. If x ∉ f(x), then f(x) cannot equal B, because x ∉ f(x) by assumption and x ∈ B by the definition of B. Equivalently, and slightly more formally, we have just proved that the existence of ξ ∈ A such that f(ξ) = B implies the following contradiction: ξ ∈ B ⟺ ξ ∉ f(ξ) (by the definition of B), while ξ ∈ B ⟺ ξ ∈ f(ξ) (since f(ξ) = B); hence ξ ∈ B ⟺ ξ ∉ B. Therefore, by reductio ad absurdum, the assumption must be false. Thus there is no ξ such that f(ξ) = B; in other words, B is not in the image of f, and f does not map onto every element of the power set of A, i.e., f is not surjective. Finally, to complete the proof, we need to exhibit an injective function from A to its power set. Finding such a function is trivial: just map x to the singleton set {x}. The argument is now complete, and we have established the strict inequality for any set A that card(A) < card(P(A)). Another way to think of the proof is that B, empty or non-empty, is always in the power set of A. For f to be onto, some element of A must map to B. But that leads to a contradiction: no element of B can map to B, because that would contradict the criterion of membership in B; thus the element mapping to B must not be an element of B, meaning that it satisfies the criterion for membership in B, another contradiction. So the assumption that an element of A maps to B must be false; and f cannot be onto.
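For a small finite set the diagonal construction can even be checked exhaustively; the following sketch verifies, by brute force, that every function from a three-element set to its power set misses its diagonal set:

```python
# Exhaustive finite check of the diagonal construction above: for A = {0, 1, 2},
# every one of the 8^3 = 512 functions f : A -> P(A) misses its diagonal set
# B = {x in A : x not in f(x)}, so no such f is surjective.
from itertools import combinations, product

A = [0, 1, 2]
subsets = [frozenset(c) for r in range(len(A) + 1) for c in combinations(A, r)]

for images in product(subsets, repeat=len(A)):    # all 512 possible functions f
    f = dict(zip(A, images))
    B = frozenset(x for x in A if x not in f[x])  # the Cantor diagonal set of f
    assert B not in f.values()                    # B is never in the image of f

print("all 512 functions miss their diagonal set")
```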
Because of the double occurrence of x in the expression "x ∉ f(x)", this is a diagonal argument. For a countable (or finite) set, the argument of the proof given above can be illustrated by constructing a table in which each row is labelled by a unique x from A = {x1, x2, ...}, in this order; A is assumed to admit a linear order so that such a table can be constructed. Each column of the table is labelled by a unique subset of A; the columns are ordered by the argument to f, i.e. the column labels are f(x1), f(x2), ..., in this order. The intersection of each row x and each column f(y) records a true/false bit: whether x ∈ f(y). Given the order chosen for the row and column labels, the main diagonal of this table thus records whether x ∈ f(x) for each x ∈ A. The set B constructed in the previous paragraphs coincides with the row labels for the subset of entries on this main diagonal where the table records that x ∈ f(x) is false. Each row records the values of the indicator function of the set corresponding to the column. The indicator function of B coincides with the logically negated (swap "true" and "false") entries of the main diagonal. Thus the indicator function of B does not agree with any column in at least one entry. Consequently, no column represents B. Despite the simplicity of the above proof, it is rather difficult for an automated theorem prover to produce it. The main difficulty lies in an automated discovery of the Cantor diagonal set. Lawrence Paulson noted in 1992 that Otter could not do it, whereas Isabelle could, albeit with a certain amount of direction in terms of tactics that might perhaps be considered cheating. When A is countably infinite Let us examine the proof for the specific case when A is countably infinite. Without loss of generality, we may take A = N, the set of natural numbers. Suppose that N is equinumerous with its power set P(N). Besides all the finite subsets of N, P(N) contains infinite subsets as well, e.g. the set of all positive even numbers {2, 4, 6, ...}, along with the empty set. Now that we have an idea of what the elements of P(N) are, let us attempt to pair off each element of N with each element of P(N) to show that these infinite sets are equinumerous. In other words, we will attempt to pair off each element of N with an element from the infinite set P(N), so that no element from either infinite set remains unpaired. Such an attempt to pair elements might, for instance, begin like this: 1 ↔ {4, 5}, 2 ↔ {1, 2, 3}, 3 ↔ {4, 5, 6}, 4 ↔ {1, 3, 5}, and so on. Given such a pairing, some natural numbers are paired with subsets that contain the very same number. For instance, in our example the number 2 is paired with the subset {1, 2, 3}, which contains 2 as a member. Let us call such numbers selfish. Other natural numbers are paired with subsets that do not contain them. For instance, in our example the number 1 is paired with the subset {4, 5}, which does not contain the number 1. Call these numbers non-selfish. Likewise, 3 and 4 are non-selfish. Using this idea, let us build a special set of natural numbers. This set will provide the contradiction we seek. Let D be the set of all non-selfish natural numbers. By definition, the power set P(N) contains all sets of natural numbers, and so it contains this set D as an element. If the mapping is bijective, D must be paired off with some natural number, say d. However, this causes a problem. If d is in D, then d is selfish because it is in the corresponding set, which contradicts the definition of D. If d is not in D, then it is non-selfish and it should instead be a member of D. Therefore, no such element d which maps to D can exist.
Since there is no natural number which can be paired with D, we have contradicted our original supposition, that there is a bijection between N and P(N). Note that the set D may be empty. This would mean that every natural number x maps to a subset of natural numbers that contains x. Then, every number maps to a nonempty set and no number maps to the empty set. But the empty set is a member of P(N), so the mapping still does not cover P(N). Through this proof by contradiction we have proven that the cardinality of N and P(N) cannot be equal. We also know that the cardinality of P(N) cannot be less than the cardinality of N, because P(N) contains all singletons, by definition, and these singletons form a "copy" of N inside of P(N). Therefore, only one possibility remains, and that is that the cardinality of P(N) is strictly greater than the cardinality of N, proving Cantor's theorem. Related paradoxes Cantor's theorem and its proof are closely related to two paradoxes of set theory. Cantor's paradox is the name given to a contradiction following from Cantor's theorem together with the assumption that there is a set containing all sets, the universal set V. In order to distinguish this paradox from the next one discussed below, it is important to note what this contradiction is. By Cantor's theorem, card(P(X)) > card(X) for any set X. On the other hand, all elements of P(V) are sets, and thus contained in V, therefore card(P(V)) ≤ card(V). Another paradox can be derived from the proof of Cantor's theorem by instantiating the function f with the identity function; this turns Cantor's diagonal set into what is sometimes called the Russell set of a given set A: R_A = {x ∈ A : x ∉ x}. The proof of Cantor's theorem is straightforwardly adapted to show that, assuming a set of all sets U exists, considering its Russell set R_U leads to the contradiction R_U ∈ R_U ⟺ R_U ∉ R_U. This argument is known as Russell's paradox. As a point of subtlety, the version of Russell's paradox we have presented here is actually a theorem of Zermelo; we can conclude from the contradiction obtained that we must reject the hypothesis that R_U ∈ U, thus disproving the existence of a set containing all sets. This was possible because we have used restricted comprehension (as featured in ZFC) in the definition of R_A above, which in turn entailed that R_U ∈ R_U ⟺ (R_U ∈ U ∧ R_U ∉ R_U). Had we used unrestricted comprehension (as in Frege's system, for instance) by defining the Russell set simply as R = {x : x ∉ x}, then the axiom system itself would have entailed the contradiction, with no further hypotheses needed. Despite the syntactical similarities between the Russell set (in either variant) and the Cantor diagonal set, Alonzo Church emphasized that Russell's paradox is independent of considerations of cardinality and its underlying notions like one-to-one correspondence. History Cantor gave essentially this proof in a paper published in 1891, "Über eine elementare Frage der Mannigfaltigkeitslehre", where the diagonal argument for the uncountability of the reals also first appears (he had earlier proved the uncountability of the reals by other methods). The version of this argument he gave in that paper was phrased in terms of indicator functions on a set rather than subsets of a set. He showed that if f is a function defined on X whose values are 2-valued functions on X, then the 2-valued function G(x) = 1 − f(x)(x) is not in the range of f. Bertrand Russell has a very similar proof in Principles of Mathematics (1903, section 348), where he shows that there are more propositional functions than objects.
"For suppose a correlation of all objects and some propositional functions to have been affected, and let phi-x be the correlate of x. Then "not-phi-x(x)," i.e. "phi-x does not hold of x" is a propositional function not contained in this correlation; for it is true or false of x according as phi-x is false or true of x, and therefore it differs from phi-x for every value of x." He attributes the idea behind the proof to Cantor. Ernst Zermelo has a theorem (which he calls "Cantor's Theorem") that is identical to the form above in the paper that became the foundation of modern set theory ("Untersuchungen über die Grundlagen der Mengenlehre I"), published in 1908. See Zermelo set theory. Generalizations Lawvere's fixed-point theorem provides for a broad generalization of Cantor's theorem to any category with finite products in the following way: let be such a category, and let be a terminal object in . Suppose that is an object in and that there exists an endomorphism that does not have any fixed points; that is, there is no morphism that satisfies . Then there is no object of such that a morphism can parameterize all morphisms . In other words, for every object and every morphism , an attempt to write maps as maps of the form must leave out at least one map .
Mathematics
Discrete mathematics
null
341482
https://en.wikipedia.org/wiki/L%C3%B6wenheim%E2%80%93Skolem%20theorem
Löwenheim–Skolem theorem
In mathematical logic, the Löwenheim–Skolem theorem is a theorem on the existence and cardinality of models, named after Leopold Löwenheim and Thoralf Skolem. The precise formulation is given below. It implies that if a countable first-order theory has an infinite model, then for every infinite cardinal number κ it has a model of size κ, and that no first-order theory with an infinite model can have a unique model up to isomorphism. As a consequence, first-order theories are unable to control the cardinality of their infinite models. The (downward) Löwenheim–Skolem theorem is one of the two key properties, along with the compactness theorem, that are used in Lindström's theorem to characterize first-order logic. In general, the Löwenheim–Skolem theorem does not hold in stronger logics such as second-order logic. Theorem In its general form, the Löwenheim–Skolem Theorem states that for every signature σ, every infinite σ-structure M and every infinite cardinal number κ ≥ |σ|, there is a σ-structure N such that |N| = κ and such that if κ < |M| then N is an elementary substructure of M; if κ > |M| then N is an elementary extension of M. The theorem is often divided into two parts corresponding to the two cases above. The part of the theorem asserting that a structure has elementary substructures of all smaller infinite cardinalities is known as the downward Löwenheim–Skolem Theorem. The part of the theorem asserting that a structure has elementary extensions of all larger cardinalities is known as the upward Löwenheim–Skolem Theorem. Discussion Below we elaborate on the general concept of signatures and structures. Concepts Signatures A signature consists of a set of function symbols Sfunc, a set of relation symbols Srel, and a function ar : Sfunc ∪ Srel → ℕ0 representing the arity of function and relation symbols. (A nullary function symbol is called a constant symbol.) In the context of first-order logic, a signature is sometimes called a language. It is called countable if the set of function and relation symbols in it is countable, and in general the cardinality of a signature is the cardinality of the set of all the symbols it contains. A first-order theory consists of a fixed signature and a fixed set of sentences (formulas with no free variables) in that signature. Theories are often specified by giving a list of axioms that generate the theory, or by giving a structure and taking the theory to consist of the sentences satisfied by the structure. Structures / Models Given a signature σ, a σ-structure M is a concrete interpretation of the symbols in σ. It consists of an underlying set (often also denoted by "M") together with an interpretation of the function and relation symbols of σ. An interpretation of a constant symbol of σ in M is simply an element of M. More generally, an interpretation of an n-ary function symbol f is a function from M^n to M. Similarly, an interpretation of a relation symbol R is an n-ary relation on M, i.e. a subset of M^n. A substructure of a σ-structure M is obtained by taking a subset N of M which is closed under the interpretations of all the function symbols in σ (hence includes the interpretations of all constant symbols in σ), and then restricting the interpretations of the relation symbols to N. An elementary substructure is a very special case of this; in particular, an elementary substructure satisfies exactly the same first-order sentences as the original structure, which is then called an elementary extension of it.
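A concrete instance of these definitions may help; the following example is our choice, not part of the theorem statement:

```latex
% A concrete example (LaTeX fragment; assumes amsmath/amssymb): the signature
% of ordered rings, and its standard structure on the real numbers.
\[
\sigma_{\mathrm{or}} \;=\; \{\, +,\ \times \ (\text{binary function symbols}),\quad
0,\ 1 \ (\text{constant symbols}),\quad
< \ (\text{binary relation symbol}) \,\},
\]
\[
\mathcal{M} \;=\; \bigl(\mathbb{R};\ +,\ \times,\ 0,\ 1,\ <\bigr).
\]
% The substructure of M generated by the constant symbols is (a copy of) the
% natural numbers, since they are closed under + and x and contain 0 and 1.
% It is a substructure but not an elementary substructure: the sentence
%   \exists x\,(x \times x = 1 + 1)
% holds in the reals but fails in the naturals.
```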
Consequences The statement given in the introduction follows immediately by taking M to be an infinite model of the theory. The proof of the upward part of the theorem also shows that a theory with arbitrarily large finite models must have an infinite model; sometimes this is considered to be part of the theorem. A theory is called categorical if it has only one model, up to isomorphism. This term was introduced by Oswald Veblen in 1904, and for some time thereafter mathematicians hoped they could put mathematics on a solid foundation by describing a categorical first-order theory of some version of set theory. The Löwenheim–Skolem theorem dealt a first blow to this hope, as it implies that a first-order theory which has an infinite model cannot be categorical. Later, in 1931, the hope was shattered completely by Gödel's incompleteness theorem. Many consequences of the Löwenheim–Skolem theorem seemed counterintuitive to logicians in the early 20th century, as the distinction between first-order and non-first-order properties was not yet understood. One such consequence is the existence of uncountable models of true arithmetic, which satisfy every first-order induction axiom but have non-inductive subsets. Let N denote the natural numbers and R the reals. It follows from the theorem that the theory of (N, +, ×, 0, 1) (the theory of true first-order arithmetic) has uncountable models, and that the theory of (R, +, ×, 0, 1) (the theory of real closed fields) has a countable model. There are, of course, axiomatizations characterizing (N, +, ×, 0, 1) and (R, +, ×, 0, 1) up to isomorphism. The Löwenheim–Skolem theorem shows that these axiomatizations cannot be first-order. For example, in the theory of the real numbers, the completeness of a linear order, used to characterize R as a complete ordered field, is a non-first-order property. Another consequence that was considered particularly troubling is the existence of a countable model of set theory, which nevertheless must satisfy the sentence saying the real numbers are uncountable. Cantor's theorem states that some sets are uncountable. This counterintuitive situation came to be known as Skolem's paradox; it shows that the notion of countability is not absolute. Proof sketch Downward part For each first-order σ-formula φ(y, x1, …, xn), the axiom of choice implies the existence of a function f_φ : M^n → M such that, for all a1, …, an ∈ M, either M ⊨ φ(f_φ(a1, …, an), a1, …, an) or M ⊨ ¬∃y φ(y, a1, …, an). Applying the axiom of choice again we get a function from the first-order formulas to such functions f_φ. The family of functions f_φ gives rise to a preclosure operator F on the power set of M: F(A) = {f_φ(a1, …, an) : φ a first-order σ-formula; a1, …, an ∈ A} for A ⊆ M. Iterating F countably many times results in a closure operator F^ω. Taking an arbitrary subset A ⊆ M such that |A| = κ, and having defined N = F^ω(A), one can see that |N| = κ also. Then N is an elementary substructure of M by the Tarski–Vaught test. The trick used in this proof is essentially due to Skolem, who introduced function symbols for the Skolem functions f_φ into the language. One could also define the f_φ as partial functions such that f_φ(a1, …, an) is defined if and only if M ⊨ ∃y φ(y, a1, …, an). The only important point is that F is a preclosure operator such that F(A) contains a solution for every formula with parameters in A which has a solution in M, and that |F(A)| ≤ |A| + |σ| + ℵ0.
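The downward construction can be restated compactly as follows; the notation is ours, chosen to match the sketch above:

```latex
% The downward construction restated (LaTeX fragment; assumes amsmath).
% F applies every Skolem function once; iterating omega times closes off.
\[
F(A) \;=\; A \,\cup\, \bigl\{\, f_\varphi(a_1,\dots,a_n) \;:\; \varphi
\text{ a first-order } \sigma\text{-formula},\ a_1,\dots,a_n \in A \,\bigr\},
\qquad
N \;=\; F^{\omega}(A) \;=\; \bigcup_{k<\omega} F^{k}(A).
\]
% Cardinality bookkeeping: there are only |\sigma| + \aleph_0 formulas, so each
% step satisfies |F(A)| \le |A| + |\sigma| + \aleph_0, and a countable increasing
% union preserves this bound; hence |N| = \kappa whenever
% |A| = \kappa \ge |\sigma| + \aleph_0.
```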
Upward part First, one extends the signature by adding a new constant symbol for every element of M. The complete theory of M for the extended signature is called the elementary diagram of M. In the next step one adds κ many new constant symbols to the signature and adds to the elementary diagram of M the sentences c ≠ c′ for any two distinct new constant symbols c and c′. Using the compactness theorem, the resulting theory is easily seen to be consistent. Since its models must have cardinality at least κ, the downward part of this theorem guarantees the existence of a model which has cardinality exactly κ. It contains an isomorphic copy of M as an elementary substructure. In other logics Although the (classical) Löwenheim–Skolem theorem is tied very closely to first-order logic, variants hold for other logics. For example, every consistent theory in second-order logic has a model smaller than the first supercompact cardinal (assuming one exists). The minimum size at which a (downward) Löwenheim–Skolem–type theorem applies in a logic is known as the Löwenheim number, and can be used to characterize that logic's strength. Moreover, if we go beyond first-order logic, we must give up one of three things: countable compactness, the downward Löwenheim–Skolem Theorem, or the properties of an abstract logic. Historical notes This account is based mainly on . To understand the early history of model theory one must distinguish between syntactical consistency (no contradiction can be derived using the deduction rules for first-order logic) and satisfiability (there is a model). Somewhat surprisingly, even before the completeness theorem made the distinction unnecessary, the term consistent was used sometimes in one sense and sometimes in the other. The first significant result in what later became model theory was Löwenheim's theorem in Leopold Löwenheim's publication "Über Möglichkeiten im Relativkalkül" (1915): For every countable signature σ, every σ-sentence that is satisfiable is satisfiable in a countable model. Löwenheim's paper was actually concerned with the more general Peirce–Schröder calculus of relatives (relation algebra with quantifiers). He also used the now antiquated notations of Ernst Schröder. For a summary of the paper in English and using modern notations see . According to the received historical view, Löwenheim's proof was faulty because it implicitly used Kőnig's lemma without proving it, although the lemma was not yet a published result at the time. In a revisionist account, Calixto Badesa considers that Löwenheim's proof was complete. Skolem (1920) gave a (correct) proof using formulas in what would later be called Skolem normal form and relying on the axiom of choice: Every countable theory which is satisfiable in a model M is satisfiable in a countable substructure of M. Skolem (1922) also proved the following weaker version without the axiom of choice: Every countable theory which is satisfiable in a model is also satisfiable in a countable model. Skolem later simplified this proof. Finally, Anatoly Ivanovich Maltsev (Анато́лий Ива́нович Ма́льцев, 1936) proved the Löwenheim–Skolem theorem in its full generality. He cited a note by Skolem, according to which the theorem had been proved by Alfred Tarski in a seminar in 1928. Therefore, the general theorem is sometimes known as the Löwenheim–Skolem–Tarski theorem. But Tarski did not remember his proof, and it remains a mystery how he could do it without the compactness theorem. It is somewhat ironic that Skolem's name is connected with the upward direction of the theorem as well as with the downward direction: "I follow custom in calling Corollary 6.1.4 the upward Löwenheim-Skolem theorem. But in fact Skolem didn't even believe it, because he didn't believe in the existence of uncountable sets." – . "Skolem [...] rejected the result as meaningless; Tarski [...]
very reasonably responded that Skolem's formalist viewpoint ought to reckon the downward Löwenheim-Skolem theorem meaningless just like the upward." – . "Legend has it that Thoralf Skolem, up until the end of his life, was scandalized by the association of his name to a result of this type, which he considered an absurdity, nondenumerable sets being, for him, fictions without real existence." – .
Mathematics
Model theory
null
341566
https://en.wikipedia.org/wiki/Halite
Halite
Halite ( ), commonly known as rock salt, is a type of salt, the mineral (natural) form of sodium chloride (NaCl). Halite forms isometric crystals. The mineral is typically colorless or white, but may also be light blue, dark blue, purple, pink, red, orange, yellow or gray depending on inclusion of other materials, impurities, and structural or isotopic abnormalities in the crystals. It commonly occurs with other evaporite deposit minerals such as several of the sulfates, halides, and borates. The name halite is derived from the Ancient Greek word for "salt", ἅλς (háls). Occurrence Halite dominantly occurs within sedimentary rocks where it has formed from the evaporation of seawater or salty lake water. Vast beds of sedimentary evaporite minerals, including halite, can result from the drying up of enclosed lakes and restricted seas. Such salt beds may be hundreds of meters thick and underlie broad areas. Halite occurs at the surface today in playas in regions where evaporation exceeds precipitation such as in the salt flats of Badwater Basin in Death Valley National Park. In the United States and Canada, extensive underground beds extend from the Appalachian Basin of western New York through parts of Ontario and under much of the Michigan Basin. Other deposits are in Ohio, Kansas, New Mexico, Nova Scotia and Saskatchewan. Deposits can also be found near Dasol, Pangasinan, Philippines. The Khewra salt mine is a massive deposit of halite near Islamabad, Pakistan. Salt domes are vertical diapirs or pipe-like masses of salt that have been essentially "squeezed up" from underlying salt beds by mobilization due to the weight of the overlying rock. Salt domes contain anhydrite, gypsum, and native sulfur, in addition to halite and sylvite. They are common along the Gulf coasts of Texas and Louisiana and are often associated with petroleum deposits. Germany, Spain, the Netherlands, Denmark, Romania and Iran also have salt domes. Salt glaciers exist in arid Iran where the salt has broken through the surface at high elevation and flows downhill. In these cases, halite is said to be behaving like a rheid. Unusual, purple, fibrous vein-filling halite is found in France and a few other localities. Halite crystals termed hopper crystals appear to be "skeletons" of the typical cubes, with the edges present and stairstep depressions on, or rather in, each crystal face. In a rapidly crystallizing environment, the edges of the cubes simply grow faster than the centers. Halite crystals form very quickly in some rapidly evaporating lakes resulting in modern artifacts with a coating or encrustation of halite crystals. Halite flowers are rare stalactites of curling fibers of halite that are found in certain arid caves of Australia's Nullarbor Plain. Halite stalactites and encrustations are also reported in the Quincy native copper mine of Hancock, Michigan. Mining The world's largest underground salt mine is the Sifto Salt Mine. It produces over 7 million tons of rock salt per year using the room and pillar mining method. It is located half a kilometre under Lake Huron in Ontario, Canada. In the United Kingdom there are three mines; the largest of these is at Winsford in Cheshire, producing, on average, one million tonnes of salt per year. Uses Salt is used extensively in cooking as a flavor enhancer, and to cure a wide variety of foods such as bacon and fish. It is frequently used in food preservation methods across various cultures. 
Larger pieces can be ground in a salt mill or dusted over food from a shaker as finishing salt. Halite is also often used both residentially and municipally for managing ice. Because brine (a solution of water and salt) has a lower freezing point than pure water, putting salt or saltwater on ice that is below 0 °C (32 °F) will cause it to melt; this effect is called freezing-point depression (a worked example appears at the end of this section). It is common for homeowners in cold climates to spread salt on their sidewalks and driveways after a snow storm to melt the ice. It is not necessary to use so much salt that the ice is completely melted; rather, a small amount of salt will weaken the ice so that it can be easily removed by other means. Also, many cities will spread a mixture of sand and salt on roads during and after a snowstorm to improve traction. Using salt brine is more effective than spreading dry salt, because moisture is necessary for the freezing-point depression to work and wet salt sticks to the roads better; otherwise the salt can be wiped away by traffic. In addition to de-icing, rock salt is occasionally used in agriculture. One example is inducing salt stress to suppress the growth of annual meadow grass in turf production. Other examples involve exposing weeds to salt water to dehydrate and kill them, preventing them from affecting other plants. Salt is also used as a household cleaning product. Its coarse nature allows for its use in various cleaning scenarios, including grease and oil removal, stain removal, and drying out and hardening sticky spills for an easier clean. Some cultures, especially in Africa and Brazil, prefer a wide variety of different rock salts for different dishes. Pure salt is avoided, as particular colors of salt indicate the presence of different impurities. Many recipes call for particular kinds of rock salt, and imported pure salt often has impurities added to adapt to local tastes. Historically, salt was used as a form of currency in barter systems and was exclusively controlled by authorities and their appointees. In some ancient civilizations the practice of salting the earth was done to make the conquered land of an enemy infertile and inhospitable as an act of domination or spite. One biblical reference to this practice is in Judges 9:45: "he killed the people in it, pulled the wall down and sowed the site with salt." Despite its name, polyhalite, a mineral fertilizer, is not a polymer of halite (NaCl), but a hydrated sulfate of potassium, calcium and magnesium. Shotgun shells containing rock salt (instead of metal pellets) are used as a less-lethal deterrent.
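As promised in the de-icing paragraph above, here is a worked example of freezing-point depression using the ideal dilute-solution law; the scenario (100 g of rock salt per kilogram of water) is invented for illustration, and real road brines deviate from the ideal law at high concentrations:

```python
# Ideal (dilute-solution) freezing-point depression: dT = i * Kf * m, with
# water's cryoscopic constant Kf = 1.86 K*kg/mol and a van 't Hoff factor
# of about 2 for fully dissociated NaCl. At high concentration real brines
# deviate from this law (the NaCl/water eutectic is about -21 deg C).

KF_WATER = 1.86          # cryoscopic constant of water, K*kg/mol
MOLAR_MASS_NACL = 58.44  # g/mol
I_NACL = 2               # ions per formula unit (idealized full dissociation)

def freezing_point_c(grams_salt: float, kg_water: float) -> float:
    """Freezing point of the brine in deg C, relative to pure water's 0 deg C."""
    molality = (grams_salt / MOLAR_MASS_NACL) / kg_water
    return -I_NACL * KF_WATER * molality

# 100 g of rock salt dissolved in 1 kg of water:
print(f"{freezing_point_c(100, 1.0):.1f} deg C")  # about -6.4 deg C
```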
Physical sciences
Minerals
Earth science
600841
https://en.wikipedia.org/wiki/Abies%20balsamea
Abies balsamea
Abies balsamea or balsam fir is a North American fir, native to most of eastern and central Canada (Newfoundland west to central Alberta) and the northeastern United States (Minnesota east to Maine, and south in the Appalachian Mountains to West Virginia). Description Balsam fir is a small to medium-size evergreen tree typically tall, occasionally reaching a height of . The narrow conic crown consists of dense, dark-green leaves. The bark on young trees is smooth and grey, with resin blisters (which tend to spray when ruptured), becoming rough and fissured or scaly on old trees. The leaves are flat and needle-like, long, dark green above, often with a small patch of stomata near the tip, with two white stomatal bands below and a slightly notched tip. They are arranged spirally on the shoot, but with the leaf bases twisted so that the leaves appear to be in two more-or-less horizontal rows on either side of the shoot. The needles become shorter and thicker the higher they are on the tree. The seed cones are erect, long, dark purple, ripening brown and disintegrating to release the winged seeds in September. Medicinal For thousands of years Native Americans have used balsam fir for medicinal and therapeutic purposes. The needles are eaten directly off the tree by many animals and humans; higher doses are ingested as a tea. Balsam fir contains vitamin C, which has been studied for its effects on bacterial and viral infections. Balsam fir's essential oil and some of its compounds have shown efficacy against ticks. Reproduction The male reproductive organs generally develop more rapidly and appear sooner than the female organs. The male organs contain microsporangia which divide to form sporogenous tissue, composed of cells which become archesporial cells. These develop into microspores, or pollen-mother cells, once they are rounded and filled with starch grains. When the microspores undergo meiosis in the spring, four haploid microspores are produced which eventually become pollen grains. Once the male strobilus has matured, the microsporangia are exposed, at which point the pollen is released. The female megasporangiate strobilus is larger than the male. It contains bracts and megasporophylls, each of which contains two ovules, arranged in a spiral. These then develop a nucellus in which a mother cell is formed. Meiosis occurs and a megaspore is produced as the first cell of the megagametophyte. As cell division takes place the nucleus of the megaspore thickens, and cell differentiation occurs to produce prothallial tissue containing an ovum. The remaining undifferentiated cells then form the endosperm. When the male structure releases its pollen grains, some fall onto the female strobilus and reach the ovule. At this point the pollen tube begins to generate, and eventually the sperm and egg meet, at which point fertilization occurs. Varieties There are two varieties: Abies balsamea var. balsamea (balsam fir) – bracts subtending seed scales short, not visible on the closed cones. Most of the species' range. Abies balsamea var. phanerolepis (bracted balsam fir or Canaan fir) – bracts subtending seed scales longer, visible on the closed cone. The southeast of the species' range, from southernmost Quebec to West Virginia. The name Canaan fir derives from one of its native localities, the Canaan Valley in West Virginia. Some botanists regard this variety as a natural hybrid between balsam fir and Fraser fir (Abies fraseri), which occurs further south in the Appalachian mountains.
This produces a slight change in color, making it appear similar to a true Fraser fir. Ecology Balsam firs are very shade tolerant, and tend to grow in cool climates, ideally with a mean annual temperature of , with consistent moisture at their roots. They typically grow in the following four forest types: Swamp – swamp forest types never completely dry out, so balsam firs have constant access to water. The ground is covered in sphagnum and other mosses. In swamps, balsam firs grow densely and slowly, and are slender. Flat – sometimes referred to as "dry swamps", these areas are better drained than swamps but still retain moisture well. Fern moss covers the ground and there is a possibility of ground rot. In flat areas balsam fir grows fast, tall, and large, mixed with red spruce. Hardwood slope – ground rot is common in this well-drained area, and leaf litter covers the forest floor. Balsam firs grow fast, tall, and large along with big hardwood trees such as yellow birch, sugar maple and beech. Mountain top – on mountain tops, stands of balsam fir occasionally develop fir waves. They often grow at an elevation of in pure stands, or in association with black spruce, white spruce, white birch, and trembling aspen. The development is similar to that in swamps, with slow growth resulting in short, slender trees. Some of the low branches touch the ground, and may grow roots to produce an independent tree. The foliage is browsed by moose and deer. The seeds are eaten by American red squirrels, grouse, and pine mice; the tree also provides food for crossbills and chickadees, as well as shelter for moose, snowshoe hares, white-tailed deer, ruffed grouse, and other small mammals and songbirds. The needles are eaten by some lepidopteran caterpillars, for example the Io moth (Automeris io). Abies balsamea is one of the most cold-hardy trees known, surviving at temperatures as low as (USDA Hardiness Zone 2). Specimens even showed no ill effects when immersed in liquid nitrogen at −196 °C. Conservation status It is listed as endangered in Connecticut. This status applies to native populations only. Pests The balsam fir is the preferred main host of the eastern spruce budworm, which is a major destructive pest throughout the eastern United States and Canada. During cyclical population outbreaks, major defoliation of the balsam fir can occur, which may significantly reduce radial growth. This can kill the tree. An outbreak in Quebec in 1957 killed over 75% of balsam fir in some stands. The needles of balsam fir can be infected by the fungus Delphinella balsameae. Cultivation Christmas trees Both varieties of the species are very popular as Christmas trees, particularly in the northeastern United States. Balsam firs cut for Christmas are typically grown on large plantations, not taken from the forest. The balsam fir is one of the major exports of Quebec and New England. It is celebrated for its rich green needles, natural conical shape, and needle retention after being cut, and it is notably the most fragrant of all Christmas tree varieties. The balsam fir was used six times for the US Capitol Christmas Tree between 1964 and 2019. Horticulture Abies balsamea is also grown as an ornamental tree for parks and gardens. Very hardy down to or below, it requires a sheltered spot in full sun. The dwarf cultivar A. balsamea 'Hudson' (Hudson fir) grows to only tall by broad, and has distinctive blue-green foliage with pale undersides. It does not bear cones. 
It has gained the Royal Horticultural Society's Award of Garden Merit. Other cultivars include: 'Angustata' 'Argentea' 'Brachylepis' 'Coerulea' 'Columnaris' 'Glauca' 'Globosa' 'Longifolia' 'Lutescens' 'Macrocarpa' 'Marginata' 'Nana' 'Nudicaulis' 'Paucifolia' 'Prostrata' 'Pyramidalis' 'Variegata' 'Versicolor' Other uses The resin is used to produce Canada balsam, and was traditionally used as a cold remedy and as a glue for glasses, optical instrument components, and for preparing permanent mounts of microscope specimens. Given its use as a traditional remedy and the relatively high ascorbic acid content of its needles, historian Jacques Mathieu has argued that the balsam fir was the "aneda" that cured scurvy during Jacques Cartier's second expedition into Canada. The wood is milled for framing lumber (part of SPF lumber) and siding, and is pulped for paper manufacture. Balsam fir oil is an EPA-approved nontoxic rodent repellent. The balsam fir is also used as an air freshener and as incense. Prior to the availability of foam rubber and air mattresses, balsam fir boughs were a preferred mattress in places where trees greatly outnumbered campers. Many fir limbs are vertically bowed from alternating periods of downward deformation from snow loading and new growth reaching upward for sunlight. Layers of inverted freshly cut limbs from small trees created a pleasantly fragrant mattress that lifted bedding off the wet ground, the bowed green limbs acting as springs beneath the soft needles. Upper layers of limbs were placed with their cut ends touching the earth to avoid uncomfortably sharp spots and sap. Native American ethnobotany Native Americans use it for a variety of medicinal purposes. The Abenaki use the gum for slight itches and as an antiseptic ointment. They stuff the leaves, needles, and wood into pillows as a panacea. The Algonquin people of Quebec apply a poultice of the gum to open sores, insect bites, boils and infections, use the needles as a sudatory for women after childbirth and for other purposes, use the roots for heart disease, use the needles to make a laxative tea, and use the needles for making poultices. The Atikamekw chew the sap as a cold remedy, and use the boughs as mats for the tent floor. The Cree use the pitch for menstrual irregularity, and take an infusion of the bark and sometimes the wood for coughs. They use pitch and grease as an ointment for scabies and boils, and apply a poultice of pitch to cuts. They also use a decoction of pitch and sturgeon oil for tuberculosis, and take an infusion of the bark for tuberculosis. They also use the boughs to make brush shelters and use the wood to make paddles. The Innu people grate the inner bark and eat it to benefit their diet. The Iroquois use steam from a decoction of branches as a bath for rheumatism and parturition, and ingest a decoction of the plant for rheumatism. They take a compound decoction for colds and coughs, sometimes mixing it with alcohol. They apply a compound decoction of the plant for cuts, sprains, bruises and sores. They apply a poultice of the gum and dried beaver kidneys for cancer. They also take a compound decoction in the early stages of tuberculosis, and they use the plant for bedwetting and gonorrhea. The Maliseet use the juice of the plant as a laxative, use the pitch in medicines, and use an infusion of the bark, sometimes mixed with spruce and tamarack bark, for gonorrhea. 
They use the needles and branches as pillows and bedding, the roots as thread, and use the pitch to waterproof seams in canoes. The Menominee use the inner bark as a seasoner for medicines, take an infusion of the inner bark for chest pain, and use the liquid balsam pressed from the trunk for colds and pulmonary troubles. They also use the inner bark as a poultice for unspecified illnesses, and apply gum from bark blisters to sores. The Miꞌkmaq use a poultice of inner bark for an unspecified purpose, use the buds, cones and inner bark for diarrhea, use the gum for burns, colds, fractures, sores and wounds, use the cones for colic, and use the buds as a laxative. They also use the bark for gonorrhea. They use the boughs to make beds, use the bark to make a beverage, and use the wood for kindling and fuel. The Ojibwe melt the gum on warm stones and inhale the fumes for headache. They also use a decoction of the root as an herbal steam for rheumatic joints. They also combine the gum with bear's grease and use it as an ointment for hair. They use the needle-like leaves as part of a ceremony involving the sweat bath, use the gum for colds, and inhale the leaf smoke for colds. They also use the plant as a cough medicine. The gum is used for sores, and a compound containing the leaves is used as a wash. The liquid balsam from bark blisters is used for sore eyes. They boil the resin twice and add it to suet or fat to make a canoe pitch. The bark gum is taken for chest soreness from colds and applied to cuts and sores, and a decoction of the bark is used to induce sweating. The bark gum is also taken for gonorrhea. The Penobscot smear the sap over sores, burns, and cuts. The Potawatomi use the needles to make pillows, believing that the aroma prevents colds. They also use the balsam gum as a salve for sores, and take an infusion of the bark for tuberculosis and other internal afflictions. Tree emblem Balsam fir is the provincial tree of New Brunswick.
Biology and health sciences
Pinaceae
Plants
601958
https://en.wikipedia.org/wiki/Process%20engineering
Process engineering
Process engineering is the understanding and application of the fundamental principles and laws of nature that allow humans to transform raw material and energy into products that are useful to society, at an industrial level. By taking advantage of the driving forces of nature such as pressure, temperature and concentration gradients, as well as the law of conservation of mass, process engineers can develop methods to synthesize and purify large quantities of desired chemical products. Process engineering focuses on the design, operation, control, optimization and intensification of chemical, physical, and biological processes. Process engineers analyze the chemical makeup of various ingredients and determine how they might react with one another. A process engineer can specialize in a number of areas, including the following: Agriculture processing Food and dairy production Beer and whiskey production Cosmetics production Pharmaceutical production Petrochemical manufacturing Mineral processing Printed circuit board production Overview Process engineering involves the utilization of multiple tools and methods. Depending on the exact nature of the system, processes need to be simulated and modeled using mathematics and computer science. Processes where phase change and phase equilibria are relevant require analysis using the principles and laws of thermodynamics to quantify changes in energy and efficiency. In contrast, processes that focus on the flow of material and energy as they approach equilibria are best analyzed using the disciplines of fluid mechanics and transport phenomena. Disciplines within the field of mechanics need to be applied in the presence of fluids or porous and dispersed media. Materials engineering principles also need to be applied, when relevant. Manufacturing in the field of process engineering involves an implementation of process synthesis steps. Regardless of the exact tools required, the process design is then documented in a process flow diagram (PFD) where material flow paths, storage equipment (such as tanks and silos), transformations (such as distillation columns, receiver/head tanks, mixing, separations, pumping, etc.) and flowrates are specified, as well as a list of all pipes and conveyors and their contents, material properties such as density, viscosity, particle-size distribution, flowrates, pressures, temperatures, and materials of construction for the piping and unit operations. The process flow diagram is then used to develop a piping and instrumentation diagram (P&ID), which graphically displays the actual process occurring. A P&ID is more detailed and specific than a PFD and presents the design with less ambiguity. The P&ID is then used as a basis of design for developing the "system operation guide" or "functional design specification", which outlines the operation of the process: operation of the machinery, safety in design, programming, and effective communication between engineers. From the P&ID, a proposed layout (general arrangement) of the process can be shown from an overhead view (plot plan) and a side view (elevation), and other engineering disciplines become involved, such as civil engineers for site work (earth moving), foundation design, concrete slab design work, structural steel to support the equipment, etc. 
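As a toy illustration of the bookkeeping a PFD encodes, the following minimal Python sketch checks that the overall steady-state mass balance around a single unit closes; the stream names and flowrates are invented for the example and are not taken from any real design:

# Hypothetical steady-state mass balance check around one unit on a PFD.
# Stream names and flowrates (kg/h) are invented for illustration.
feed_streams = {"raw_feed": 1200.0, "recycle": 300.0}
product_streams = {"product": 1350.0, "purge": 150.0}

mass_in = sum(feed_streams.values())
mass_out = sum(product_streams.values())

# With no accumulation at steady state, total mass in must equal mass out.
imbalance = mass_in - mass_out
print(f"in = {mass_in} kg/h, out = {mass_out} kg/h, imbalance = {imbalance} kg/h")
assert abs(imbalance) < 1e-6 * mass_in, "mass balance does not close"

In practice this kind of check is performed stream by stream (and per chemical species) by flowsheeting software rather than by hand, but the conservation principle is the same.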
All previous work is directed toward defining the scope of the project, then developing a cost estimate to get the design installed, and a schedule to communicate the timing needs for engineering, procurement, fabrication, installation, commissioning, startup, and ongoing production of the process. Depending on the accuracy required of the cost estimate and schedule, several design iterations are generally provided to customers or stakeholders, who feed back their requirements. The process engineer incorporates these additional instructions (scope revisions) into the overall design, and additional cost estimates and schedules are developed for funding approval. Following funding approval, the project is executed via project management. Principal areas of focus in process engineering Process engineering activities can be divided into the following disciplines: Process design: synthesis of energy recovery networks, synthesis of distillation systems (azeotropic), synthesis of reactor networks, hierarchical decomposition flowsheets, superstructure optimization, design of multiproduct batch plants, design of the production reactors for the production of plutonium, design of nuclear submarines. Process control: model predictive control, controllability measures, robust control, nonlinear control, statistical process control, process monitoring, thermodynamics-based control; characterized by three essential items: a collection of measurements, a method of taking measurements, and a system for controlling the desired measurement. Process operations: scheduling process networks, multiperiod planning and optimization, data reconciliation, real-time optimization, flexibility measures, fault diagnosis. Supporting tools: sequential modular simulation, equation-based process simulation, AI/expert systems, large-scale nonlinear programming (NLP), optimization of differential algebraic equations (DAEs), mixed-integer nonlinear programming (MINLP), global optimization, optimization under uncertainty, and quality function deployment (QFD). Process economics: using simulation software such as ASPEN or SuperPro to determine the break-even point, net present value, marginal sales, marginal cost, and return on investment of the industrial plant after analysis of its heat and mass transfer. Process data analytics: applying data analytics and machine learning methods to process manufacturing problems. History of process engineering Various chemical techniques have been used in industrial processes since time immemorial. However, it was not until the advent of thermodynamics and the law of conservation of mass in the 1780s that process engineering was properly developed and implemented as its own discipline. The body of knowledge that is now known as process engineering was then forged out of trial and error throughout the Industrial Revolution. The term process, as it relates to industry and production, dates back to the 18th century. During this time period, demands for various products began to increase drastically, and process engineers were required to optimize the processes by which these products were created. By 1980, the concept of process engineering had emerged from the fact that chemical engineering techniques and practices were being used in a variety of industries. By this time, process engineering had been defined as "the set of knowledge necessary to design, analyze, develop, construct, and operate, in an optimal way, the processes in which the material changes". 
By the end of the 20th century, process engineering had expanded from chemical engineering-based technologies to other applications, including metallurgical engineering, agricultural engineering, and product engineering.
Technology
Disciplines
null
602700
https://en.wikipedia.org/wiki/Imago
Imago
In biology, the imago (Latin for "image") is the last stage an insect attains during its metamorphosis, its process of growth and development; it is also called the imaginal stage ("imaginal" being "imago" in adjective form), the stage in which the insect attains maturity. It follows the final ecdysis of the immature instars. In a member of the Ametabola or Hemimetabola, species in which metamorphosis is "incomplete", the final ecdysis follows the last immature or nymphal stage. In members of the Holometabola, in which there is a pupal stage, the final ecdysis follows emergence from the pupa, after which the metamorphosis is complete, although there is a prolonged period of maturation in some species. The imago is the only stage during which the insect is sexually mature and, if it is a winged species, the only stage that has functional wings. The imago often is referred to as the adult stage. Members of the order Ephemeroptera (mayflies) do not have a pupal stage, but they briefly pass through an intermediate winged stage called the subimago. Insects at this stage have functional wings but are not yet sexually mature. The Latin plural of imago is imagines, and this is the term generally used by entomologists when a plural form is required – however, imagoes is also acceptable.
Biology and health sciences
Animal ontogeny
null
16161443
https://en.wikipedia.org/wiki/IOS
IOS
iOS (formerly iPhone OS) is a mobile operating system developed by Apple exclusively for its mobile devices. It was unveiled in January 2007 for the first-generation iPhone, which launched in June 2007. Major versions of iOS are released annually; the current stable version, iOS 18, was released to the public on September 16, 2024. It is the operating system that powers many of the company's mobile devices, including the iPhone, and is the basis for three other operating systems made by Apple: iPadOS, tvOS, and watchOS. iOS formerly also powered iPads until iPadOS was introduced in 2019 and the iPod Touch line of devices until its discontinuation. iOS is the world's second most widely installed mobile operating system, after Android. As of December 2023, Apple's App Store contains more than 3.8 million iOS mobile apps. iOS is based on macOS. Like macOS, it includes components of the Mach microkernel and FreeBSD. It is a Unix-like operating system. Although some parts of iOS are open source under the Apple Public Source License and other licenses, iOS is proprietary software. History In 2005, when Steve Jobs began planning the iPhone, he had a choice to either "shrink the Mac, which would be an epic feat of engineering, or enlarge the iPod". Jobs favored the former approach but pitted the Macintosh and iPod teams, led by Scott Forstall and Tony Fadell, respectively, against each other in an internal competition, with Forstall winning by creating iPhone OS. The decision enabled the success of the iPhone as a platform for third-party developers: using a well-known desktop operating system as its basis allowed the many third-party Mac developers to write software for the iPhone with minimal retraining. Forstall was also responsible for creating a software development kit for programmers to build iPhone apps, as well as an App Store within iTunes. The operating system was unveiled with the iPhone at the Macworld Conference & Expo on January 9, 2007, and released in June of that year. At the time of its unveiling in January, Steve Jobs claimed: "iPhone runs OS X" and runs "desktop class applications", but at the time of the iPhone's release, the operating system was renamed "iPhone OS". Initially, third-party native applications were not supported. Jobs' reasoning was that developers could build web applications through the Safari web browser that "would behave like native apps on the iPhone". In October 2007, Apple announced that a native software development kit (SDK) was under development and that they planned to put it "in developers' hands in February". On March 6, 2008, Apple held a press event, announcing the iPhone SDK. The iOS App Store was opened on July 10, 2008, with an initial 500 applications available. This quickly grew to 3,000 in September 2008, 15,000 in January 2009, 50,000 in June 2009, 100,000 in November 2009, 250,000 in August 2010, 650,000 in July 2012, 1 million in October 2013, 2 million in June 2016, and 2.2 million in January 2017. 1 million apps are natively compatible with the iPad tablet computer. These apps have collectively been downloaded more than 130 billion times. App intelligence firm Sensor Tower estimated that the App Store would reach 5 million apps by 2020. In September 2007, Apple announced the iPod Touch, a redesigned iPod based on the iPhone form factor. 
On January 27, 2010, Apple introduced their much-anticipated media tablet, the iPad, featuring a larger screen than the iPhone and iPod Touch, and designed for web browsing, media consumption, and reading, and offering multi-touch interaction with multimedia formats including newspapers, e-books, photos, videos, music, word processing documents, video games, and most existing iPhone apps using a screen. It also includes a mobile version of Safari for web browsing, as well as access to the App Store, iTunes Library, iBookstore, Contacts, and
Technology
Operating Systems
null
16163668
https://en.wikipedia.org/wiki/Avulsion%20injury
Avulsion injury
In medicine, an avulsion is an injury in which a body structure is torn off by either trauma or surgery (from the Latin avellere, meaning "to tear off"). The term most commonly refers to a surface trauma where all layers of the skin have been torn away, exposing the underlying structures (i.e., subcutaneous tissue, muscle, tendons, or bone). This is similar to an abrasion but more severe, as body parts such as an eyelid or an ear can be partially or fully detached from the body. Skin avulsions The most common avulsion injury, skin avulsion often occurs during motor vehicle collisions. The severity of avulsion ranges from skin flaps (minor) to degloving (moderate) and amputation of a finger or limb (severe). Suprafascial avulsions are those in which the depth of the removed skin reaches the subcutaneous tissue layer, while subfascial avulsions extend deeper than the subcutaneous layer. Small suprafascial avulsions can be repaired by suturing, but most avulsions require skin grafts or reconstructive surgery. Rock climbing In rock climbing, a "flapper" is an injury in which parts of the skin are torn off, resulting in a loose flap of skin on the fingers. This is usually the result of friction forces between the climber's fingers and the holds, arising when the climber slips off a hold. To fix this injury and to be able to continue climbing, many climbers will apply sports tape to the flapped finger to cover up the sensitive area of broken skin. Some climbers may even use super-glue to adhere the loose skin back to the finger. Ear avulsions The ear is particularly vulnerable to avulsion injuries because of its position on the side of the head. The most common cause of ear avulsions is bite injuries, primarily human-inflicted, followed by motor vehicle accidents, burns, and complications resulting from otoplasty. A partially avulsed ear can be reattached through suturing or microvascular surgery, depending on the severity of the injury. Microvascular surgery can also be used to reattach a completely avulsed ear, but its success rate is lower because of the need for venous drainage. The ear can also be reconstructed with cartilage and skin grafts, or an external ear prosthesis can be made by an anaplastologist. Eyelid avulsions Eyelid avulsions are uncommon, but can be caused by motor vehicle collisions, dog bites, or human bites. Eyelid avulsions are repaired by suturing after a CT scan is performed to determine where damage to the muscles, nerves, and blood vessels of the eyelid has occurred. More severe injuries require reconstruction; however, this usually results in some loss of function, and subsequent surgeries may be necessary to improve structure and function. Microvascular surgery is another method of repair but is rarely used to treat eyelid avulsions. Sometimes botulinum toxin is injected into the eyelid to paralyze the muscles while the eyelid heals. Nail avulsions Trauma to the nail can cause the nail plate to be torn from the nail bed. Unlike other types of avulsion, when a nail is lost, it is not typically reattached. Following the loss of the nail, the nail bed forms a germinal layer which hardens as the cells acquire keratin and becomes a new nail. Until this layer has formed, the exposed nail bed is highly sensitive, and is typically covered with a non-adherent dressing, as an ordinary dressing will stick to the nail bed and cause pain upon removal. In the average person, fingernails require 3 to 6 months to regrow completely, while toenails require 12 to 18 months. 
Brachial plexus avulsions In brachial plexus avulsions, the brachial plexus (a bundle of nerves that communicates signals between the spine and the arms, shoulders, and hands) is torn from its attachment to the spinal cord. One common cause of brachial plexus avulsion is rotation of a baby's shoulders in the birth canal during delivery, which stretches and tears the brachial plexus. This occurs in 1 to 2 out of every 1,000 births. Shoulder trauma during motor vehicle collisions is another common cause of brachial plexus avulsions. Detachment of the nerves can cause pain and loss of function in the arms, shoulders, and hands. Neuropathic pain can be treated with medication, but it is only through surgical reattachment or nerve grafts that function can be restored. For intractable pain, a procedure called dorsal root entry zone (DREZ) lesioning can be effective. Tooth avulsions During a tooth avulsion, a tooth is completely or partially (such that the dental pulp is exposed) detached from its socket. Secondary (permanent) teeth can be replaced and stabilised by a dentist. Primary (baby) teeth are not replaced because they tend to become infected and interfere with the growth of the secondary teeth. A completely avulsed tooth that is replaced within one hour of the injury can be permanently retained. The long-term retention rate decreases as the time that the tooth is detached increases, and eventually root resorption makes replacement of the tooth impossible. To minimize damage to the root, the tooth should be kept in milk or sterile saline while it is outside the mouth. Periosteal avulsions During a periosteal avulsion, the periosteum (a fibrous layer that surrounds a bone) detaches from the bone's surface. An example of a periosteal avulsion is an ALPSA (anterior labral periosteal sleeve avulsion). Surgical avulsions An avulsion is sometimes performed surgically to relieve symptoms of a disorder, or to prevent a chronic condition from recurring. Small incision avulsion (also called ambulatory phlebectomy) is used to remove varicose veins from the legs in disorders such as chronic venous insufficiency. A nail avulsion is performed to remove all or part of a chronic ingrown nail. Facial nerve avulsion is used to treat the involuntary twitching involved in benign essential blepharospasm. However, it often requires additional surgeries to retain function, and botulinum toxin injections have been shown to be more effective than surgical avulsions in treating benign essential blepharospasm, while causing fewer complications.
Biology and health sciences
Types
Health
3060044
https://en.wikipedia.org/wiki/Process%20design
Process design
In chemical engineering, process design is the choice and sequencing of units for desired physical and/or chemical transformation of materials. Process design is central to chemical engineering, and it can be considered to be the summit of that field, bringing together all of the field's components. Process design can be the design of new facilities or it can be the modification or expansion of existing facilities. The design starts at a conceptual level and ultimately ends in the form of fabrication and construction plans. Process design is distinct from equipment design, which is closer in spirit to the design of unit operations. Processes often include many unit operations. Documentation Process design documents serve to define the design and they ensure that the design components fit together. They are useful in communicating ideas and plans to other engineers involved with the design, to external regulatory agencies, to equipment vendors, and to construction contractors. In order of increasing detail, process design documents include: Block flow diagrams (BFD): Very simple diagrams composed of rectangles and lines indicating major material or energy flows. Process flow diagrams (PFD): Typically more complex diagrams of major unit operations as well as flow lines. They usually include a material balance, and sometimes an energy balance, showing typical or design flowrates, stream compositions, and stream and equipment pressures and temperatures. The PFD is the key document in process design. Piping and instrumentation diagrams (P&ID): Diagrams showing each and every pipeline with piping class (e.g., carbon steel or stainless steel) and pipe size (diameter). They also show valving along with instrument locations and process control schemes. Specifications: Written design requirements of all major equipment items. Process designers typically write operating manuals on how to start up, operate and shut down the process. They often also develop accident plans and projections of process operation on the environment. Documents are maintained after construction of the process facility for the operating personnel to refer to. The documents also are useful when modifications to the facility are planned. A primary method of developing the process documents is process flowsheeting. Design considerations Design conceptualization and considerations can begin once objectives are defined and constraints identified. Objectives that a design may strive to meet include: Throughput rate Process yield Product purity Constraints include: Capital cost: investment required to implement the design, including the cost of new equipment and disposal of obsolete equipment. Available space: the area of land or room in a building to place new or modified equipment. Safety concerns: risks of accidents and those posed by hazardous materials. Environmental impact and projected effluents, emissions, and waste production. Operating and maintenance costs. Other factors that designers may include are: Reliability Redundancy Flexibility Anticipated variability in feedstock and allowable variability in product. Sources of design information Designers usually do not start from scratch, especially for complex projects. Often the engineers have pilot plant data available or data from full-scale operating facilities. Other sources of information include proprietary design criteria provided by process licensors, published scientific data, laboratory experiments, and suppliers of feedstocks and utilities. 
Design process Design starts with process synthesis: the choice of technology and the combination of industrial units to achieve goals. Design then proceeds in stages, from conceptual to detailed design, as other engineers and stakeholders sign off on each stage. Simulation software is often used by design engineers. Simulations can identify weaknesses in designs and allow engineers to choose better alternatives. However, engineers still rely on heuristics, intuition, and experience when designing a process. Human creativity is an element in complex designs.
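As a sketch of how alternatives might be screened against objectives (such as cost) and constraints (such as available space), consider the following minimal Python example; the candidate names, cost figures, and annualization factor are invented for illustration, and a real evaluation would come from process simulation and detailed cost estimation:

# Hypothetical screening of candidate designs: reject those violating a
# plot-area constraint, then rank the rest by total annualized cost.
candidates = [
    {"name": "single column", "capital": 4.0e6, "operating": 1.2e6, "area_m2": 80},
    {"name": "heat-integrated", "capital": 5.5e6, "operating": 0.7e6, "area_m2": 120},
    {"name": "two columns", "capital": 6.0e6, "operating": 0.9e6, "area_m2": 150},
]
MAX_AREA = 130        # available plot area, m^2 (assumed constraint)
ANNUALIZATION = 0.2   # capital recovery factor per year (assumed)

feasible = [c for c in candidates if c["area_m2"] <= MAX_AREA]
best = min(feasible, key=lambda c: ANNUALIZATION * c["capital"] + c["operating"])
print(best["name"])   # -> "heat-integrated" for these invented numbers

The point of the sketch is the workflow, not the numbers: constraints eliminate infeasible designs first, and the remaining alternatives are compared on a common economic objective.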
Physical sciences
Chemical engineering
Chemistry
1572073
https://en.wikipedia.org/wiki/Volcanic%20plug
Volcanic plug
A volcanic plug, also called a volcanic neck or lava neck, is a volcanic object created when magma hardens within a vent on an active volcano. When present, a plug can cause an extreme build-up of high gas pressure if rising volatile-charged magma is trapped beneath it, and this can sometimes lead to an explosive eruption. In a plinian eruption the plug is destroyed and ash is ejected. Glacial erosion can lead to exposure of the plug on one side, while a long slope of material remains on the opposite side. Such landforms are called crag and tail. If a plug is preserved, erosion may remove the surrounding rock while the erosion-resistant plug remains, producing a distinctive upstanding landform. Examples of volcanic plugs Africa Near the village of Rhumsiki in the Far North Province of Cameroon, Kapsiki Peak is an example of a volcanic plug and is one of the most photographed parts of the Mandara Mountains. Spectacular volcanic plugs are present in the center of La Gomera island in the Canary Islands archipelago, within the Garajonay National Park. Asia Sigiriya is an ancient rock fortress near the town of Dambulla in Sri Lanka. Approximately 180 m high, it is now a UNESCO-listed World Heritage Site. Europe Borgarvirki is a volcanic plug located in north Iceland. A volcanic plug is situated in the town of Motta Sant'Anastasia in Italy. Saint Michel d'Aiguilhe chapel, whose construction started in 969, sits on a volcanic plug near Le Puy-en-Velay in France; the plug rises about above the surroundings. Another building on a volcanic plug is the 14th-century Trosky Castle in the Czech Republic. Strombolicchio, the northernmost of the Aeolian Islands, and Rockall, a small, uninhabited, remote islet in the North Atlantic Ocean, are also volcanic plugs. In the United Kingdom, two examples of buildings on volcanic plugs are Castle Rock in Edinburgh, Scotland, and Deganwy Castle in Wales. The Law, Dundee, Ailsa Craig, Bass Rock, North Berwick Law and Dumgoyne hill are other examples of volcanic plugs located in Scotland. There are over 30 volcanic plugs in Northern Ireland, including Slemish in Ballymena, Tievebulliagh, Scawt Hill, Carrickarede, Scrabo and Slieve Gallion. North America and the Caribbean There are several volcanic plugs in the United States, including Morro Rock in California, Devils Elbow located in the Heceta Head Lighthouse Scenic State Park on the Oregon coast, Thumb Butte in the Sierra Prieta of Arizona, and Shiprock in New Mexico. Devils Tower in Wyoming and Little Devils Postpile in Yosemite National Park, California, are also believed by many geologists to be volcanic plugs. In Canada, the Northern Cordilleran Volcanic Province gives rise to several confirmed and suspected plugs. Chief among these is Castle Rock, located in British Columbia, which last erupted during the Pleistocene. The southern coast of Saint Lucia is dominated by the iconic Pitons, a UNESCO World Heritage Site. The twin peaks, Gros Piton and Petit Piton, steeply rise more than above the Caribbean. South America Pinnacle Rock, Galápagos, Ecuador. Oceania There are several volcanic plugs in the North Island of New Zealand, including: the Pinnacles in the Coromandel Peninsula Bream Head in Northland Paritutu and the adjacent Sugar Loaf Islands in Taranaki St. 
Paul's Rock at Whangaroa Harbour Piha's Lion Rock, which hosted a fortified Maori pā Mount Pohaturoa near the village of Ātiamuri, a distinctive sight for travelers along State Highway 1 In New Zealand's South Island, Onawe Peninsula on Banks Peninsula is a prominent volcanic plug, and erosion of Saddle Hill near Dunedin has also revealed a plug. Dunedin's Mount Cargill displays two plugs: its main summit and the subsidiary summit of Buttar's Peak. In Australia, The Nut in Tasmania is a further example, along with Mount Warning and several peaks in the Warrumbungles in New South Wales. The 11 peaks of the Glasshouse Mountains National Park in South East Queensland, including Mount Beerwah, Mount Tibrogargan, Mount Coonowrin, Mount Cooroora, Mount Ngungun, Mount Tibberoowuccum, Mount Tunbubudla, and Mount Beerburrum, are volcanic plugs.
Physical sciences
Volcanology
Earth science
1572904
https://en.wikipedia.org/wiki/Mecoptera
Mecoptera
Mecoptera (from the Greek: mecos = "long", ptera = "wings") is an order of insects in the superorder Holometabola with about six hundred species in nine families worldwide. Mecopterans are sometimes called scorpionflies after their largest family, Panorpidae, in which the males have enlarged genitals raised over the body that look similar to the stingers of scorpions, and long beaklike rostra. The Bittacidae, or hangingflies, are another prominent family and are known for their elaborate mating rituals, in which females choose mates based on the quality of gift prey offered to them by the males. A smaller group is the snow scorpionflies, family Boreidae, adults of which are sometimes seen walking on snowfields. In contrast, the majority of species in the order inhabit moist environments in tropical locations. The Mecoptera are closely related to the Siphonaptera (fleas), and a little more distantly to the Diptera (true flies). They are somewhat fly-like in appearance, being small to medium-sized insects with long slender bodies and narrow membranous wings. Most breed in moist environments such as leaf litter or moss, and the eggs may not hatch until the wet season arrives. The larvae are caterpillar-like and mostly feed on vegetable matter, and the non-feeding pupae may pass through a diapause until weather conditions are favorable. Early Mecoptera may have played an important role in pollinating extinct species of gymnosperms before the evolution of other insect pollinators such as bees. Adults of modern species are overwhelmingly predators or consumers of dead organisms. In a few areas, some species are the first insects to arrive at a cadaver, making them useful in forensic entomology. Diversity Mecopterans vary in length from . There are about six hundred extant species known, divided into thirty-four genera in nine families. The majority of the species are contained in the families Panorpidae and Bittacidae. Besides this there are about four hundred known fossil species in about eighty-seven genera, which are more diverse than the living members of the order. The group is sometimes called the scorpionflies, from the turned-up "tail" of the male's genitalia in the Panorpidae. Distribution of mecopterans is worldwide; the greatest diversity at the species level is in the Afrotropic and Palearctic realms, but there is greater diversity at the generic and family level in the Neotropic, Nearctic and Australasian realms. They are absent from Madagascar and many islands and island groups; this may demonstrate that their dispersal ability is low, with Trinidad, Taiwan and Japan, where they are found, having had recent land bridges to the nearest continental land masses. Evolution and phylogeny Taxonomic history The European scorpionfly was named Panorpa communis by Linnaeus in 1758. The Mecoptera were named by Alpheus Hyatt and Jennie Maria Arms in 1891. The name is from the Greek, mecos meaning long, and ptera meaning wings. The families of Mecoptera are well accepted by taxonomists but their relationships have been debated. In 1987, R. Willman treated the Mecoptera as a clade, containing the Boreidae as sister to the Meropeidae, but in 2002 Michael F. Whiting declared the Mecoptera so-defined as paraphyletic, with the Boreidae as sister to another order, the Siphonaptera (fleas). Fossil history Among the earliest members of the Mecoptera are the Nannochoristidae of Upper Permian age. 
Fossil Mecoptera become abundant and diverse during the Cretaceous, for example in China, where panorpids such as Jurassipanorpa, hangingflies (Bittacidae and Cimbrophlebiidae), and Orthophlebiidae have been found. Extinct Mecoptera species may have been important pollinators of early gymnosperm seed plants during the late Middle Jurassic to mid–Early Cretaceous periods before other pollinating groups such as the bees evolved. These were mainly wind-pollinated plants, but fossil mecopterans had siphon-feeding apparatus that could have fertilized these early gymnosperms by feeding on their nectar and pollen. The lack of iron enrichment in their fossilized probosces rules out their use for drinking blood. Eleven species have been identified from three families, Mesopsychidae, Aneuretopsychidae, and Pseudopolycentropodidae, within the clade Aneuretopsychina. Their lengths range from in Parapolycentropus burmiticus to in Lichnomesopsyche gloriae. The proboscis could be as long as . It has been suggested that these mecopterans transferred pollen on their mouthparts and head surfaces, as do bee flies and hoverflies today, but no such associated pollen has been found, even when the insects were finely preserved in Eocene Baltic amber. They likely pollinated plants such as Caytoniaceae, Cheirolepidiaceae, and Gnetales, which have ovulate organs that are either poorly suited for wind pollination or have structures that could support long-proboscid fluid feeding. The Aneuretopsychina were the most diverse group of mecopterans from the latest Permian to the Middle Triassic, taking the place of the Permochoristidae. During the Late Triassic through the Middle Jurassic, Aneuretopsychina species were gradually replaced by species from the Parachoristidae and Orthophlebiidae. Modern mecopteran families are derived from the Orthophlebiidae. External relationships Mecoptera have special importance in the evolution of the insects. Two of the most important insect orders, Lepidoptera (butterflies and moths) and Diptera (true flies), along with Trichoptera (caddisflies), probably evolved from ancestors belonging to, or strictly related to, the Mecoptera. Evidence includes anatomical and biochemical similarities as well as transitional fossils, such as Permotanyderus and Choristotanyderus, which lie between the Mecoptera and Diptera. The group was once much more widespread and diverse than it is now, with four suborders during the Mesozoic. It is unclear as of 2020 whether the Mecoptera form a single clade, or whether the Siphonaptera (fleas) are inside that clade, so that the traditional "Mecoptera" taxon is paraphyletic. However the earlier suggestion that the Siphonaptera are sister to the Boreidae is not supported; instead, there is the possibility that they are sister to another mecopteran family, the Nannochoristidae. Two possible trees have accordingly been proposed: (a) Mecoptera paraphyletic, containing the Siphonaptera; and (b) Mecoptera monophyletic, sister to the Siphonaptera. Internal relationships All the families were formerly treated as part of a single order, Mecoptera. The relationships between the families are, however, a matter of debate. The cladogram of Cracraft and Donoghue 2004 places the Nannochoristidae as a separate order, with the Boreidae, sister group to the Siphonaptera, also as its own order. The Eomeropidae is suggested to be the sister group to the rest of the Mecoptera, with the position of the Bittacidae unclear. 
Of those other families, the Meropeidae is the most basal, and the relationships of the rest are not completely clear. Biology Morphology Mecoptera are small to medium-sized insects with long beaklike rostra, membranous wings and slender, elongated bodies. They have relatively simple mouthparts, with a long labium, long mandibles and fleshy palps, which resemble those of the more primitive true flies. Like many other insects, they possess compound eyes on the sides of their heads, and three ocelli on the top. The antennae are filiform (thread-shaped) and contain multiple segments. The fore and hind wings are similar in shape, being long and narrow, with numerous cross-veins, and somewhat resembling those of primitive insects such as mayflies. A few genera, however, have reduced wings, or have lost them altogether. The abdomen is cylindrical with eleven segments, the first of which is fused to the metathorax. The cerci consist of one or two segments. The abdomen typically curves upwards in the male, superficially resembling the tail of a scorpion, the tip containing an enlarged structure called the genital bulb. The caterpillar-like larvae have hard sclerotised heads with mandibles (jaws), short true legs on the thorax, prolegs on the first eight abdominal segments, and a suction disc or pair of hooks on the terminal tenth segment. The pupae have free appendages rather than being secured within a cocoon (they are exarate). Ecology Mecopterans mostly inhabit moist environments although a few species are found in semi-desert habitats. Scorpionflies, family Panorpidae, generally live in broad-leaf woodlands with plentiful damp leaf litter. Snow scorpionflies, family Boreidae, appear in winter and are to be seen on snowfields and on moss; the larvae being able to jump like fleas. Hangingflies, family Bittacidae, occur in forests, grassland and caves with high moisture levels. They mostly breed among mosses, in leaf litter and other moist places, but their reproductive habits have been little studied, and at least one species, Nannochorista philpotti, has aquatic larvae. Adult mecopterans are mostly scavengers, feeding on decaying vegetation and the soft bodies of dead invertebrates. Panorpa raid spider webs to feed on trapped insects and even the spiders themselves, and hangingflies capture flies and moths with their specially modified legs. Some groups consume pollen, nectar, midge larvae, carrion and moss fragments. Most mecopterans live in moist environments; in hotter climates, the adults may therefore be active and visible only for short periods of the year. Mating behaviour Various courtship behaviours have been observed among mecopterans, with males often emitting pheromones to attract mates. The male may provide an edible gift such as a dead insect or a brown salivary secretion to the female. Some boreids have hook-like wings which the male uses to pick up and place the female on his back while copulating. Male panorpids vibrate their wings or even stridulate while approaching a female. Hangingflies (Bittacidae) provide a nuptial meal in the form of a captured insect prey, such as a caterpillar, bug, or fly. The male attracts a female with a pheromone from vesicles on his abdomen; he retracts these once a female is nearby, and presents her with the prey. While she evaluates the gift, he locates her genitalia with his. If she stays to eat the prey, his genitalia attach to hers, and the female lowers herself into an upside-down hanging position, and eats the prey while mating. 
Larger prey result in longer mating times. In Hylobittacus apicalis, prey long give between 1 and 17 minutes of mating. Larger males of that species give prey as big as houseflies, earning up to 29 minutes of mating, maximal sperm transfer, more oviposition, and a refractory period during which the female does not mate with other males: all of these increase the number of offspring the male is likely to have. Life-cycle The female lays the eggs in close contact with moisture, and the eggs typically absorb water and increase in size after deposition. In species that live in hot conditions, the eggs may not hatch for several months, the larvae only emerging when the dry season has finished. More typically, however, they hatch after a relatively short period of time. The larvae are usually quite caterpillar-like, with short, clawed, true legs, and a number of abdominal prolegs. They have sclerotised heads with mandibulate mouthparts. Larvae possess compound eyes, which is unique among holometabolous insects. The tenth abdominal segment bears either a suction disc, or, less commonly, a pair of hooks. They generally eat vegetation or scavenge for dead insects, although some predatory larvae are known. The larva crawls into the soil or decaying wood to pupate, and does not spin a cocoon. The pupae are exarate, meaning the limbs are free of the body, and are able to move their mandibles, but are otherwise entirely nonmotile. In drier environments, they may spend several months in diapause, before emerging as adults once the conditions are more suitable. Interaction with humans Forensic entomology makes use of scorpionflies' habit of feeding on human corpses. In areas where the family Panorpidae occurs, such as the eastern United States, these scorpionflies can be the first insects to arrive at a donated human cadaver, and remain on a corpse for one or two days. The presence of scorpionflies thus indicates that a body must be fresh. Scorpionflies are sometimes described as looking "sinister", particularly from the male's raised "tail" resembling a scorpion's sting. A popular but incorrect belief is that they can sting with their tails.
Biology and health sciences
Insects: General
Animals
1572944
https://en.wikipedia.org/wiki/1%2C4-Dioxane
1,4-Dioxane
1,4-Dioxane (C4H8O2) is a heterocyclic organic compound, classified as an ether. It is a colorless liquid with a faint sweet odor similar to that of diethyl ether. The compound is often called simply dioxane because the other dioxane isomers (1,2- and 1,3-) are rarely encountered. Dioxane is used as a solvent for a variety of practical applications as well as in the laboratory, and also as a stabilizer for the transport of chlorinated hydrocarbons in aluminium containers. Synthesis Dioxane is produced by the acid-catalysed dehydration of diethylene glycol, which in turn is obtained from the hydrolysis of ethylene oxide. In 1985, the global production capacity for dioxane was between 11,000 and 14,000 tons. In 1990, the total U.S. production volume of dioxane was between 5,250 and 9,150 tons. Structure The dioxane molecule is centrosymmetric, meaning that it adopts a chair conformation, typical of relatives of cyclohexane. However, the molecule is conformationally flexible, and the boat conformation is easily adopted, e.g. in the chelation of metal cations. Dioxane resembles a smaller crown ether with only two ethyleneoxyl units. Uses Trichloroethane transport In the 1980s, most of the dioxane produced was used as a stabilizer for 1,1,1-trichloroethane for storage and transport in aluminium containers. Normally aluminium is protected by a passivating oxide layer, but when these layers are disturbed, the metallic aluminium reacts with trichloroethane to give aluminium trichloride, which in turn catalyses the dehydrohalogenation of the remaining trichloroethane to vinylidene chloride and hydrogen chloride. Dioxane "poisons" this catalysis reaction by forming an adduct with aluminium trichloride. As a solvent Dioxane is used in a variety of applications as a versatile aprotic solvent, e.g. for inks, adhesives, and cellulose esters. It is substituted for tetrahydrofuran (THF) in some processes, because of its lower toxicity and higher boiling point (101 °C, versus 66 °C for THF). While diethyl ether is rather insoluble in water, dioxane is miscible and in fact is hygroscopic. At standard pressure, the mixture of water and dioxane in the ratio 17.9:82.1 by mass is a positive azeotrope that boils at 87.6 °C. The oxygen atoms are weakly Lewis-basic. It forms adducts with a variety of Lewis acids. It is classified as a hard base and its base parameters in the ECW model are EB = 1.86 and CB = 1.29. Dioxane produces coordination polymers by linking metal centers. In this way, it is used to drive the Schlenk equilibrium, allowing the synthesis of dialkyl magnesium compounds. Dimethylmagnesium is prepared in this manner: 2 CH3MgBr + C4H8O2 → MgBr2(C4H8O2) + (CH3)2Mg Spectroscopy Dioxane is used as an internal standard for nuclear magnetic resonance spectroscopy in deuterium oxide. Toxicology Safety Dioxane has an LD50 of 5170 mg/kg in rats. It is irritating to the eyes and respiratory tract. Exposure may cause damage to the central nervous system, liver and kidneys. In a 1978 mortality study conducted on workers exposed to 1,4-dioxane, the observed number of deaths from cancer was not significantly different from the expected number. Dioxane is classified by the National Toxicology Program as "reasonably anticipated to be a human carcinogen". It is also classified by the IARC as a Group 2B carcinogen: possibly carcinogenic to humans because it is a known carcinogen in other animals. 
The United States Environmental Protection Agency classifies dioxane as a probable human carcinogen (having observed an increased incidence of cancer in controlled animal studies, but not in epidemiological studies of workers using the compound), and a known irritant (with a no-observed-adverse-effects level of 400 milligrams per cubic meter) at concentrations significantly higher than those found in commercial products. Animal studies in rats suggest that the greatest health risk is associated with inhalation of vapors in the pure form. The State of New York has adopted a first-in-the-nation drinking water standard for 1,4-dioxane, setting the maximum contaminant level at 1 part per billion. Explosion hazard Like some other ethers, dioxane combines with atmospheric oxygen upon prolonged exposure to air to form potentially explosive peroxides. Distillation of these mixtures is dangerous. Storage over metallic sodium could limit the risk of peroxide accumulation. Environment Dioxane tends to concentrate in the water and has little affinity for soil. It is resistant to abiotic degradation in the environment, and was formerly thought to also resist biodegradation. However, more recent studies since the 2000s have found that it can be biodegraded through a number of pathways, suggesting that bioremediation can be used to treat 1,4-dioxane contaminated water. Dioxane has affected groundwater supplies in several areas. Dioxane at the level of 1 μg/L (~1 ppb) has been detected in many locations in the US. In the U.S. state of New Hampshire, it had been found at 67 sites in 2010, ranging in concentration from 2 ppb to over 11,000 ppb. Thirty of these sites are solid waste landfills, most of which have been closed for years. In 2019, the Southern Environmental Law Center successfully sued Greensboro, North Carolina's wastewater treatment plant after 1,4-dioxane was found in the Haw River at 20 times above EPA safe levels. Cosmetics As a byproduct of the ethoxylation process, a route to some ingredients found in cleansing and moisturizing products, dioxane can contaminate cosmetics and personal care products such as deodorants, perfumes, shampoos, toothpastes and mouthwashes. The ethoxylation process makes the cleansing agents, such as sodium laureth sulfate and ammonium laureth sulfate, less abrasive and offers enhanced foaming characteristics. 1,4-Dioxane is found in small amounts in some cosmetics; it remains unregulated as a cosmetics ingredient in both China and the U.S. Research has found the chemical in ethoxylated raw ingredients and in off-the-shelf cosmetic products. The Environmental Working Group (EWG) found that 97% of hair relaxers, 57% of baby soaps and 22% of all products in Skin Deep, their database for cosmetic products, are contaminated with 1,4-dioxane. Since 1979 the U.S. Food and Drug Administration (FDA) has conducted tests on cosmetic raw materials and finished products for the levels of 1,4-dioxane. 1,4-Dioxane was present in ethoxylated raw ingredients at levels up to 1410 ppm (~0.14%wt), and at levels up to 279 ppm (~0.03%wt) in off-the-shelf cosmetic products. Levels of 1,4-dioxane exceeding 85 ppm (~0.01%wt) in children's shampoos indicate that close monitoring of raw materials and finished products is warranted. While the FDA encourages manufacturers to remove 1,4-dioxane, it is not required by federal law. On 9 December 2019, New York passed a bill to ban the sale of cosmetics with more than 10 ppm of 1,4-dioxane as of the end of 2022. 
The law will also prevent the sale of household cleaning and personal care products containing more than 2 ppm of 1,4-dioxane at the end of 2022.
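As a quick arithmetic check of the ppm-to-weight-percent equivalences quoted above (ppm here is by mass, so 1 ppm corresponds to 0.0001% by weight), the following short Python sketch reproduces them; the function name is invented for illustration:

# Convert mass-based ppm levels to weight percent.
def ppm_to_wt_percent(ppm: float) -> float:
    return ppm / 1e6 * 100  # 1 ppm (by mass) = 1e-4 wt%

for ppm in (1410, 279, 85, 10, 2):
    print(f"{ppm} ppm = {ppm_to_wt_percent(ppm):.4f} wt%")
# 1410 ppm ≈ 0.141 wt%, 279 ppm ≈ 0.028 wt%, 85 ppm ≈ 0.0085 wt%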
Physical sciences
Esters and ethers
Chemistry
1573393
https://en.wikipedia.org/wiki/Conversion%20%28chemistry%29
Conversion (chemistry)
Conversion and its related terms yield and selectivity are important terms in chemical reaction engineering. They are described as ratios of how much of a reactant has reacted (X — conversion, normally between zero and one), how much of a desired product was formed (Y — yield, normally also between zero and one) and how much desired product was formed in ratio to the undesired product(s) (S — selectivity). There are conflicting definitions in the literature for selectivity and yield, so each author's intended definition should be verified. Conversion can be defined for (semi-)batch and continuous reactors and as instantaneous and overall conversion. Assumptions The following assumptions are made: The following chemical reaction takes place: , where and are the stoichiometric coefficients. For multiple parallel reactions, the definitions can also be applied, either per reaction or using the limiting reaction. Batch reaction assumes all reactants are added at the beginning. Semi-Batch reaction assumes some reactants are added at the beginning and the rest fed during the batch. Continuous reaction assumes reactants are fed and products leave the reactor continuously and in steady state. Conversion Conversion can be separated into instantaneous conversion and overall conversion. For continuous processes the two are the same, for batch and semi-batch there are important differences. Furthermore, for multiple reactants, conversion can be defined overall or per reactant. Instantaneous conversion Semi-batch In this setting there are different definitions. One definition regards the instantaneous conversion as the ratio of the instantaneously converted amount to the amount fed at any point in time: . with as the change of moles with time of species i. This ratio can become larger than 1. It can be used to indicate whether reservoirs are built up and it is ideally close to 1. When the feed stops, its value is not defined. In semi-batch polymerisation, the instantaneous conversion is defined as the total mass of polymer divided by the total mass of monomer fed: . Overall conversion Batch (This is the generally stated form) Semi-batch Total conversion of the formulation: Total conversion of the fed reactants: Continuous (This is the generally stated form) Yield Yield in general refers to the amount of a specific product (p in 1..m) formed per mole of reactant consumed (Definition 1). However, it is also defined as the amount of product produced per amount of product that could be produced (Definition 2). If not all of the limiting reactant has reacted, the two definitions contradict each other. Combining those two also means that stoichiometry needs to be taken into account and that yield has to be based on the limiting reactant (k in 1..n): Continuous The version normally found in the literature: Selectivity Instantaneous selectivity is the production rate of one component per production rate of another component. For overall selectivity the same problem of the conflicting definitions exists. Generally, it is defined as the number of moles of desired product per the number of moles of undesired product (Definition 1). However, the definitions of the total amount of reactant to form a product per total amount of reactant consumed is used (Definition 2) as well as the total amount of desired product formed per total amount of limiting reactant consumed (Definition 3). This last definition is the same as definition 1 for yield. 
Batch or semi-batch The version normally found in the literature: $S_{p,k} = \frac{n_p - n_{p,0}}{n_{k,0} - n_k} \cdot \frac{|\nu_k|}{\nu_p}$. Continuous The version normally found in the literature: $S_{p,k} = \frac{\dot{n}_p - \dot{n}_{p,0}}{\dot{n}_{k,0} - \dot{n}_k} \cdot \frac{|\nu_k|}{\nu_p}$. Combination For batch and continuous reactors (semi-batch needs to be checked more carefully) and the definitions marked as the ones generally found in the literature, the three concepts can be combined: $Y_{p,k} = X_k \cdot S_{p,k}$. For a process with the single reaction A → B, this means that S = 1 and Y = X. Abstract example For the following abstract example, the calculation below can be performed with the above definitions, either in a batch or a continuous reactor. The two parallel reactions A → B and A → C take place, and B is the desired product. There are 100 mol of A at the beginning or at the entry to the continuous reactor, and 10 mol A, 72 mol B and 18 mol C at the end of the reaction or at the exit of the continuous reactor. The three properties are found to be: $X_A = \frac{100 - 10}{100} = 0.90$, $S_B = \frac{72}{100 - 10} = 0.80$ and $Y_B = \frac{72}{100} = 0.72$. The property $Y = X \cdot S$ holds: $0.90 \times 0.80 = 0.72$. In this reaction, 90% of substance A is converted (consumed), but only 80% of that 90% is converted to the desired substance B, and 20% to the undesired by-product C. So, the conversion of A is 90%, the selectivity for B 80% and the yield of substance B 72%.
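To make the three ratios concrete, the short Python sketch below recomputes the abstract example above. It assumes unit stoichiometric coefficients (so the $|\nu_k|/\nu_p$ factors drop out) and uses helper names of our own choosing; it illustrates the definitions marked as the ones generally found in the literature, and is not part of the original article.

```python
# Conversion (X), selectivity (S) and yield (Y) for the parallel
# reactions A -> B and A -> C, with B as the desired product.
# Unit stoichiometric coefficients are assumed throughout.

def conversion(n_a0: float, n_a: float) -> float:
    """Fraction of reactant A consumed: X = (n_A0 - n_A) / n_A0."""
    return (n_a0 - n_a) / n_a0

def selectivity(n_b: float, n_a0: float, n_a: float) -> float:
    """Moles of desired product B formed per mole of A consumed."""
    return n_b / (n_a0 - n_a)

def product_yield(n_b: float, n_a0: float) -> float:
    """Moles of desired product B formed per mole of A fed."""
    return n_b / n_a0

# Amounts from the example: 100 mol A fed; 10 mol A, 72 mol B, 18 mol C remain.
n_a0, n_a, n_b = 100.0, 10.0, 72.0

X = conversion(n_a0, n_a)        # 0.90
S = selectivity(n_b, n_a0, n_a)  # 0.80
Y = product_yield(n_b, n_a0)     # 0.72

assert abs(Y - X * S) < 1e-12    # the combination Y = X * S holds
print(f"X = {X:.2f}, S = {S:.2f}, Y = {Y:.2f}")
```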
Physical sciences
Reaction
Chemistry
1574190
https://en.wikipedia.org/wiki/Ornithomimosauria
Ornithomimosauria
Ornithomimosauria ("bird-mimic lizards") are theropod dinosaurs which bore a superficial resemblance to the modern-day ostrich. They were fast, omnivorous or herbivorous dinosaurs from the Cretaceous Period of Laurasia (now Asia, Europe and North America), as well as Africa and possibly Australia. The group first appeared in the Early Cretaceous and persisted until the Late Cretaceous. Primitive members of the group include Nqwebasaurus, Pelecanimimus, Shenzhousaurus, Hexing and Deinocheirus, the arms of which reached 2.4 m (8 feet) in length. More advanced species, members of the family Ornithomimidae, include Gallimimus, Struthiomimus, and Ornithomimus. Some paleontologists, like Paul Sereno, consider the enigmatic alvarezsaurids to be close relatives of the ornithomimosaurs and place them together in the superfamily Ornithomimoidea (see classification below). Description The skulls of ornithomimosaurs were small, with large eyes, above relatively long and slender necks. The most basal members of the taxon (such as Pelecanimimus and Harpymimus) had a jaw with small teeth, while the later and more derived species had a toothless beak. The fore limbs ("arms") were long and slender and bore powerful claws. The hind limbs were long and powerful, with a long foot and short, strong toes terminating in hooflike claws. Ornithomimosaurs were probably among the fastest of all dinosaurs. Like other coelurosaurs, the ornithomimosaurian hide was feathered rather than scaly. Feathers Unambiguous evidence of feathers is known from Ornithomimus edmontonicus, of which there are multiple specimens preserving feather traces. Deinocheirus and Pelecanimimus have been speculated to be feathered as well, the former due to the presence of a pygostyle, and the later due to possible impressions (otherwise taken to be collagen fibers). There is a debate on whether ornithomimids possessed the pennaceous feathers seen in Pennaraptora. Otherwise, a very ostrich-like plumage and feather range is known in one specimen of Ornithomimus. Classification Named by O.C. Marsh in 1890, the family Ornithomimidae was originally classified as a group of "megalosaurs" (a "wastebasket taxon" containing any medium to large sized theropod dinosaurs), but as more theropod diversity was uncovered, their true relationships to other theropods started to resolve, and they were moved to the Coelurosauria. Recognizing the distinctiveness of ornithomimids compared to other dinosaurs, Rinchen Barsbold placed ornithomimids within their own infraorder, Ornithomimosauria, in 1976. The contents of Ornithomimidae and Ornithomimosauria varied from author to author as cladistic definitions began to appear for the groups in the 1990s. In the early 1990s, prominent paleontologists such as Thomas R. Holtz Jr. proposed a close relationship between theropods with an arctometatarsalian foot; that is, bipedal dinosaurs in which the upper foot bones were 'pinched' together, an adaptation for running. Holtz (1994) defined the clade Arctometatarsalia as "the first theropod to develop the arctometatarsalian pes and all of its descendants." This group included the Troodontidae, Tyrannosauroidea, and Ornithomimosauria. Holtz (1996, 2000) later refined this definition to the branch-based "Ornithomimus and all theropods sharing a more recent common ancestor with Ornithomimus than with birds." 
Subsequently, the idea that all arctometatarsalian dinosaurs formed a natural group was abandoned by most paleontologists, including Holtz, as studies began to demonstrate that tyrannosaurids and troodontids were more closely related to other groups of coelurosaurs than they were to ornithomimosaurs. Since the strict definition of Arctometatarsalia was based on Ornithomimus, it became redundant with the name Ornithomimosauria under broad definitions of that clade, and the name Arctometatarsalia was mostly abandoned. The paleontologist Paul Sereno, in 2005, proposed the clade "Ornithomimiformes", defining them as all species closer to Ornithomimus edmontonicus than to Passer domesticus. Because he had redefined Ornithomimosauria in a much narrower sense, a new term was made necessary within his preferred terminology to denote the clade containing the sister groups Ornithomimosauria and Alvarezsauridae — previously the latter had been contained within the former. However, this concept only appeared on Sereno's website and has not yet been officially published as a valid name. "Ornithomimiformes" was identical in content to Holtz's Arctometatarsalia, as it has a very similar definition. While "Ornithomimiformes" is the newer group, Sereno rejected the idea that Arctometatarsalia should take precedence, because the meaning of Arctometatarsalia had been changed very radically by Holtz. Phylogeny Ornithomimosauria has variously been used for the branch-based group of all dinosaurs closer to Ornithomimus than to birds, and in more restrictive senses. The more exclusive sense began to grow in popularity when the possibility arose that alvarezsaurids might fall under Ornithomimosauria if an inclusive definition were adopted. Another clade, Ornithomimiformes, was defined by Sereno (2005) as (Ornithomimus velox > Passer domesticus) and replaces the more inclusive use of Ornithomimosauria when alvarezsaurids or some other group are found to be closer relatives of ornithomimosaurs than maniraptorans, with Ornithomimosauria redefined to include dinosaurs closer to Ornithomimus than to alvarezsaurids. Gregory S. Paul has proposed that Ornithomimosauria might be a group of primitive, flightless birds, more advanced than Deinonychosauria and Oviraptorosauria. One influential analysis was published in 2014 by Yuong-Nam Lee, Rinchen Barsbold, Philip J. Currie, Yoshitsugu Kobayashi, Hang-Jae Lee, Pascal Godefroit, François Escuillié & Tsogtbaatar Chinzorig, and includes many ornithomimosaurian taxa. A second analysis, by Scott Hartman and colleagues in 2019, included a vast majority of known species and several uncertain specimens, resulting in a novel phylogenetic arrangement. A third, by Serrano-Brañas et al., 2020, is more in line with previous assumptions about ornithomimosaur classification. Palaeobiology Ornithomimosaurs probably acquired most of their calories from plants. Many ornithomimosaurs, including primitive species, have been found with numerous gastroliths in their stomachs, characteristic of herbivores. Henry Fairfield Osborn suggested that the long, sloth-like "arms" of ornithomimosaurs may have been used to pull down branches on which to feed, an idea supported by further study of their strange, hook-like hands.
The sheer abundance of ornithomimids — they are the most common small dinosaurs in North America — is consistent with the idea that they were plant eaters, as herbivores usually outnumber carnivores in an ecosystem. However, they may have been omnivores that ate both plants and small animal prey. Comparisons between the scleral rings of two ornithomimosaur genera (Garudimimus and Ornithomimus) and modern birds and reptiles indicate that they may have been cathemeral, active throughout the day at short intervals. Social behavior Ornithomimosaurs are fairly well known for their gregarious lifestyles. Some of the first findings of ornithomimosaur bonebeds were reported from the Iren Dabasu Formation in 1933 by Charles W. Gilmore. The bonebed consisted of numerous individuals of Archaeornithomimus ranging from young to adult remains. Multiple specimens of Sinornithomimus were collected from a single monospecific bonebed with a considerable density of juvenile individuals (11 of the 14 were juveniles), suggesting gregarious behavior for increased protection from predators. The notable abundance of juveniles indicates either high juvenile mortality or a mass-mortality event that struck an entire group, with juveniles being especially susceptible. Additionally, the increase in the tibia-femur ratio through the ontogeny of Sinornithomimus may indicate higher cursorial capacities in adults than in juveniles. In contrast to the Sinornithomimus bonebed, a high concentration of ornithomimosaur specimens from the Bayshi Tsav locality was collected in a single multitaxic bonebed that is composed of at least five individuals at different ontogenetic stages. It is unlikely that the individuals of this bonebed represent the social strategy of a single species, given the identification of at least two different taxa. Under this consideration, it is possible that a small pack of more than 10 individuals from different ornithomimosaurian herds was travelling together through optimal areas in search of food, nesting sites or other resources. Palaeopathology A right second metatarsal belonging to a large-bodied ornithomimosaur weighing approximately 432 kg has been described from Mississippi with a "butterfly" fragment fracture pattern characteristic of blunt force trauma, likely the result of an interaction with a predator or a violent bout of intraspecific competition.
Biology and health sciences
Theropods
Animals
1574221
https://en.wikipedia.org/wiki/Tanystropheus
Tanystropheus
Tanystropheus (from Greek tany- 'long' + stropheus 'hinged') is an extinct genus of archosauromorph reptile which lived during the Triassic Period in Europe, Asia, and North America. It is recognisable by its extremely elongated neck, longer than the torso and tail combined. The neck was composed of 13 vertebrae strengthened by extensive cervical ribs. Tanystropheus is one of the most thoroughly described non-archosauriform archosauromorphs, known from numerous fossils, including nearly complete skeletons. Some species within the genus may have reached a total length of five to six meters (16 to 20 feet), making Tanystropheus the longest non-archosauriform archosauromorph as well. Tanystropheus is the namesake of the family Tanystropheidae, a clade uniting many long-necked Triassic archosauromorphs previously described as "protorosaurs" or "prolacertiforms". Tanystropheus contains at least two valid species as well as fossils which cannot be referred to a specific species. The type species of Tanystropheus is T. conspicuus, a dubious name applied to particularly large fossils from Germany and Poland. Complete skeletons are common in the Besano Formation at Monte San Giorgio, on the border of Italy and Switzerland. Monte San Giorgio fossils belong to two species: the smaller T. longobardicus and the larger T. hydroides. These two species were formally differentiated in 2020, primarily on the basis of their strongly divergent skull anatomy. When T. longobardicus was first described in 1886, it was initially mistaken for a pterosaur and given the name "Tribelesodon". Starting in the 1920s, systematic excavations at Monte San Giorgio unearthed many more Tanystropheus fossils, revealing that the putative wing bones of "Tribelesodon" were actually neck vertebrae. Most Tanystropheus fossils hail from marine or coastal deposits of the Middle Triassic epoch (Anisian and Ladinian stages), with some exceptions. For example, a vertebra from Nova Scotia was recovered from primarily freshwater sediments. The youngest fossils in the genus are a pair of well-preserved skeletons from the Zhuganpo Formation, a geological unit in China which dates to the earliest part of the Late Triassic (early Carnian stage). The oldest putative fossils belong to "T. antiquus", a European species from the latest part of the Early Triassic (late Olenekian stage). T. antiquus had a proportionally shorter neck than other Tanystropheus species, so some paleontologists consider that it deserves a separate genus, Protanystropheus. The lifestyle of Tanystropheus has been the subject of much debate. Tanystropheus is unknown from drier environments and its neck is rather stiff and ungainly, suggesting a reliance on water. Conversely, the limbs and tail lack most adaptations for swimming and closely resemble their equivalents in terrestrial reptiles. Recent studies have supported an intermediate position, reconstructing Tanystropheus as an animal equally capable on land and in the water. Despite its length, the neck was lightweight and stabilized by tendons, so it would not have been a fatal hindrance to terrestrial locomotion. The hindlimbs and the base of the tail were large and muscular, capable of short bursts of active swimming in shallow water. Tanystropheus was most likely a piscivorous ambush predator: the narrow subtriangular skull of T. longobardicus is supplied with three-cusped teeth suited for holding onto slippery prey, while the broader skull of T. hydroides bears an interlocking set of large curved fangs similar to those of the fully aquatic plesiosaurs.
History and species Monte San Giorgio species 19th-century excavations at Monte San Giorgio, a UNESCO World Heritage Site on the Italy-Switzerland border, revealed a fragmentary fossil of an animal with three-cusped (tricuspid) teeth and elongated bones. Monte San Giorgio preserves the Besano Formation (also known as the Grenzbitumenzone), a late Anisian-early Ladinian Lagerstätte recognised for its spectacular fossils. In 1886, Francesco Bassani interpreted the unusual tricuspid fossil as a pterosaur, which he named Tribelesodon longobardicus. The holotype specimen of Tribelesodon longobardicus was stored in the Museo Civico di Storia Naturale di Milano (Natural History Museum of Milan), and was destroyed by the Allied bombing of Milan in World War II. Excavations by University of Zürich paleontologist Bernhard Peyer in the late 1920s and 1930s revealed many more complete fossils of the species from Monte San Giorgio. Peyer's discoveries allowed Tribelesodon longobardicus to be recognised as a non-flying reptile, more than 40 years after its original description. Its supposed elongated finger bones were recognized as neck vertebrae, which compared favorably with those previously described as Tanystropheus from Germany and Poland. Thus, Tribelesodon longobardicus was renamed Tanystropheus longobardicus and its anatomy was reinterpreted as that of a long-necked, non-pterosaurian reptile. Specimen PIMUZ T 2791, which was discovered in 1929, has been designated as the neotype of the species. Well-preserved T. longobardicus fossils continue to be recovered from Monte San Giorgio up to the present day. Fossils from the mountain are primarily stored at the rebuilt Museo Civico di Storia Naturale di Milano (MSNM), the Paleontological Museum of Zürich (PIMUZ), and the Museo Cantonale di Scienze Naturali di Lugano (MCSN). Rupert Wild reviewed and redescribed all specimens known at the time in several large monographs published in 1973/74 and 1980. In 2005, Silvio Renesto described a T. longobardicus specimen from Switzerland which preserved impressions of skin and other soft tissue. Five new MSNM specimens of T. longobardicus were described by Stefania Nosotti in 2007, allowing for a more comprehensive view of the species' anatomy. A small but well-preserved skull and neck, specimen PIMUZ T 3901, was found in the slightly younger Meride Limestone at Monte San Giorgio. Wild (1980) assigned it to a new species, T. meridensis, based on a set of skull and vertebral traits proposed to differ from T. longobardicus. Later reinvestigations failed to confirm the validity of these differences, rendering T. meridensis a junior synonym of T. longobardicus. A 2019 revision of Tanystropheus found that T. longobardicus and T. antiquus were the only valid species in the genus. Tanystropheus specimens from Monte San Giorgio have long been segregated into two morphotypes based on their tooth structure. Smaller specimens bear tricuspid teeth at the back of the jaw while larger specimens have a set of single-pointed fangs. The two morphotypes were originally considered to represent juvenile and adult specimens of T. longobardicus, though many studies have supported the hypothesis that they represent separate species. A 2020 study found numerous differences between the skulls of large and small specimens, formalizing the proposal to divide the two into separate species. Moreover, a histological investigation revealed that one small specimen, PIMUZ T 1277, was a skeletally mature adult at a length of only 1.5 meters (4.9 ft).
The larger one-cusped morphotype was named as a new species, Tanystropheus hydroides (referencing the Hydra of Greek mythology), while the smaller tricuspid morphotype retains the name T. longobardicus. Polish and German species The first Tanystropheus specimens to be described were found in the mid-19th century. They included eight large vertebrae from the Upper Muschelkalk of Germany, and a partial skeleton from the Lower Keuper of Poland. These geological units occupy part of the Middle Triassic, from the latest Anisian to middle Ladinian stages. Though the fossils were initially given the name Macroscelosaurus by Count Georg zu Münster, the publication containing this name is lost and the genus is considered a nomen oblitum. In 1855, Hermann von Meyer supplied the name Tanystropheus conspicuus, the type species of Tanystropheus, to the fossils. The fossils were later regarded as undiagnostic relative to other Tanystropheus species, rendering T. conspicuus a nomen dubium possibly synonymous with T. hydroides. Over 500 "Tanystropheus conspicuus" specimens have been recovered from a Lower Keuper bonebed near the Silesian village of Miedary. This is the largest known concentration of Tanystropheus fossils, more than double the number collected from Monte San Giorgio. Though the Miedary specimens are individually limited to isolated postcranial bones, they are preserved in three dimensions and show great potential for elucidating the morphology of the genus. The Miedary locality represents an isolated brackish body of water close to the coast, and the abundance of Tanystropheus fossils suggests that it was an animal well-suited for this kind of habitat. In the early 20th century, Friedrich von Huene named several dubious Tanystropheus species from Germany and Poland. T. posthumus, from the Norian of Germany, was later reevaluated as an indeterminate theropod vertebra and a nomen dubium. Several more von Huene species, including "Procerosaurus cruralis", "Thecodontosaurus latespinatus", and "Thecodontosaurus primus", have been reconsidered as indeterminate material of Tanystropheus or other archosauromorphs. One of von Huene's species appears to be valid: T. antiquus, from the Gogolin Formation of Poland, was based on cervical vertebrae which were proportionally shorter than those of other Tanystropheus species. Long considered destroyed in World War II, several T. antiquus fossils were rediscovered in the late 2010s. The proportions of T. antiquus fossils are easily distinguishable, and it is currently considered a valid species of archosauromorph, though its referral to the genus Tanystropheus has been questioned. The Gogolin Formation ranges from the upper Olenekian (latest part of the Early Triassic) to the lower Anisian in age. Assuming they belong within Tanystropheus, the fossils of T. antiquus may be the oldest in the genus. Specimens likely referable to T. antiquus are also known from throughout Germany and the fossiliferous Winterswijk site in the Netherlands. Other Tanystropheus fossils In the 1880s, E.D. Cope named three supposed new Tanystropheus species (T. bauri, T. willistoni, and T. longicollis) from the Late Triassic Chinle Formation in New Mexico. However, these fossils were later determined to be tail vertebrae belonging to theropod dinosaurs, which were named under the new genus Coelophysis. Authentic Tanystropheus specimens from the Makhtesh Ramon in Israel were described as a new species, T. haasi, in 2001.
However, this species may be dubious due to the difficulty of distinguishing its vertebrae from those of T. conspicuus or T. longobardicus. Another new species, T. biharicus, was described from Romania in 1975. It has also been considered possibly synonymous with T. longobardicus. A Tanystropheus-like vertebra from the middle Ladinian Erfurt Formation (Lettenkeuper) of Germany was described in 1846 as one of several fossils gathered under the name "Zanclodon laevis". Though likely the first Tanystropheus fossil to be discovered, the vertebra is now lost, and the surviving jaw fragments and other fossil scraps of "Zanclodon laevis" represent indeterminate archosauriforms with no relation to Tanystropheus. Tanystropheus vertebrae have also been found in the Villány Mountains of Hungary. The best-preserved Tanystropheus fossils outside of Monte San Giorgio come from the Guizhou province of China, as described by Li (2007) and Rieppel (2010). They are also among the youngest and easternmost fossils in the genus, hailing from the upper Ladinian or lower Carnian Zhuganpo Formation. Although the postcranial skeleton is complete and indistinguishable from the fossils of Monte San Giorgio, no skull material is preserved, and their younger age precludes unambiguous placement into any Tanystropheus species. The Chinese material includes a large morphotype (T. hydroides?) specimen, GMPKU-P-1527, and an indeterminate juvenile skeleton, IVPP V 14472. Indeterminate Tanystropheus remains are also known from the Jilh Formation of Saudi Arabia and various Anisian-Ladinian sites in Spain, France, Italy, and Switzerland. The youngest Tanystropheus fossil in Europe is a vertebra from the lower Carnian Fusea site in Friuli, Italy. In 2015, a large Tanystropheus cervical vertebra was described from the Economy Member of the Wolfville Formation, in the Bay of Fundy of Nova Scotia, Canada. The Wolfville Formation spans the Anisian to Carnian stages, and the Economy Member is likely Middle Triassic (Anisian-Ladinian) in age. It is a rare example of predominantly freshwater strata preserving Tanystropheus fossils. Tanystropheus-like tanystropheid fossils are known from another freshwater formation in North America: the Anisian-age Moenkopi Formation of Arizona and New Mexico. Several new tanystropheid genera have been named from former Tanystropheus fossils. Fossils from the Anisian Röt Formation in Germany, previously referred to Tanystropheus antiquus, were named as a new genus and species in 2006: Amotosaurus rotfeldensis. In 2011, fossils from the Lipovskaya Formation of Russia were given the new genus and species Augustaburiania vatagini by A.G. Sennikov. He also named the new genus Protanystropheus for T. antiquus, though a few studies continued to retain that species within Tanystropheus. Tanystropheus fossai, from the Norian-age Argillite di Riva di Solto in Italy, was given its own genus Sclerostropheus in 2019. Anatomy Tanystropheus was one of the longest known non-archosauriform archosauromorphs. Vertebrae referred to "T. conspicuus" may correspond to an animal up to five or six meters (16.4 to 20 feet) in length. T. hydroides was around the same size, with the largest specimens at an estimated length of 5.25 meters (17.2 feet). T. longobardicus was significantly smaller, with an absolute maximum size of two meters (6.6 feet).
Despite the large size of some Tanystropheus species, the animal was lightly built. One mass estimate used crocodiles as a density guideline for a 3.6-meter-long (11.8 ft) model of a Tanystropheus skeleton. For a Tanystropheus individual of that length, the weight estimate varied between 32.9 kg (72.5 lbs) and 74.8 kg (164.9 lbs), depending on the volume estimation method. This was significantly lighter than crocodiles of the same length, and more similar to large lizards. Skull of Tanystropheus longobardicus The skull of Tanystropheus longobardicus is roughly triangular when seen from the side and top, narrowing towards the snout. Each premaxilla (the toothed bone at the tip of the snout) has a long tooth row, with six teeth. The premaxillary teeth are conical, fluted by longitudinal ridges, and have subthecodont implantation, meaning that the inner wall of each tooth socket is lower than the outer wall. The premaxilla meets the maxilla (the succeeding toothed bone) along a long, slanted contact. This shape is produced by an elongated postnarial process (rear prong) of the premaxilla, which extends below and behind the nares (nostril holes). The nasals (bones at the top edge of the snout) are poorly known, but were likely narrow and flat. A 2020 reinvestigation revealed that the front part of the nasals and the inner spur of the premaxillae are too short to keep the nares divided. This leaves a single central narial opening for the nostrils, opening upwards. An undivided naris is seen in a few other archosauromorphs, namely rhynchosaurs, most allokotosaurs, modern crocodilians, and Teyujagua. The maxilla is triangular, reaching its maximum height at mid-length and tapering to the front and rear. There are up to 14 or 15 teeth in the maxilla, though some individuals have fewer. T. longobardicus is a reptile with heterodont dentition, meaning that it had more than one type of tooth shape. In contrast to the simple fang-like premaxillary teeth, most or all of the maxillary teeth have a distinctive tricuspid shape, with the crown split into three stout triangular cusps (points). The cusps are arranged in a line from front to back, with the central cusp larger than the other two. Among Triassic reptiles, early pterosaurs such as Eudimorphodon developed an equivalent tooth shape, and tricuspid teeth can also be found in a few modern lizard species. Some individuals of T. longobardicus have tricuspid teeth along their entire maxilla, while in others up to seven maxillary teeth are single-cusped fangs similar to the premaxillary teeth. The front edge of each orbit (eye socket) is marked by two bones: the prefrontal and lacrimal. The prefrontal is tall and projects a low vertical ridge in front of the orbit. The small, sliver-shaped lacrimal is nestled further down along the maxilla. The lower edge of the orbit is formed by the jugal, a bone with a slender anterior process (front branch) and a somewhat broader dorsal process (upper branch). There is also a very short pointed posterior process (rear branch) which ends freely and fails to contact any other bone. The shape of the jugal in Tanystropheus is typical for early archosauromorphs; the underdeveloped posterior process indicates that the margin of the infratemporal fenestra (lower skull hole behind the eye) was incomplete and open from below. The postorbital bone, which links the jugal to the top of the skull, was tall and roughly boomerang-shaped, though poor preservation obscures some details.
The squamosal bone, which extends behind the postorbital, is also poorly known in T. longobardicus, and many supposed squamosal fossils in the species have been reinterpreted as displaced postorbitals. The quadrate bone, which forms the rear edge of the skull and upper half of the jaw joint, is wide and tall. It has a strong lateral crest and a low pterygoid ramus (a vertical internal plate, articulating with the pterygoid bone in the roof of the mouth). No fossils of T. longobardicus preserve a quadratojugal, a bone which normally lies along the quadrate at the rear lower corner of the skull. Nevertheless, a quadratojugal was likely present in the species, since it occurs in T. hydroides and nearly every other early archosauromorph. The paired frontals (skull roof bones above the orbits) have been described as "axe-shaped flanges", projecting broad curved plates above each orbit. Together, the frontals are narrowest at the front, terminating at a three-lobed contact with the nasals. The sutures between the frontals and their neighboring bones are coarse and interdigitating (interlocking). A small triangular bone, the postfrontal, wedges behind the rear outer corner of each frontal. A pair of larger plate-like bones, the parietals, sit directly behind the frontals on the skull roof. In T. longobardicus, the parietals are fairly broad and flat, with a shallowly concave outer edge. Like the frontals, the paired parietals are seemingly separate bones, unfused to each other in every member of the species. A large hole, the pineal foramen (sometimes called the parietal foramen), is present at the midline of the skull between the front part of each parietal. When seen from below, a pair of curved crests along the frontals and parietals mark the edge of the forebrain, as defined by a bulbous central hollow. The eye was supported by more than 10 rectangular ossicles (tiny plate-like bones) connecting into a scleral ring, though a full reconstruction of the ring, with 18 ossicles, is conjectural. Few details of the braincase and palate (bony roof of the mouth) are known for T. longobardicus. The scant available evidence suggests that these regions of the skull are rather unspecialized in this species. The vomers (front components of the palate) are narrow and dotted with at least nine tiny teeth. The succeeding palatine and pterygoid bones are also supplied with rows of teeth: up to six relatively large teeth in the former and at least 12 small teeth in the latter. Teeth on the vomers, palatines, and pterygoids are the norm for early archosauromorphs and reptiles as a whole. The lower jaw is slender, and most of its length is devoted to the toothed dentary bone. The dentary is downturned at its tip and its outer surface is dotted with a row of prominent foramina (blood vessel pits). There are up to 19 teeth in the dentary. Most commonly, the first six teeth are prominent conical fangs, akin to the premaxilla, while the remainder are small and tricuspid, akin to the maxilla. There is some variation in the number of each tooth shape, and some individuals may have up to 11 conical teeth. The inner surface of the dentary is joined by a splint-shaped bone, the splenial, at its lower edge. The splenial was most likely not visible in lateral view. At its rear, the dentary seems to be partially overlapped by the surangular, a bone which comprises much of the rear part of the jaw. 
Although it is plausible that a small coronoid bone could be present in front of the surangular, evidence is ambiguous at best for all Tanystropheus species. A sheath-like bone, the angular, is well-exposed under the dentary and surangular, though sutures between these bones are difficult to interpret with certainty. The joint at the back of the jaw lies on the articular, a lumpy rectangular bone which is floored and reinforced by a similar bone: the prearticular. In Tanystropheus species with known skull material, both the articular and prearticular contribute equally to a segment of the jaw extending back beyond the level of the jaw joint. This projection, known as a retroarticular process, is enlarged to a similar degree to that of early rhynchosaurs. Skull of Tanystropheus hydroides The skull of Tanystropheus hydroides is broader and flatter than that of T. longobardicus. The first five of the six premaxillary teeth are very large and fang-like, forming an interlocking "fish trap" similar to Dinocephalosaurus and many sauropterygians such as plesiosaurs and nothosaurs. All teeth in the skull have a single cusp which is sharp, curved, and unserrated. They have an oval-shaped cross section and shallow subthecodont implantation. Like T. longobardicus, T. hydroides has a single central narial opening. Unlike T. longobardicus, T. hydroides has a nearly vertical rear edge of the premaxilla, without a postnarial process. The maxilla is low, with a large and rectangular front portion. There is a perforation near the front of the bone, which would have been penetrated by the tenth dentary tooth when the mouth was closed. Towards the rear, the maxilla develops a concave edge overlooking a long and slender posterior process (rear branch) that projects under the rounded orbit. There are 15 teeth in the maxilla, increasing in size up to the eighth tooth, which is about as large as the premaxillary fangs. T. hydroides is not known to possess a septomaxilla, a neomorphic bone at the rear tip of the naris in some reptiles. The nasals are broad and plate-like, with a depressed central portion. The lacrimal and prefrontal, though incompletely known, were likely similar to those of T. longobardicus. T. hydroides has a particularly large nasolacrimal duct, a tubular channel opening out of the rear of the lacrimal. The frontals are quite wide and form much of the upper edge of the orbit, a condition akin to T. longobardicus. However, the paired frontals meet along a straight suture with a low ridge on the lower (internal) surface, in contrast to T. longobardicus, where the frontals meet at an interdigitating suture with a broad furrow on the underside. The parietals are strongly modified in T. hydroides. They are fused into a single X-shaped bone, somewhat resembling the parietals of erythrosuchids. This shape may have resulted from fusion between the parietals' anterolateral processes (front branches) and the postfrontals, which are separate bones in T. longobardicus but not apparent in T. hydroides. A prominent pineal foramen is positioned near the straight contact with the frontals, one of the few similarities with T. longobardicus. Strong supratemporal fossae excavate into the outer edge of the parietal and define a low sagittal crest along the midline of the skull. This trend is shared with other large archosauromorphs, like Dinocephalosaurus and Azendohsaurus. The supratemporal fenestrae (upper skull holes behind the eye) are wide and semi-triangular, exposed almost entirely from above.
The postorbital has large and blocky ventral and medial processes (lower and inward branches), which meet at a sharper angle than in any other early archosauromorph. The jugal, conversely, is essentially indistinguishable from that of T. longobardicus. The squamosal is deep and rectangular when viewed from the side, with little differentiation between the tall suture with the postorbital and the small suture with the quadratojugal. As a result, most of the posterior skull is clustered together, and the infratemporal fenestra is reduced to a small diagonal hole. The quadratojugal is a curved sliver of bone which twists back alongside the quadrate. Relative to T. longobardicus, the quadrate has a larger pterygoid ramus and a strongly hooked projection at its upper extent. The palate of T. hydroides has several unique traits. The vomers are wide and tongue-shaped, each hosting a single row of 15 relatively large curved teeth along the outer edge of the bone, adjacent to the elongated choanae (internal openings of the nasal cavity). Most other archosauromorphs, T. longobardicus included, have restricted vomers with rows of minuscule teeth. The rest of the palate is completely toothless in T. hydroides, even the palatines and pterygoids, which bear tooth rows in most early archosauromorphs. The pterygoids are also unusual for their broad palatal ramus (front plate) and a loose, strongly overlapping connection to the ectopterygoids (linking bones between the pterygoid and maxilla). The epipterygoids (vertical bones in front of the braincase) are tall and flattened from the side. T. hydroides is a rare example of an early archosauromorph with a three-dimensionally preserved braincase. The basioccipital (rear lower component of the braincase) was small, with inset basitubera (vertical plates connecting to neck muscles) linked by a transverse ridge, similar to allokotosaurs and archosauriforms. The parabasisphenoid (front lower component) is less specialized; it lies flat and tapers forwards to a blade-like cultriform process. The rear part of the bone has a deep triangular excavation (known as a median pharyngeal recess) on its underside, flanked by low crests and a pair of small basipterygoid processes (knobs connecting to the pterygoid). The remainder of the braincase is fully fused together into a strongly ossified composite bone, and its constituents must be estimated by comparison to other reptiles. The exoccipitals, which mostly encompass the foramen magnum (spinal cord hole), are perforated with nerve foramina. Each exoccipital merges outwards into the opisthotic, which sends out a straight, elongated paroccipital process (thick outer branch) to the edge of the cranium. In T. longobardicus, the paroccipital processes are shorter and narrower at their base. The stapes, a bone which transmits vibrations from the ear to the braincase, is slender and splits into two small prongs where it contacts the opisthotic. The opisthotic merges forwards into the prootic, which extensively contacts the parabasisphenoid and hosts a range of larger nerve foramina. The prootic forms much of the front edge of the paroccipital process, akin to the condition in archosauriforms. Another archosauriform-like feature is the presence of a laterosphenoid, an additional braincase component in front of the prootic and above the exit hole for the trigeminal nerve (also known as cranial nerve V). The laterosphenoid is small, similar to that of Azendohsaurus.
The upper rear part of the braincase is formed by the supraoccipitals, which were presumably fused together as a continuous surface sloping smoothly down to the foramen magnum. In the lower jaw, the dentaries meet each other at a robust symphysis with an interdigitating suture. The front end of the dentary hosts a prominent keel on its lower edge, a unique trait of the species. There are at least 18 dentary teeth; the first three are by far the largest teeth in the skull, forming the lower half of the interlocking "fish trap" with the premaxilla. Most other teeth in the dentary are small, with the exception of the tenth tooth, which juts up to pierce the maxilla. The remainder of the jaw contains the same set of bones as in T. longobardicus, but some details differ in T. hydroides. For example, the splenial is plate-like and covers a larger portion of the internal dentary than in T. longobardicus. In addition, the rear of the dentary overlaps a large portion of the surangular, rather than the surangular acting as the overlapping bone where they meet. The surangular internally bears a large fossa for the jaw's adductor (vertical biting) muscles, and a prominent surangular foramen is positioned in front of the jaw joint. Neck The most recognisable feature of Tanystropheus is its hyperelongate neck, equivalent to the combined length of the body and tail. Tanystropheus has 13 cervical (neck) vertebrae, most of which are massive, though the two closest to the head are smaller and less strongly developed. The atlas (first cervical), which connects to the skull, is a small, four-part bone complex. It consists of an atlantal intercentrum (small lower component) and pleurocentrum (large lower component), and a pair of atlantal neural arches (prong-like upper components). There does not appear to be a proatlas, which slots between the atlas and skull in some other reptiles. The intercentrum and pleurocentrum are not fused to each other, unlike the single-part atlas of allokotosaurs. The tiny crescent-shaped intercentrum is overlain by a semicircular pleurocentrum, which acts as a base to the backswept neural arches. The axis (second cervical) is larger, with a small axial intercentrum followed by a much larger axial pleurocentrum. The axial pleurocentrum is longer than tall, has a low neural spine set forwards, and small prezygapophyses (front articular plates). The large postzygapophyses (rear articular plates) are separated by a broad trough and support pointed epipophyses (additional projections). The third to eleventh cervicals are hyperelongate in T. longobardicus and T. hydroides, ranging from three to fifteen times longer than tall. They are somewhat less elongated in T. antiquus, at less than six times longer than tall. The cervicals gradually increase in size and proportional length, with the ninth cervical typically being the largest vertebra in the skeleton. In general structure, the elongated cervicals resemble the axial pleurocentrum. However, the axis also has a keel on its underside and an incomplete neural canal, unlike its immediate successors. In the rest of the cervicals, the neural spine is so low along most of its length (all but its front portion) that it is barely noticeable as a thin ridge. The zygapophyses are closely set and tightly connected between vertebrae. The epipophyses develop into hooked spurs. The cervicals are also compressed from the side, so they are taller than wide. Many specimens have a longitudinal lamina (ridge) on the side of each cervical.
Ventral keels return to the vertebrae in the rear half of the neck. All cervicals, except potentially the atlas, connected to holocephalous (single-headed) cervical ribs via facets at their front lower corner. Each cervical rib bears a short stalk connecting to two spurs running under and parallel to the vertebrae. The forward-projecting spurs were short and stubby, while the rear-projecting spurs were extremely narrow and elongated, up to three times longer than their respective vertebrae. This bundle of rod-like bones running along the neck afforded a large degree of rigidity. The 12th cervical and its corresponding ribs, though still longer than tall, are notably shorter (from front to back) than their predecessors. The 12th cervical has a prominent neural spine and robust zygapophyses, also unlike its predecessors. The 13th vertebra has long been assumed to be the first dorsal (torso) vertebra. This was justified by its general stout shape and supposedly dichocephalous (two-headed) rib facets, unlike the cervicals. However, specimen GMPKU-P-1527 has shown that the 13th vertebra's rib simply has a single wide articulation and an unconnected forward branch, more similar to the cervical ribs than the dorsal ribs. The elongation of Tanystropheus's neck is mostly a consequence of particular vertebrae lengthening. This contrasts with trachelosaurids such as Dinocephalosaurus, which achieve a long neck through the addition of numerous cervicals, for a total cervical count exceeding 30. Nevertheless, Tanystropheus does have more vertebrae in its neck than typical archosauromorphs. Protorosaurus, for example, has only seven cervicals, while Macrocnemus and Prolacerta have eight. To achieve a cervical count of 13, Tanystropheus acquired four additional elongated cervicals in the front half of the neck, in addition to a stout vertebra which shifted from the dorsal series into the base of the neck, transforming into the 13th cervical. Tanystropheids are unusual among reptiles in that they acquire their long necks without prolonged somitogenesis (an increase in the overall number of presacral vertebrae during early development). Instead, their overall number of presacral vertebrae remains at a constant count of 25, the same as in their shorter-necked ancestors. This would require a shift in regionalization, encouraging the development of new cervical vertebrae rather than dorsals. Torso and tail There are 12 dorsal (torso) vertebrae. This count is very low among early archosauromorphs: Protorosaurus has up to 19, Prolacerta has 18, and Macrocnemus has 17. Tanystropheus's dorsals are smaller and less specialized than the cervicals. Though their neural spines are taller than those of the cervicals, they are still rather short. The dorsal ribs are double-headed close to the shoulder and single-headed in the rest of the torso, sitting on stout transverse processes projecting outwards from the front half of each vertebra. More than 20 angled rows of gastralia extend along the belly, each gastral element represented by a pair of segmented rods which intermingle at the midline. The two sacral (hip) vertebrae are low but robust, bridging over to the hip with expanded sacral ribs. The second sacral rib is a single unit without a bifurcated structure. The tail is long, with at least 30 and possibly up to 50 caudal vertebrae. The first few caudals are large, with closely interlinked zygapophyses and widely projecting pleurapophyses (a term for transverse processes lacking ribs).
The length of the pleurapophyses decreases until they disappear between the eighth and thirteenth caudal. The height of the neural spines also decreases gradually down the tail. A row of long chevrons is present under a short portion of the tail, though not immediately behind the hips. Shoulder and forelimbs The pectoral girdle (shoulder girdle) has a fairly standard form shared with other tanystropheids. The clavicles (collarbones) were curved and slightly twisted rods. They lie along the front edge of the interclavicle, a plate-like bone at the center of the chest with a rhombic (broad, diamond-shaped) front region followed by a long stalk at the rear. The interclavicle is rarely preserved and its connections to the rest of the pectoral girdle are mostly inferred from Macrocnemus. The scapula (upper shoulder blade) has the form of a large semicircular plate on a short, broad stalk. It lies above the coracoid (lower shoulder blade), which is a large oval-shaped plate with a broad glenoid facet (shoulder socket). The humerus (upper arm bone) is straight and slightly constricted at the middle. Near the elbow it is expanded and twisted, with an ectepicondylar groove on its outer edge. The radius (outer forearm bone) is slender and somewhat curved, while the ulna (inner forearm bone) is similar in shape to the humerus and lacks a distinct olecranon (elbow projection). There are four carpals (wrist bones): the ulnare, radiale, and two distal carpals. The ulnare and radiale are large and cuboid, enclosing a small foramen (gap) between them. The larger outer distal carpal connects to metacarpals III and IV, while the much smaller inner distal carpal connects to metacarpals II and III. Metacarpals III and IV are the largest bones in the hand, followed closely by metacarpal II. Metacarpals I and V are both short. The hand's phalangeal formula (phalanges per finger) is 2-3-4-4-3. The terminal phalanges (fingertips) may have formed thick, blunt claws. Hip and hindlimbs The components of the pelvis (hip) are proportionally small, though their shape is unremarkable relative to other tanystropheids. The ilium (upper hip blade) is low and extends to a tapered point at the rear. The pubis (lower front hip blade) is vertically oriented, with a small but distinct obturator foramen and a concave rear edge. The lower front tip of the large, fan-shaped ischium (lower rear hip blade) converges towards the pubis, but does not contact it. The large oval-shaped gap between the pubis and ischium is known as the thyroid foramen. Two pairs of large, curved bones, known as heterotopic ossifications or postcloacal bones, sit behind the hips in about half of the known specimens preserving the area. They occupy the base of the tail, a region which lacks chevrons. These bones are possibly sexually dimorphic, and have also been reported in the small American tanystropheid Tanytrachelos. Heterotopic ossifications may be linked to reproductive biology, supporting reproductive organs (if they belong to males) or an egg pouch (if they belong to females). The hindlimbs are significantly larger than the forelimbs, though similar in overall structure and proportions. The femur (thigh bone) is long, slender, and sigmoid (curved at both ends). It has a longitudinal crest for muscle attachment (the internal trochanter) on its underside, and it contacts the acetabulum (hip socket) at a broad smooth joint. The tibia and fibula (shin bones) are straight, with the former much thicker and more expanded at the knee.
The large proximal tarsals (ankle or heel bones contacting the shin) consist of a rounded calcaneum and a blocky astragalus, which meet each other along a straight or shallowly indented contact in most specimens. As in most non-aquatic reptiles, a set of small pebble-shaped distal tarsals is present between the proximal tarsals and the foot bones. Tanystropheus has a reduced number of distal tarsals: only a small fourth distal tarsal and a minuscule third distal tarsal. There are five closely appressed metatarsals (foot bones), with the fourth and third being the longest. Though the first four metatarsals are slender and similar in length, the fifth (outermost) is very stout and subtly hooked, slotting into the ankle along a smooth joint. The estimated phalangeal formula (phalanges per toe) is 2-3-4-5-4. The first phalanx of the fifth toe was very long, filling a metatarsal-like role as seen in other tanystropheids. Classification Historical interpretations (1920s-1980s) Knowledge of the anatomy of Tanystropheus was transformed by Bernhard Peyer's discoveries in the 1920s and 1930s, but its relationship to other reptiles remained enigmatic for much of the 20th century. Most paleontologists (including modern authorities) agree that Tanystropheus was closely related to Macrocnemus, a smaller and less specialized reptile found in the same geological strata. Beyond this conclusion, Peyer initially suggested that Tanystropheus was related to other long-necked Triassic reptiles. Sauropterygians such as plesiosaurs and nothosaurs were one possibility, and another was the fragmentary German reptile Trachelosaurus. Later, Peyer classified Tanystropheus and Macrocnemus closer to "protorosaurs", a term initially used for Permian reptiles such as Protorosaurus and Araeoscelis. In the early and mid-20th century, it was commonplace for Permian and Triassic reptiles of uncertain affinity to intermingle in classification schemes. Names such as "Eosuchia", "Euryapsida", "Younginiformes", "Protorosauria", and others were all applied by different authors with little consistency. The Early Triassic reptile Prolacerta, from South Africa, also became involved upon its discovery. Prolacerta was the namesake of yet another term introduced into the convoluted space of reptile taxonomy: "Prolacertiformes". As the century progressed, two competing hypotheses for the affinities of Tanystropheus developed from the groundwork set by Peyer. Both hypotheses were justified by patterns of skull fenestration (the shape of holes in the skull behind the eye) and cranial kinesis (the flexibility of joints within the skull). One idea was that Tanystropheus and kin (particularly Macrocnemus and Prolacerta) were ancestral to "lacertilians", an antiquated term for lizards. This hypothesis was supported up until the 1980s by German and Swiss paleontologists, including Rupert Wild and Peyer's successor at Zürich, Emil Kuhn-Schnyder. The other idea maintained that Tanystropheus was a "protorosaur", closer to Protorosaurus and Araeoscelis and unrelated to Prolacerta. This was popular among American paleontologists like Alfred Romer. Some publications from the mid-20th century argued that "protorosaurs" were "euryapsids" (reptiles with only an upper temporal fenestra) related to sauropterygians, though later accounts admitted that Euryapsida was likely polyphyletic, with its members lacking a common ancestor. In 1975, a paper by South African paleontologist C.E. Gow argued that none of these hypotheses were entirely correct.
He proposed that Prolacerta, and by extension Macrocnemus and Tanystropheus, occupied an extinct spur on the reptile family tree near the ancestry of archosaurs, a diverse group of reptiles with lightweight skulls and serrated teeth set in deep sockets. Dinosaurs are among the most famous subset of archosaurs, as are modern crocodilians and their prehistoric ancestors. Several newly discovered "prolacertiforms", including Tanystropheus-, Protorosaurus-, and Prolacerta-like species, were described in the 1970s, not long after the field of paleontology was reinvigorated by the "dinosaur renaissance" of the 1960s and beyond. Cladistics and Archosauromorpha (1980s-1990s) In the 1980s, the advent of cladistics saw a paradigm shift in the field of taxonomy, emphasizing monophyletic clades (all-encompassing groups defined by shared ancestry) over other categorization styles. Phylogenetic analyses were developed to evaluate reptile evolution in a quantitative manner, by collecting a set of characteristics in sampled species and then using computational models to find the simplest (most parsimonious) path evolution could take to produce that character distribution. Cladistics stabilized and defined a fundamental split in the family tree of reptiles: one side of the family tree, Lepidosauromorpha, leads to lepidosaurs such as squamates (lizards and snakes) and the tuatara. The other side, Archosauromorpha, leads to archosaurs. Cladistics was one of many lines of evidence that helped to demonstrate the dinosaurian origin of birds. This left crocodilians and birds as the two surviving archosaur groups. A series of phylogenetic analyses in the late 1980s and 1990s strongly supported the proposal of Gow (1975). Tanystropheus, Macrocnemus, Protorosaurus, and Prolacerta were always placed as members of Archosauromorpha, closer to archosaurs than to squamates. "Protorosauria" and "Prolacertiformes" were used interchangeably for the archosauromorph subgroup encompassing these superficially lizard-like reptiles. Some authors preferred "Protorosauria" for its priority. Most others used "Prolacertiformes", arguing that "Protorosauria" was a name that carried too much historical baggage, since it had previously encompassed non-archosauromorph "euryapsids" like Araeoscelis. As a "prolacertiform", Tanystropheus is typically considered the sister taxon to Tanytrachelos, a much smaller tanystropheid from Virginia. Another small tanystropheid, Cosesaurus from Spain, is allied with the Tanystropheus + Tanytrachelos clade in many analyses of the 1980s and 1990s. Within Archosauromorpha, "prolacertiforms" are joined by several other groups. The clade Archosauriformes is a diverse archosauromorph subset including crown group archosaurs and their predatory close relatives such as Euparkeria and Proterosuchus. Stocky Triassic herbivores like rhynchosaurs, Trilophosaurus, and azendohsaurids additionally qualify as archosauromorphs. The bizarre chameleon-like drepanosaurs were also included by many analyses, though more recently they have been reinterpreted as a more basal type of reptile unrelated to Archosauromorpha. This broad arrangement follows Dilkes (1998), a study with a small sample of "prolacertiforms" but a closer resemblance to most analyses of the 2000s and 2010s. Recent studies and the rejection of "prolacertiform" monophyly (2000s-present) Starting with Dilkes (1998), many phylogenetic analyses began to recover Prolacerta in a position close to archosauriforms and away from other "prolacertiforms".
In addition, a 2009 redescription of Protorosaurus shifted it away from Tanystropheus and close to the base of Archosauromorpha. These results have driven paleontologists to the conclusion that "Protorosauria" / "Prolacertiformes" is not a natural monophyletic clade and fails to adequately describe the structure of Archosauromorpha. In the modern cladistic framework, it could be considered a paraphyletic grade or polyphyletic category of archosauromorphs united by "primitive" characteristics (such as a slender neck and lizard-like body) rather than a shared evolutionary history. The family Tanystropheidae has come to succeed those older names, acting as a monophyletic clade oriented around Tanystropheus. Tanystropheidae hosts a growing list of former "protorosaurs" with closer affinities to Tanystropheus than to Prolacerta, Protorosaurus, or other major archosauromorph groups. Tanystropheus is well-nested within Tanystropheidae, sometimes as the sister taxon to Amotosaurus. Macrocnemus is most commonly the basal-most (first diverging) tanystropheid. This arrangement is supported by Pritchard et al. (2015), a study focused specifically on tanystropheids, and by Ezcurra (2016), a study focused more generally on archosauromorphs and early archosauriforms. A set of phylogenetic analyses by Spiekman et al. (2021) attempted to tackle the question of "protorosaur" relationships using an expanded and updated sample of archosauromorph species described over the past few decades. Tanystropheus was split into five taxonomic units in this study: T. longobardicus, T. hydroides, T. "conspicuus", "T. antiquus" (Protanystropheus), and GMPKU-P-1527 (the large Chinese Tanystropheus specimen). Two types of analyses were designed to test for bias: one disregarded non-discrete characters and character state ordering, while the other included these settings. In some analyses, "wildcard" taxa with inconsistent positions were excluded to improve resolution. Regardless of the setting, T. longobardicus, T. hydroides, T. "conspicuus", and GMPKU-P-1527 always formed a clade, though the latter two were excluded from some analyses as "wildcards". Under some settings (but not the most stable analysis), another tanystropheid was added to this clade: Raibliania calligarisi, from the Carnian of Italy. The main Tanystropheus clade was well-nested within Tanystropheidae. "Tanystropheus antiquus", whenever included in an analysis, was never found to clade with the other Tanystropheus taxa. Instead, it was consistently allied with Dinocephalosaurus and Pectodens, forming the newly named clade Dinocephalosauridae, outside of Tanystropheidae. Sclerostropheus fossai, another species formerly referred to Tanystropheus, was an unpredictable "wildcard", sometimes placed within Dinocephalosauridae and other times within Tanystropheidae. A 2024 study recognized Trachelosaurus as a close relative of Dinocephalosaurus, with their family as the sister taxon to Tanystropheidae. Dinocephalosauridae was renamed to Trachelosauridae, and the Trachelosauridae + Tanystropheidae clade was given the name Tanysauria. The most stable analysis preferred by Spiekman et al. (2021), analysis 4, is summarized as follows.
In this particular analysis, ratio (continuous) characters are included, certain characters are ordered, and five wildcard taxa are excluded before running the analysis: Czatkowiella harae, Tanystropheus "conspicuus", "Tanystropheus antiquus", Orovenator mayorum and Elessaurus gondwanoccidens.

Paleoecology

Diet

The diet of Tanystropheus has been strongly debated in the past, though most recent studies consider it a piscivorous (fish-eating) reptile. The teeth at the front of the snout are long, conical, and interlocking, similar to those of nothosaurs and plesiosaurs. This was likely an adaptation for catching aquatic prey. Additionally, fish scales and hooklets from cephalopod tentacles have been found in the stomach region of some specimens, further supporting a piscivorous diet.

Small specimens from Monte San Giorgio (T. longobardicus) are noted to possess tricuspid teeth at the back of the jaw. This shape is unusual and uncommon among extinct or living reptiles. Wild (1973/1974) considered these three-cusped teeth to be an adaptation for gripping insects. Cox (1985) noted that marine iguanas, which feed on algae, also have three-cusped teeth, and he attributed the same dietary preferences to Tanystropheus. Taylor (1989) rejected both of these hypotheses, as he interpreted the neck of Tanystropheus to be too inflexible for the animal to be successful at either lifestyle. The most likely function of the tricuspid teeth, as explained by Nosotti (2007), was to assist the reptile's piscivorous diet by helping to grip slippery prey such as fish or squid. Several modern species of seals, such as the hooded seal and crabeater seal, also have multi-cusped teeth that assist their diet to similar effect. Similar teeth have also been found in the pterosaur Eudimorphodon and the fellow tanystropheid Langobardisaurus, both of which are considered piscivores. Crustaceans and other soft invertebrates are also plausible food items for Tanystropheus longobardicus. Larger individuals (Tanystropheus hydroides) lack three-cusped teeth, instead possessing typical conical fangs along the entire rim of the mouth. This difference in dentition indicates a degree of niche partitioning, with T. hydroides preferring larger and more active prey than T. longobardicus.

Predation

While long necks were a successful evolutionary strategy for many marine reptile clades during the Mesozoic, they also increased the animals' vulnerability to predation. Spiekman and Mujal (2023) investigated two Tanystropheus fossils (PIMUZ T 2819 and PIMUZ T 3901), each consisting solely of a skull attached to an articulated partial neck. PIMUZ T 2819 (a large specimen of T. hydroides) is preserved up to cervical vertebra 10, which is splintered by punctures and scoring. The shape of the marks indicates that the neck was severed in two rapid bites by a predator attacking from above and behind. A similar predation attempt occurred against PIMUZ T 3901 (the Meride Limestone specimen of T. longobardicus), which was bitten at cervical 5 and severed at cervical 7. The authors further suggested that since the decapitation occurred in the mid-section of the neck, this was likely an optimal target due to its distance from both the head and the muscular base of the neck. While many contemporary marine reptiles were capable of attacking PIMUZ T 3901, only the largest predators of the Besano Formation could have attacked PIMUZ T 2819.
Paranothosaurus giganteus, Cymbospondylus buchseri, and Helveticosaurus zollingeri are all candidates for the latter case.

Paleobiology

Skull biomechanics

In T. hydroides, the connection between the quadrate and squamosal is loose, with the upper extremity of the quadrate hooking into a deep concavity on the squamosal. This would have enabled a degree of flexibility along the quadrate-squamosal contact, allowing the quadrate to swivel around an otic joint. This condition is a form of cranial kinesis (movement among bones in the cranium) known as streptostyly, which is found in some living lizards. The quadrate is also loosely connected to the pterygoid, and the quadratojugal fails to contact the jugal, two qualities which allow movement of the quadrate without hindrance. While streptostyly is possible in the reconstructed skull, it cannot be demonstrated whether it was actively used by the living animal.

Fragments of rod-like hyobranchial elements (throat bones) have been found in fossils of both T. hydroides and T. longobardicus. These hyobranchials are very slender and disarticulated, without a bony corpus (thickened "body" of the hyoid apparatus) to connect elements from either side of the throat. These traits indicate that Tanystropheus relied on biting and enlarged teeth to capture prey. Suction feeding is rejected, since it is correlated with a more robust and integrated hyoid apparatus.

Growth and development

Histological sampling has demonstrated that Tanystropheus had a fairly slow growth rate. The femur, cervical vertebrae, cervical ribs, and postcloacal bones all have a lamellar or parallel-fibered cortex, which corresponds to slow and sturdy bone accumulation. Lamellar deposition is characteristic of the cervical ribs and the upper part of the vertebra, and Sharpey's fibers are abundant in the cervical ribs and postcloacal bones. The upper part of the vertebra is subject to remodeling by secondary osteons, smoothing out and strengthening that part of the bone as the animal grows. There is no evidence for woven-fibered bone, a type of uneven fast-developing texture apparent in many archosauromorphs, including other "protorosaurs" like Aenigmastropheus and Prolacerta. This suggests that Tanystropheus (and its relative Macrocnemus) retained an ancestrally low metabolic rate more similar to lizards than to archosauriforms.

Respiration

As neck length increases, so does tracheal volume, which imposes a biological limitation on breathing. Every time the animal inhales, a significant portion of oxygenated air (so-called dead space volume) fails to pass fully through the trachea and reach the lungs. Many long-necked animals have adaptations meant to overcome this limitation. For example, giraffes have a narrow trachea and infrequent breathing, which reduces the dead space volume. Sauropod dinosaurs supplemented their trachea with air sacs that allowed for greater air movement through the respiratory system. Birds utilize both air sacs and infrequent breathing. Tanystropheus would need to rely on exceptionally specialized lungs which exceed any allometric predictions based on modern reptiles. In a compromise between energy usage and minimizing dead space volume, the ideal trachea width for Tanystropheus is around 1 cm (0.4 inches), for a neck 1.7 meters (5.6 feet) in length; a rough worked version of this dead-space arithmetic is sketched below. During periods of high activity, the only lung structure capable of meeting oxygen needs is a multicameral lung (partitioned into multiple smaller chambers) with unidirectional air flow and infrequent breathing.
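A back-of-envelope check of the dead-space figures quoted above, treating the trachea as a simple cylinder (the cylindrical geometry is an assumption; the 1 cm width and 1.7 m length are the values from the text):

# Rough dead-space volume of a cylindrical trachea with the dimensions above.
import math

diameter_m = 0.01   # ~1 cm ideal tracheal width (from the text)
length_m = 1.7      # ~1.7 m neck length (from the text)

dead_space_m3 = math.pi * (diameter_m / 2) ** 2 * length_m
print(f"dead space ≈ {dead_space_m3 * 1000:.2f} litres")  # ≈ 0.13 L per breath

Every breath would have to move substantially more air than this fixed ~0.13 litres just to refresh the lungs at all, which is why a narrow trachea and infrequent, deep breaths minimize the penalty.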
Such a multicameral respiratory system with unidirectional air flow is seen in modern archosaurs and turtles. In any case, Tanystropheus's lung capacity was too small for frequent activity or life at higher altitudes. This supports its proposed ecology as a coastal ambush predator.

Soft tissue

A specimen described by Renesto in 2005 displayed an unusual "black material" around the rear part of the body, with smaller patches at the middle of the back and tail. Although most of the material was amorphous, the portion just in front of the hip seemingly preserved scale impressions, indicating that the black material was the remnants of soft tissue. The scales seem to be semi-rectangular and do not overlap with each other, similar to the integument reported in a juvenile Macrocnemus described in 2002. The portion of the material at the base of the tail is particularly thick and rich in phosphate. Many small spherical structures are also present in this portion, which upon further preparation were revealed to be composed of calcium carbonate. These chemicals suggest that the black material was formed as a product of the specimen's proteins decaying in a warm, stagnant, and acidic environment. As in Macrocnemus, the concentration of this material at the base of the tail suggests that the specimen had a quite noticeable amount of muscle behind its hips.

Brain and inner ear

Impressions on the frontal bones of Tanystropheus longobardicus fossils indicate that this species, at least, had a bulbous forebrain with paired olfactory bulbs. The complete braincase of Tanystropheus hydroides specimen PIMUZ T 2790 allowed for a partial reconstruction of the brain cavity and inner ear via a digital endocast. The flocculus is large and broad and leads forward to the rest of the cerebellum, which is narrowest between the endosseous labyrinths (inner ear canals). A large flocculus may relate to greater head and eye stabilization, though the evidence is inconclusive: long-necked sauropods show a reduction of the flocculus, and there is no clear correlation between flocculus size and function in modern mammals and birds. Like other reptiles, Tanystropheus has three semicircular canals arising from the inner ear. Tanystropheus likely stayed in shallow waters or on land, since its semicircular canals are much thinner than those of deep-diving seabirds. The anterior semicircular canal, which curves up and around the flocculus, is enlarged. The posterior semicircular canal (which slopes backwards and outwards from the brain) is smaller, as is the lateral semicircular canal (which arches outwards). The lateral semicircular canal is nearly horizontal in orientation, which possibly relates to a horizontal head posture. There is also a long straight cochlear duct extending outwards, and a long cochlear duct typically indicates good hearing ability in living reptiles.

Terrestrial capabilities

The lifestyle of Tanystropheus is controversial, with different studies favoring a terrestrial or aquatic lifestyle for the animal. Major studies on Tanystropheus anatomy and ecology by Rupert Wild (1973/1974, 1980) argued that it was an active terrestrial predator, keeping its head held high with an S-shaped flexion. Though this interpretation is not wholly consistent with its proposed neck biomechanics, more recent arguments have supported the idea that Tanystropheus was fully capable of movement on land.
Renesto (2005) argued that the neck of Tanystropheus was lighter than previously suggested, and that the entire front half of the body was more lightly built than the robust and muscular rear half. In addition to strengthening the hind limbs, the large hip and tail muscles would have shifted the animal's center of mass rearwards, stabilizing the animal as it maneuvered its elongated neck. The neck of Tanystropheus has low neural spines, a condition suggesting that its epaxial musculature was underdeveloped. This would suggest that intrinsic back muscles (such as the m. longus cervicis) were instead the driving force behind neck movement. The zygapophyses of the neck overlap horizontally, which would have limited lateral movement. The elongated cervical ribs would have formed a brace along the underside of the neck. They may have played a similar role to the ossified tendons of many large dinosaurs, transmitting forces from the weight of the head and neck down to the pectoral girdle, as well as providing passive support by limiting dorsoventral (vertical) flexion. Unlike ossified tendons, the cervical ribs of Tanystropheus are dense and fully ossified throughout the animal's lifetime, so its neck was even more inflexible than that of dinosaurs.

A pair of 2015 blog posts by paleoartist Mark Witton estimated that the neck made up only 20% of the entire animal's mass, due to its light and hollow vertebrae. By comparison, in pterosaurs of the family Azhdarchidae, which were clearly large terrestrial predators, the neck and head made up almost 50% of the mass. Witton proposed that Tanystropheus would have hunted prey from the seashore, akin to a heron; Renesto (2005) supported this type of lifestyle as well. A later published estimate argued that the neck comprised about 30 to 43% of the body mass. Terrestrial or semi-terrestrial habits are supported by taphonomic evidence: Tanystropheus specimens preserved at Monte San Giorgio have high completeness (most bones are present in an average fossil) but variable articulation (bones are not always preserved in life position). This is similar to Macrocnemus (which was terrestrial) and the opposite of the pattern seen in Serpianosaurus (which was fully aquatic).

Renesto and Franco Saller's 2018 follow-up to Renesto (2005) offered more information on the reconstructed musculature of Tanystropheus. This study determined that the first few tail vertebrae of Tanystropheus would have housed powerful tendons and ligaments that stiffened the body, keeping the belly off the ground and preventing the neck from pulling the body over.

Aquatic capabilities

Tschanz (1986, 1988) suggested that Tanystropheus lacked the musculature to raise its neck above the ground, and that it was probably completely aquatic, swimming by undulating its body and tail side-to-side like a snake or crocodile. This interpretation has been contradicted by later studies, although Tanystropheus may still have spent a large portion of its life in shallow water. Renesto (2005) argued that Tanystropheus lacked clear adaptations for underwater swimming to the same degree as most other aquatic reptiles. The tail of Tanystropheus was compressed vertically (from top to bottom) at the base and thinned towards the tip, so it would not have been useful as a fin for lateral (side-to-side) movement.
The long neck and short front limbs shifted the center of mass back to the long hind limbs, which would have made four-limbed swimming inefficient and unstable if that was the preferred form of locomotion. He additionally claimed that thrusting with only the hind limbs, as in swimming frogs, was an inefficient form of locomotion for a large animal such as Tanystropheus.

[Figure: reconstruction of the major muscles between the legs, hip, and tail in Tanystropheus, from Renesto and Saller (2018)]

Contrary to earlier arguments, Renesto and Saller (2018) found some evidence that Tanystropheus was adapted for an unusual style of swimming. Based on muscle correlates on the legs, pelvis, and tail vertebrae, they reconstructed the hind limbs as quite flexible and powerful. Their proposal was that Tanystropheus made use of a specialized mode of underwater movement: extending the hind limbs forward and then simultaneously retracting them, creating a powerful 'jump' forward. Further support for this hypothesis comes from the ichnogenus (trackway fossil) Gwyneddichnium, which was likely created by small tanystropheids such as Tanytrachelos. Some Gwyneddichnium tracks seem to represent a succession of paired sprawling footprints from the hind limbs, without any hand prints. These tracks may have been created by the same form of movement which Renesto and Saller (2018) hypothesized as the preferred method of swimming in Tanystropheus. Nevertheless, lateral undulation cannot be disregarded as a potential swimming style; vertebrae near the hips have extended transverse processes, which are associated with powerful undulating tail muscles in reptiles such as crocodilians. Tail movements may be more effective for swimming than paddling or thrusting with the hindlimbs, since the foot bones of Tanystropheus are narrowly bundled together with little room for webbing.

The skull of Tanystropheus shows additional support for semiaquatic habits: both T. hydroides and T. longobardicus have large undivided nares positioned on the upper surface of the snout, a location consistent with this lifestyle in other animals. In addition, the femur density approaches that of Lariosaurus, an aquatic nothosaur. When hunting underwater, Tanystropheus may have acted as an ambush predator, using its long neck to stealthily approach schools of fish or squid while keeping its large body undetected. Upon selecting a suitable prey item, it would dash forwards or snap to the side. T. hydroides was particularly well-suited for lateral biting, thanks to its low skull and procumbent fangs. A methodical and intermittent approach to underwater hunting would be appropriate for Tanystropheus, considering its lack of adaptations for an exclusively aquatic life. It was likely incapable of pursuit predation, in contrast to more persistent and specialized marine reptiles such as ichthyosaurs or plesiosaurs.
Cardinality of the continuum
In set theory, the cardinality of the continuum is the cardinality or "size" of the set of real numbers $\mathbb{R}$, sometimes called the continuum. It is an infinite cardinal number and is denoted by $\mathfrak{c}$ (lowercase Fraktur "c") or $2^{\aleph_0}$. The real numbers $\mathbb{R}$ are more numerous than the natural numbers $\mathbb{N}$. Moreover, $\mathbb{R}$ has the same number of elements as the power set of $\mathbb{N}$. Symbolically, if the cardinality of $\mathbb{N}$ is denoted as $\aleph_0$, the cardinality of the continuum is

$$\mathfrak{c} = 2^{\aleph_0} > \aleph_0.$$

This was proven by Georg Cantor in his uncountability proof of 1874, part of his groundbreaking study of different infinities. The inequality was later stated more simply in his diagonal argument of 1891. Cantor defined cardinality in terms of bijective functions: two sets have the same cardinality if, and only if, there exists a bijective function between them.

Between any two real numbers a < b, no matter how close they are to each other, there are always infinitely many other real numbers, and Cantor showed that they are as numerous as those contained in the whole set of real numbers. In other words, the open interval (a,b) is equinumerous with $\mathbb{R}$, as well as with several other infinite sets, such as any n-dimensional Euclidean space $\mathbb{R}^n$ (see space filling curve). That is,

$$|(a,b)| = |\mathbb{R}| = |\mathbb{R}^n| = \mathfrak{c}.$$

The smallest infinite cardinal number is $\aleph_0$ (aleph-null). The second smallest is $\aleph_1$ (aleph-one). The continuum hypothesis, which asserts that there are no sets whose cardinality is strictly between $\aleph_0$ and $\mathfrak{c}$, means that $\mathfrak{c} = \aleph_1$. The truth or falsity of this hypothesis is undecidable and cannot be proven within the widely used Zermelo–Fraenkel set theory with axiom of choice (ZFC).

Properties

Uncountability

Georg Cantor introduced the concept of cardinality to compare the sizes of infinite sets. He famously showed that the set of real numbers is uncountably infinite. That is, $\mathfrak{c}$ is strictly greater than the cardinality of the natural numbers, $\aleph_0$:

$$\mathfrak{c} > \aleph_0.$$

In practice, this means that there are strictly more real numbers than there are integers. Cantor proved this statement in several different ways. For more information on this topic, see Cantor's first uncountability proof and Cantor's diagonal argument.

Cardinal equalities

A variation of Cantor's diagonal argument can be used to prove Cantor's theorem, which states that the cardinality of any set is strictly less than that of its power set. That is, $|A| < 2^{|A|}$ (and so the power set $\wp(\mathbb{N})$ of the natural numbers is uncountable). In fact, the cardinality of $\wp(\mathbb{N})$, by definition $2^{\aleph_0}$, is equal to $\mathfrak{c}$. This can be shown by providing one-to-one mappings in both directions between subsets of a countably infinite set and real numbers, and applying the Cantor–Bernstein–Schroeder theorem, according to which two sets with one-to-one mappings in both directions have the same cardinality. In one direction, reals can be equated with Dedekind cuts, sets of rational numbers, or with their binary expansions. In the other direction, the binary expansions of numbers in the half-open interval $[0,1)$, viewed as sets of positions where the expansion is one, almost give a one-to-one mapping from subsets of a countable set (the set of positions in the expansions) to real numbers, but it fails to be one-to-one for numbers with terminating binary expansions, which can also be represented by a non-terminating expansion that ends in a repeating sequence of 1s. This can be made into a one-to-one mapping by adding one to the non-terminating repeating-1 expansions, mapping them into $[1,2)$.
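The two directions of the argument just described can be restated compactly; the maps below are one standard way to write them, with the adjustment for repeating-1 expansions noted in the text:

$$\mathbb{R} \hookrightarrow \wp(\mathbb{Q}), \qquad x \mapsto \{\, q \in \mathbb{Q} : q < x \,\}$$

$$\wp(\mathbb{N}) \hookrightarrow \mathbb{R}, \qquad S \mapsto \sum_{n \in S} 2^{-(n+1)} \quad \text{(shifted into } [1,2) \text{ for the repeating-1 cases)}$$

$$|\mathbb{R}| \le |\wp(\mathbb{Q})| = |\wp(\mathbb{N})| \le |\mathbb{R}| \;\Longrightarrow\; |\wp(\mathbb{N})| = |\mathbb{R}|,$$

where the middle equality holds because $|\mathbb{Q}| = |\mathbb{N}| = \aleph_0$.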
Thus, we conclude that

$$\mathfrak{c} = |\wp(\mathbb{N})| = 2^{\aleph_0}.$$

The cardinal equality $\mathfrak{c}^2 = \mathfrak{c}$ can be demonstrated using cardinal arithmetic:

$$\mathfrak{c}^2 = (2^{\aleph_0})^2 = 2^{2\cdot\aleph_0} = 2^{\aleph_0} = \mathfrak{c}.$$

By using the rules of cardinal arithmetic, one can also show that

$$\mathfrak{c}^n = n\cdot\mathfrak{c} = \aleph_0\cdot\mathfrak{c} = \mathfrak{c}^{\aleph_0} = \mathfrak{c},$$

where n is any finite cardinal ≥ 2, and that $2^{\mathfrak{c}}$, the cardinality of the power set of $\mathbb{R}$, satisfies $2^{\mathfrak{c}} > \mathfrak{c}$.

Alternative explanation for $\mathfrak{c} = 2^{\aleph_0}$

Every real number has at least one infinite decimal expansion, for example $1/2 = 0.50000\ldots$, $1/3 = 0.33333\ldots$, $\pi = 3.14159\ldots$ (This is true even in the case the expansion repeats, as in the first two examples.) In any given case, the number of decimal places is countable, since they can be put into a one-to-one correspondence with the set of natural numbers $\mathbb{N}$. This makes it sensible to talk about, say, the first, the one-hundredth, or the millionth decimal place of π. Since the natural numbers have cardinality $\aleph_0$, each real number has $\aleph_0$ digits in its expansion. Since each real number can be broken into an integer part and a decimal fraction, we get:

$$\mathfrak{c} \le \aleph_0 \cdot 10^{\aleph_0} \le 2^{\aleph_0} \cdot (2^4)^{\aleph_0} = 2^{\aleph_0 + 4\cdot\aleph_0} = 2^{\aleph_0},$$

where we used the fact that $\aleph_0 + 4\cdot\aleph_0 = \aleph_0$. On the other hand, if we map the two digits $\{0,1\}$ to $\{3,7\}$ and consider that decimal fractions containing only 3 or 7 are only a part of the real numbers, then we get

$$2^{\aleph_0} \le \mathfrak{c},$$

and thus

$$\mathfrak{c} = 2^{\aleph_0}.$$

Beth numbers

The sequence of beth numbers is defined by setting $\beth_0 = \aleph_0$ and $\beth_{k+1} = 2^{\beth_k}$. So $\mathfrak{c}$ is the second beth number, beth-one:

$$\mathfrak{c} = \beth_1.$$

The third beth number, beth-two, is the cardinality of the power set of $\mathbb{R}$ (i.e. the set of all subsets of the real line):

$$2^{\mathfrak{c}} = \beth_2.$$

The continuum hypothesis

The continuum hypothesis asserts that $\mathfrak{c}$ is also the second aleph number, $\aleph_1$. In other words, the continuum hypothesis states that there is no set $A$ whose cardinality lies strictly between $\aleph_0$ and $\mathfrak{c}$. This statement is now known to be independent of the axioms of Zermelo–Fraenkel set theory with the axiom of choice (ZFC), as shown by Kurt Gödel and Paul Cohen. That is, both the hypothesis and its negation are consistent with these axioms. In fact, for every nonzero natural number n, the equality $\mathfrak{c} = \aleph_n$ is independent of ZFC (the case $n = 1$ being the continuum hypothesis). The same is true for most other alephs, although in some cases, equality can be ruled out by König's theorem on the grounds of cofinality (e.g. $\mathfrak{c} \ne \aleph_\omega$). In particular, $\mathfrak{c}$ could be either $\aleph_1$ or $\aleph_{\omega_1}$, where $\omega_1$ is the first uncountable ordinal, so it could be either a successor cardinal or a limit cardinal, and either a regular cardinal or a singular cardinal.

Sets with cardinality of the continuum

A great many sets studied in mathematics have cardinality equal to $\mathfrak{c}$. Common examples include the intervals of real numbers, the irrational numbers, the transcendental numbers, the Euclidean spaces $\mathbb{R}^n$, the complex numbers $\mathbb{C}$, the power set of the natural numbers, and the set of all infinite sequences of natural numbers.

Sets with greater cardinality

Sets with cardinality greater than $\mathfrak{c}$ include:
- the set of all subsets of $\mathbb{R}$ (i.e., the power set $\wp(\mathbb{R})$)
- the set $2^{\mathbb{R}}$ of indicator functions defined on subsets of the reals (the set $2^{\mathbb{R}}$ is isomorphic to $\wp(\mathbb{R})$ – the indicator function chooses the elements of each subset to include)
- the set of all functions from $\mathbb{R}$ to $\mathbb{R}$
- the Lebesgue σ-algebra of $\mathbb{R}$, i.e., the set of all Lebesgue measurable sets in $\mathbb{R}$
- the set of all Lebesgue-integrable functions from $\mathbb{R}$ to $\mathbb{R}$
- the set of all Lebesgue-measurable functions from $\mathbb{R}$ to $\mathbb{R}$
- the Stone–Čech compactifications of $\mathbb{N}$, $\mathbb{Q}$, and $\mathbb{R}$
- the set of all automorphisms of the (discrete) field of complex numbers

These all have cardinality $2^{\mathfrak{c}} = \beth_2$ (beth two).
Subgiant
A subgiant is a star that is brighter than a normal main-sequence star of the same spectral class, but not as bright as giant stars. The term subgiant is applied both to a particular spectral luminosity class and to a stage in the evolution of a star.

Yerkes luminosity class IV

The term subgiant was first used in 1930 for class G and early K stars with absolute magnitudes between +2.5 and +4. These were noted as being part of a continuum of stars between obvious main-sequence stars such as the Sun and obvious giant stars such as Aldebaran, although less numerous than either the main-sequence or the giant stars. The Yerkes spectral classification system is a two-dimensional scheme that uses a letter and number combination to denote the temperature of a star (e.g. A5 or M1) and a Roman numeral to indicate the luminosity relative to other stars of the same temperature. Luminosity class IV stars are the subgiants, located between main-sequence stars (luminosity class V) and red giants (luminosity class III).

Rather than defining absolute features, a typical approach to determining a spectral luminosity class is to compare similar spectra against standard stars. Many line ratios and profiles are sensitive to gravity, and therefore make useful luminosity indicators, but some of the most useful spectral features for each spectral class are:
O: relative strength of N emission and He absorption; strong emission is more luminous
B: Balmer line profiles, and strength of O lines
A: Balmer line profiles; broader wings mean less luminous
F: line strengths of Fe, Ti, and Sr
G: Sr and Fe line strengths, and wing widths of the Ca H and K lines
K: Ca H and K line profiles, Sr/Fe line ratios, and MgH and TiO line strengths
M: strength of the 422.6 nm Ca line and TiO bands

Morgan and Keenan listed examples of stars in luminosity class IV when they established the two-dimensional classification scheme:
B0: γ Cassiopeiae, δ Scorpii
B0.5: β Scorpii
B1: ο Persei, β Cephei
B2: γ Orionis, π Scorpii, θ Ophiuchi, λ Scorpii
B2.5: γ Pegasi, ζ Cassiopeiae
B3: ι Herculis
B5: τ Herculis
A2: β Aurigae, λ Ursae Majoris, β Serpentis
A3: δ Herculis
F2: δ Geminorum, ζ Serpentis
F5: Procyon, 110 Herculis
F6: τ Boötis, θ Boötis, γ Serpentis
F8: 50 Andromedae, θ Draconis
G0: η Boötis, ζ Herculis
G2: μ2 Cancri
G5: μ Herculis
G8: β Aquilae
K0: η Cephei
K1: γ Cephei

Later analysis showed that some of these were blended spectra from double stars and some were variable, and the standards have been expanded to many more stars, but many of the original stars are still considered standards of the subgiant luminosity class. O-class stars and stars cooler than K1 are rarely given subgiant luminosity classes.

Subgiant branch

The subgiant branch is a stage in the evolution of low- to intermediate-mass stars. Stars with a subgiant spectral type are not always on the evolutionary subgiant branch, and vice versa. For example, the stars FK Com and 31 Com both lie in the Hertzsprung Gap and are likely evolutionary subgiants, but both are often assigned giant luminosity classes. The spectral classification can be influenced by metallicity, rotation, unusual chemical peculiarities, and so on. The initial stages of the subgiant branch in a star like the Sun are prolonged, with little external indication of the internal changes. Approaches to identifying evolutionary subgiants include chemical abundances, such as lithium (which is depleted in subgiants), and coronal emission strength.
As the fraction of hydrogen remaining in the core of a main-sequence star decreases, the core temperature increases and so the rate of fusion increases. This causes stars to evolve slowly to higher luminosities as they age and broadens the main-sequence band in the Hertzsprung–Russell diagram. Once a main-sequence star ceases to fuse hydrogen in its core, the core begins to collapse under its own weight. This causes it to increase in temperature, and hydrogen fuses in a shell outside the core, which provides more energy than core hydrogen burning. Low- and intermediate-mass stars expand and cool until, at about 5,000 K, they begin to increase in luminosity in a stage known as the red-giant branch. The transition from the main sequence to the red-giant branch is known as the subgiant branch. The shape and duration of the subgiant branch varies for stars of different masses, due to differences in the internal configuration of the star.

Very-low-mass stars

Stars less massive than about 0.4 solar masses are convective throughout most of the star. These stars continue to fuse hydrogen in their cores until essentially the entire star has been converted to helium, and they do not develop into subgiants. Stars of this mass have main-sequence lifetimes many times longer than the current age of the Universe.

0.4 to 1 solar masses

Stars with 40 percent of the mass of the Sun and larger have non-convective cores with a strong temperature gradient from the centre outwards. When they exhaust hydrogen at the core of the star, the shell of hydrogen surrounding the central core continues to fuse without interruption. The star is considered to be a subgiant at this point, although there is little change visible from the exterior. As the fusing hydrogen shell converts its mass into helium, the convective effect separates the helium towards the core, where it very slowly increases the mass of the non-fusing core of nearly pure helium plasma. As this takes place, the fusing hydrogen shell gradually expands outward, which increases the size of the outer envelope of the star up to the subgiant size, from two to ten times the original radius of the star when it was on the main sequence. The expansion of the outer layers into the subgiant size nearly balances the increased energy generated by hydrogen shell fusion, causing the star to nearly maintain its surface temperature, so the spectral class of the star changes very little at the lower end of this mass range. Because the subgiant's radiating surface area is so much larger, the potential circumstellar habitable zone, where planetary orbits will be in the range to form liquid water, is shifted much farther out into any planetary system. The surface area of a sphere is 4πr², so a sphere with twice the radius releases 400% as much energy at the surface, and a sphere with ten times the radius releases 10,000% as much energy (a worked version of this scaling is given below).

The helium core mass is below the Schönberg–Chandrasekhar limit and remains in thermal equilibrium with the fusing hydrogen shell. Its mass continues to increase, and the star very slowly expands as the hydrogen shell migrates outwards. Any increase in energy output from the shell goes into expanding the envelope of the star, and the luminosity stays approximately constant. The subgiant branch for these stars is short, horizontal, and heavily populated, as visible in very old clusters. After one to eight billion years, the helium core becomes too massive to support its own weight and becomes degenerate.
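A worked version of the sphere-area scaling just described, holding surface temperature fixed so that the radiated energy scales with area alone:

$$\frac{A(2r)}{A(r)} = \frac{4\pi (2r)^2}{4\pi r^2} = 4 \;(= 400\%), \qquad \frac{A(10r)}{A(r)} = \frac{4\pi (10r)^2}{4\pi r^2} = 100 \;(= 10{,}000\%).$$

Since the habitable-zone distance scales as the square root of luminosity, a hundredfold increase in luminosity pushes the zone roughly ten times farther out.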
Its temperature increases, the rate of fusion in the hydrogen shell increases, the outer layers become strongly convective, and the luminosity increases at approximately the same effective temperature. The star is now on the red-giant branch.

Intermediate-mass stars

Stars as massive as the Sun and larger have a convective core on the main sequence. They develop a more massive helium core, taking up a larger fraction of the star, before they exhaust the hydrogen in the entire convective region. Fusion in the star then ceases entirely and the core begins to contract and increase in temperature. The entire star contracts and increases in temperature, with the radiated luminosity actually increasing despite the lack of fusion. This continues for several million years before the core becomes hot enough to ignite hydrogen in a shell, which reverses the temperature and luminosity increase, and the star starts to expand and cool. This hook is generally defined as the end of the main sequence and the start of the subgiant branch in these stars.

The core of stars below a certain mass is still below the Schönberg–Chandrasekhar limit, but hydrogen shell fusion quickly increases the mass of the core beyond that limit. More-massive stars already have cores above the Schönberg–Chandrasekhar mass when they leave the main sequence. The exact initial mass at which stars will show a hook, and at which they will leave the main sequence with cores above the Schönberg–Chandrasekhar limit, depends on the metallicity and the degree of overshooting in the convective core. Low metallicity causes the central part of even low-mass cores to be convectively unstable, and overshooting causes the core to be larger when hydrogen becomes exhausted.

Once the core exceeds the Schönberg–Chandrasekhar limit, it can no longer remain in thermal equilibrium with the hydrogen shell. It contracts, and the outer layers of the star expand and cool. The energy needed to expand the outer envelope causes the radiated luminosity to decrease. When the outer layers cool sufficiently, they become opaque and force convection to begin outside the fusing shell. The expansion stops and the radiated luminosity begins to increase, which is defined as the start of the red-giant branch for these stars. Stars at the lower end of this mass range can develop a degenerate helium core before this point, and that will cause the star to enter the red-giant branch as lower-mass stars do.

The core contraction and envelope expansion is very rapid, taking only a few million years. In this time the temperature of the star will cool from its main-sequence value of 6,000–30,000 K to around 5,000 K. Relatively few stars are seen in this stage of their evolution, and there is an apparent gap in the H–R diagram known as the Hertzsprung gap. It is most obvious in clusters from a few hundred million to a few billion years old.

Massive stars

Above a certain mass, depending on metallicity, stars have hot massive convective cores on the main sequence due to CNO cycle fusion. Hydrogen shell fusion and subsequent core helium fusion begin soon after core hydrogen exhaustion, before the star can reach the red-giant branch. Such stars, for example early B main-sequence stars, experience a brief, shortened subgiant branch before becoming supergiants. They may also be assigned a giant spectral luminosity class during this transition.
In very massive O-class main-sequence stars, the transition from main sequence to giant to supergiant occurs over a very narrow range of temperature and luminosity, sometimes even before core hydrogen fusion has ended, and the subgiant class is rarely used. Values for the surface gravity, log(g), of O-class stars are around 3.6 cgs for giants and 3.9 for dwarfs. For comparison, typical log(g) values for K-class stars are 1.59 (Aldebaran) and 4.37 (α Centauri B), leaving plenty of scope to classify subgiants such as η Cephei with log(g) of 3.47. Examples of massive subgiant stars include θ2 Orionis A and the primary star of the δ Circini system, both class O stars with masses of over .

Properties

This table shows the typical lifetimes on the main sequence (MS) and subgiant branch (SB), as well as any hook duration between core hydrogen exhaustion and the onset of shell burning, for stars with different initial masses, all at solar metallicity (Z = 0.02). Also shown are the helium core mass, surface effective temperature, radius, and luminosity at the start and end of the subgiant branch for each star. The end of the subgiant branch is defined to be when the core becomes degenerate or when the luminosity starts to increase.

In general, stars with lower metallicity are smaller and hotter than stars with higher metallicity. For subgiants, this is complicated by different ages and core masses at the main-sequence turnoff. Low-metallicity stars develop a larger helium core before leaving the main sequence, hence lower-mass stars show a hook at the start of the subgiant branch. The helium core mass of a Z=0.001 (extreme population II) star at the end of the main sequence is nearly double that of a Z=0.02 (population I) star. The low-metallicity star is also over 1,000 K hotter and over twice as luminous at the start of the subgiant branch. The difference in temperature is less pronounced at the end of the subgiant branch, but the low-metallicity star is larger and nearly four times as luminous. Similar differences exist in the evolution of stars with other masses, and key values, such as the mass of a star that will become a supergiant instead of reaching the red-giant branch, are lower at low metallicity.

Subgiants in the H–R diagram

A Hertzsprung–Russell (H–R) diagram is a scatter plot of stars with temperature or spectral type on the x-axis and absolute magnitude or luminosity on the y-axis. H–R diagrams of all stars show a clear diagonal main-sequence band containing the majority of stars, a significant number of red giants (and white dwarfs if sufficiently faint stars are observed), and relatively few stars in other parts of the diagram. Subgiants occupy a region above (i.e. more luminous than) the main-sequence stars and below the giant stars. There are relatively few on most H–R diagrams because the time spent as a subgiant is much less than the time spent on the main sequence or as a giant star. Hot, class B subgiants are barely distinguishable from the main-sequence stars, while cooler subgiants fill a relatively large gap between cool main-sequence stars and the red giants. Below approximately spectral type K3, the region between the main sequence and red giants is entirely empty, with no subgiants. Stellar evolutionary tracks can be plotted on an H–R diagram. For a particular mass, these trace the position of a star throughout its life, and show a track from the initial main-sequence position, along the subgiant branch, to the giant branch.
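As a rough check on the log(g) figures quoted above, surface gravity in cgs units follows directly from a star's mass and radius via g = GM/R². The sketch below uses standard solar constants; the 2.5 R☉ example star is a hypothetical illustration, not any of the stars named in the text:

# Surface gravity log(g) in cgs units from mass and radius in solar units.
import math

G = 6.674e-8          # gravitational constant, cm^3 g^-1 s^-2
M_SUN = 1.989e33      # solar mass, g
R_SUN = 6.957e10      # solar radius, cm

def log_g(mass_solar, radius_solar):
    """log10 of the surface gravity (cm/s^2)."""
    g = G * mass_solar * M_SUN / (radius_solar * R_SUN) ** 2
    return math.log10(g)

print(round(log_g(1.0, 1.0), 2))   # Sun: ~4.44, close to the dwarf values quoted
print(round(log_g(1.0, 2.5), 2))   # same mass at 2.5x the radius: ~3.64, subgiant-like

Expanding a star at constant mass lowers log(g), which is why gravity-sensitive spectral lines separate dwarfs, subgiants, and giants.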
When an H–R diagram is plotted for a group of stars which all have the same age, such as a cluster, the subgiant branch may be visible as a band of stars between the main-sequence turnoff point and the red-giant branch. The subgiant branch is only visible if the cluster is sufficiently old that stars have evolved away from the main sequence, which requires several billion years. Globular clusters such as ω Centauri and old open clusters such as M67 are sufficiently old that they show a pronounced subgiant branch in their color–magnitude diagrams. ω Centauri actually shows several separate subgiant branches, for reasons that are still not fully understood but which appear to represent stellar populations of different ages within the cluster.

Variability

Several types of variable star include subgiants:
Beta Cephei variables: early B main-sequence and subgiant stars
Slowly pulsating B-type stars: mid to late B main-sequence and subgiant stars
Delta Scuti variables: late A and early F main-sequence and subgiant stars

Subgiants more massive than the Sun cross the Cepheid instability strip, called the first crossing since they may cross the strip again later on a blue loop. In this mass range, they include Delta Scuti variables such as β Cas. At higher masses the stars would pulsate as Classical Cepheid variables while crossing the instability strip, but massive subgiant evolution is very rapid and it is difficult to detect examples. SV Vulpeculae has been proposed as a subgiant on its first crossing, but was subsequently determined to be on its second crossing.

Planets

Planets in orbit around subgiant stars include Kappa Andromedae b, Kepler-36 b and c, TOI-4603 b and HD 224693 b.
Dexter cattle
The Dexter is an Irish breed of small cattle. It originated in the eighteenth century in County Kerry, in south-western Ireland, and appears to be named after a man named Dexter, who was factor of the estates of Lord Hawarden on Valentia Island. Until the second half of the nineteenth century it was considered a type within the Kerry breed.

History

The Dexter originated in the eighteenth century in County Kerry, in south-western Ireland, and was apparently named after a man named Dexter, who was factor of the estates of Lord Hawarden on Valentia Island. Rotund, short-legged Kerry cattle are documented from the late eighteenth century; the Scottish agriculturalist David Low, writing in 1842, describes them as the "Dexter Breed", and writes "When any individual of a Kerry drove appears remarkably round and short legged, it is common for the country people to call it a Dexter". Until the second half of the nineteenth century the Dexter was considered a type within the Kerry breed; from 1863 it was shown in a separate class at the agricultural shows of the Royal Dublin Society. A joint herd-book, The Kerry and Dexter Herd Book, was established in 1890, and a breed society, the Kerry and Dexter Cattle Society of Ireland, was started in 1917; the name was shortened to the Kerry Cattle Society of Ireland in 1919. It was brought to England in 1882. The breed virtually disappeared in Ireland, but was still maintained as a pure breed in a number of small herds in England and the United States. In 2023 it was reported to DAD-IS by sixteen countries in Africa, the Americas, Europe and Oceania; the largest populations were in Denmark and the United Kingdom. Its conservation status worldwide is listed as 'not at risk', while for Ireland it is listed as 'at risk/critical'.

Characteristics

The cattle are small; heights at the withers for bulls are usually in the range , for cows about less; the average weight of a cow is approximately . The coat is usually solid black, but may also be red or dun. The cattle were formerly always horned; in the twenty-first century some polled examples are seen, but the mechanism of introduction of this characteristic has not been identified.

Some Dexter cattle carry a gene for chondrodysplasia (a semi-lethal gene), a form of dwarfism that results in shorter legs than in unaffected cattle. Chondrodysplasia-affected Dexters are typically 6–8 inches shorter than unaffected ones. Breeding two chondrodysplasia-affected Dexters together carries a 25% chance of producing a foetus that is homozygous for the gene and aborts prematurely (see the worked cross below). A DNA test is available to test for the chondrodysplasia gene, using tail hairs from the animal. The aborted foetus is commonly called a bulldog: a stillborn calf that has a bulging head, compressed nose, protruding lower jaw, and swollen tongue, as well as extremely short limbs. The occurrence of bulldog foetuses is higher in calves born with a black coat than a red coat, because black coat colour is more common. Short-legged Dexter cattle are considered to be heterozygous, while bulldog foetuses are homozygous for the chondrodysplasia gene.

Dexters can also be affected by pulmonary hypoplasia with anasarca (PHA), an incomplete formation of the lungs with accumulation of serum fluid in various parts of the tissue of the foetus. Unlike chondrodysplasia, which has many physical signs, PHA shows no outward signs and is only detectable through DNA testing. As with chondrodysplasia, PHA-affected Dexters should not be bred together.
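The 25% figure is plain Mendelian arithmetic: a cross of two heterozygous carriers yields one homozygous-affected genotype in four. A minimal enumeration, with illustrative allele labels:

# Punnett-square enumeration for a cross of two carriers of the
# chondrodysplasia allele ("C" = chondrodysplasia allele, "n" = normal).
# Homozygous "CC" offspring correspond to the non-viable "bulldog" foetuses.
from itertools import product
from collections import Counter

parent1 = parent2 = ("C", "n")                      # both parents heterozygous
offspring = Counter("".join(sorted(pair)) for pair in product(parent1, parent2))

total = sum(offspring.values())
for genotype, count in sorted(offspring.items()):
    print(f"{genotype}: {count}/{total} = {count/total:.0%}")
# CC: 1/4 = 25% (bulldog foetus); Cn: 2/4 = 50% (short-legged carrier); nn: 1/4 = 25%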
Dexter cattle have short legs compared to other breeds; the increased shortness is most evident from the knee to the fetlock. Dexter cattle are very hardy, efficient grazers and are able to thrive on poor land.

Use

The Dexter is a dual-purpose breed, reared for both milk and beef. Milk yields average about per lactation, although some farms may reach an average of . In flavour and texture the meat is often not as good as that of other breeds, especially if it is from a very short-legged animal.
Shear modulus
In materials science, shear modulus or modulus of rigidity, denoted by G, or sometimes S or μ, is a measure of the elastic shear stiffness of a material and is defined as the ratio of shear stress to shear strain:

$$G = \frac{\tau_{xy}}{\gamma_{xy}} = \frac{F/A}{\Delta x / l} = \frac{F l}{A\,\Delta x},$$

where $\tau_{xy} = F/A$ is the shear stress, F is the force which acts, A is the area on which the force acts, and $\gamma_{xy}$ is the shear strain. In engineering, $\gamma_{xy} = \Delta x / l = \tan\theta$; elsewhere, $\gamma_{xy} = \theta$. Here $\Delta x$ is the transverse displacement and l is the initial length of the area.

The derived SI unit of shear modulus is the pascal (Pa), although it is usually expressed in gigapascals (GPa) or in thousand pounds per square inch (ksi). Its dimensional form is M¹L⁻¹T⁻², replacing force by mass times acceleration.

Explanation

The shear modulus is one of several quantities for measuring the stiffness of materials. All of them arise in the generalized Hooke's law:
- Young's modulus E describes the material's strain response to uniaxial stress in the direction of this stress (like pulling on the ends of a wire or putting a weight on top of a column, with the wire getting longer and the column losing height);
- the Poisson's ratio ν describes the response in the directions orthogonal to this uniaxial stress (the wire getting thinner and the column thicker);
- the bulk modulus K describes the material's response to (uniform) hydrostatic pressure (like the pressure at the bottom of the ocean or a deep swimming pool);
- the shear modulus G describes the material's response to shear stress (like cutting it with dull scissors).

These moduli are not independent, and for isotropic materials they are connected via the equations

$$E = 2G(1 + \nu) = 3K(1 - 2\nu).$$

The shear modulus is concerned with the deformation of a solid when it experiences a force parallel to one of its surfaces while its opposite face experiences an opposing force (such as friction). In the case of an object shaped like a rectangular prism, it will deform into a parallelepiped. Anisotropic materials such as wood, paper and also essentially all single crystals exhibit differing material response to stress or strain when tested in different directions. In this case, one may need to use the full tensor expression of the elastic constants, rather than a single scalar value. One possible definition of a fluid would be a material with zero shear modulus.

Shear waves

In homogeneous and isotropic solids, there are two kinds of waves, pressure waves and shear waves. The velocity of a shear wave is controlled by the shear modulus:

$$v_s = \sqrt{\frac{G}{\rho}},$$

where G is the shear modulus and ρ is the solid's density.

Shear modulus of metals

The shear modulus of metals is usually observed to decrease with increasing temperature. At high pressures, the shear modulus also appears to increase with the applied pressure. Correlations between the melting temperature, vacancy formation energy, and the shear modulus have been observed in many metals. Several models exist that attempt to predict the shear modulus of metals (and possibly that of alloys). Shear modulus models that have been used in plastic flow computations include:
- the Varshni-Chen-Gray model, used in conjunction with the Mechanical Threshold Stress (MTS) plastic flow stress model;
- the Steinberg-Cochran-Guinan (SCG) shear modulus model, used in conjunction with the Steinberg-Cochran-Guinan-Lund (SCGL) flow stress model;
- the Nadal and LePoac (NP) shear modulus model, which uses Lindemann theory to determine the temperature dependence and the SCG model for the pressure dependence of the shear modulus.
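A brief numerical illustration of the formulas above; the steel figures (G ≈ 79 GPa, ρ ≈ 7850 kg/m³, ν ≈ 0.29) are typical textbook values assumed for the example:

# Shear wave velocity v_s = sqrt(G / rho) for a typical structural steel.
import math

G_steel = 79e9        # shear modulus, Pa (assumed typical value)
rho_steel = 7850.0    # density, kg/m^3 (assumed typical value)

v_s = math.sqrt(G_steel / rho_steel)
print(f"shear wave velocity ≈ {v_s:.0f} m/s")      # ≈ 3170 m/s

# Consistency check of E = 2G(1 + nu) with an assumed nu = 0.29:
nu = 0.29
E = 2 * G_steel * (1 + nu)
print(f"implied Young's modulus ≈ {E/1e9:.0f} GPa")  # ≈ 204 GPa, consistent with steel

The ~3 km/s result matches the order of magnitude of measured shear (S-wave) speeds in steel, and the implied Young's modulus lands near the familiar ~200 GPa.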
Varshni-Chen-Gray model

The Varshni-Chen-Gray model (sometimes referred to as the Varshni equation) has the form:

$$\mu(T) = \mu_0 - \frac{D}{\exp(T_0/T) - 1},$$

where $\mu_0$ is the shear modulus at $T = 0\,\mathrm{K}$, and D and $T_0$ are material constants.

SCG model

The Steinberg-Cochran-Guinan (SCG) shear modulus model is pressure dependent and has the form

$$\mu(p, T) = \mu_0 + \frac{\partial\mu}{\partial p}\,\frac{p}{\eta^{1/3}} + \frac{\partial\mu}{\partial T}\,(T - 300\,\mathrm{K}),$$

where μ0 is the shear modulus at the reference state (T = 300 K, p = 0, η = 1), p is the pressure, and T is the temperature.

NP model

The Nadal-Le Poac (NP) shear modulus model is a modified version of the SCG model. The empirical temperature dependence of the shear modulus in the SCG model is replaced with an equation based on Lindemann melting theory. In the NP model, μ0 is the shear modulus at absolute zero and ambient pressure, ζ is a material parameter, m is the atomic mass, and f is the Lindemann constant.

Shear relaxation modulus

The shear relaxation modulus G(t) is the time-dependent generalization of the shear modulus G:

$$G = \lim_{t\to\infty} G(t).$$
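A minimal sketch of the Varshni temperature dependence as written above; the constants μ0, D, and T0 are placeholders rather than fitted values for any real metal:

# Varshni-style temperature dependence of the shear modulus,
# mu(T) = mu0 - D / (exp(T0/T) - 1), with placeholder constants.
import math

def shear_modulus_varshni(T, mu0=50e9, D=3e9, T0=200.0):
    """Shear modulus in Pa at temperature T (K); mu0, D, T0 are material constants."""
    return mu0 - D / (math.exp(T0 / T) - 1.0)

for T in (100.0, 300.0, 600.0, 900.0):
    print(f"T = {T:5.0f} K  ->  mu = {shear_modulus_varshni(T)/1e9:6.2f} GPa")
# The modulus decreases smoothly with increasing temperature, as described above.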
Poecilotheria metallica
Poecilotheria metallica, also known as the peacock tarantula, is an Old World species of tarantula. It is the only blue species of the genus Poecilotheria. Like others in its genus, it exhibits an intricate fractal-like pattern on the abdomen. The species' natural habitat is deciduous forest in Andhra Pradesh, in central southern India. It has been classified as Critically Endangered by the IUCN.

Description

Poecilotheria metallica has intricate geometric body coloration similar to other Poecilotheria species, but it is the only species in the genus to be covered in blue hair. While young, P. metallica is less chromatic; the coloring turns blue as it matures. This blue is much less prominent in mature males. Males also have more slender bodies, and their legs are longer. The definitive trait of a mature male is the appearance of emboli at the ends of the pedipalps following the "mature molt". Females can be identified through molt confirmations before maturity. When full size, the leg span of P. metallica is .

Distribution

Poecilotheria metallica is found only in a small area of less than , a reserve forest that is nonetheless highly disturbed. Surveys of adjacent forest have failed to observe this species. The type specimen was discovered in a railway timber yard in Gooty, about 100 km southwest of its known range, but it is believed to have been transported there by train.

Behavior

Poecilotheria metallica's behavior parallels that of many arboreal spiders. In the wild, P. metallica lives in holes of tall trees, where it makes asymmetric funnel webs. The primary prey consists of various flying insects. Spiders of this genus may live communally when territory, i.e. the number of holes per tree, is limited. The species is skittish and will try to flee first; it will also flee when light shines upon it, as it is photosensitive. Under provocation, however, members of the species may bite.

Longevity

Females typically live for 11 to 12 years, or, in rare instances, for up to 15 years. Males live for 3 to 4 years.

Venom

There has never been a recorded human death from its bite. However, P. metallica's bite is considered medically significant, with venom that may cause intense pain, judging from the experience of keepers bitten by other spiders in the genus. The vast majority of bites are "dry bites", in which no venom is injected. The mechanical effects of a bite can still be worrisome, as an adult's fangs can reach nearly 3/4 of an inch in length. P. metallica can move rapidly and may defend itself when cornered. The venom may produce an increased heart rate followed by sweating, headache, stinging, cramping, or swelling. Effects can last for up to a week; however, in extreme bites from the genus Poecilotheria, effects may still be felt months later.

Coloration

As with other tarantulas with blue hair, the vivid colors of P. metallica are produced by quasi-periodic perforated multilayer nanostructures. Structural colours are usually highly iridescent, changing color when viewed from different angles. Some species of blue tarantulas have hairs with a "special flower-like" structure which may reduce iridescence. Given that many tarantulas express nearly a full suite of the opsins found in other colourful spiders with colour vision, blue colors could potentially function in mate choice or contests for mates.

Common names

P. metallica is also known as the Gooty sapphire ornamental tree spider, Gooty sapphire, and Gooty tarantula.
Other common names are metallic tarantula, peacock parachute spider, and peacock tarantula.

As pets

P. metallica has been bred in captivity for more than ten years. It is popular with tarantula enthusiasts and in high demand due to its attractive coloration. It is sometimes priced above $500 in the United States, though spiderlings typically sell for between $100 and $200. As with most tarantulas, the spider's sex can influence price, with females generally more expensive because of their longer lifespan. Members of the species are hardy, relatively fast-growing spiders that are generally fed crickets, but may also eat moths, grasshoppers and cockroaches. P. metallica measures between in legspan when fully grown. In captivity, humid environments with temperatures between and a humidity level of 75 to 85% are preferred. This is a very fast, sometimes defensive tarantula with potentially medically significant venom.

Conservation

P. metallica is classified as Critically Endangered by the International Union for Conservation of Nature (IUCN) due to its occurrence in a single, small area in which habitat is rapidly degrading due to logging and firewood harvesting. Another threat identified by IUCN assessors is specimen collection for the pet trade. Population size is unknown, but the combination of its small natural range and the habitat threats indicates a declining population trend.
Choristodera
Choristodera (from the Greek χωριστός chōristos + δέρη dérē, 'separated neck') is an extinct order of semiaquatic diapsid reptiles that ranged from the Middle Jurassic, or possibly Triassic, to the Miocene (168 to 20 or possibly 11.6 million years ago). Choristoderes are morphologically diverse, with the best known members being the crocodile-like neochoristoderes such as Champsosaurus. Other choristoderans had lizard-like or long-necked morphologies. Choristoderes appear to have been confined to the Northern Hemisphere, having been found in North America, Asia, and Europe, and possibly also North Africa. Choristoderes are generally thought to be derived neodiapsids that are close relatives or members of Sauria.

History of discovery

Choristodera was erected in 1876, originally as a suborder of Rhynchocephalia, by Edward Drinker Cope to contain Champsosaurus, which was described from Late Cretaceous strata of Montana by Cope in the same paper. A year later, in 1877, Simoedosaurus was described by Paul Gervais from Upper Paleocene deposits at Cernay, near Rheims, France. These remained the only recognised choristoderes for over a century, until new taxa were described in the late 20th century. Beginning in the late 1970s, additional taxa were described by Soviet-Mongolian teams from Lower Cretaceous sediments in Mongolia. In studies from 1989 to 1991, Susan E. Evans described new material of Cteniogenys from the Middle Jurassic of Britain. The genus had first been described by Charles W. Gilmore in 1928 from the Late Jurassic of the western United States, and had previously been enigmatic. The studies revealed it to be a small, lizard-like choristodere, different from the crocodile-like forms previously known.

Description

Choristoderes vary substantially in size: the smallest genera, like Cteniogenys and Lazarussuchus, had a length of only around , while the largest known choristoderan, Kosmodraco dakotensis, is estimated to have had a total length of around . Neochoristoderes such as Champsosaurus are the best-known group of the Choristodera. They resembled modern crocodilians, especially gharials. The skulls of these animals have a long, thin snout filled with small, sharp conical teeth. Other choristoderes are referred to collectively as "non-neochoristoderes"; these are mostly small lizard-like forms, though Shokawa, Khurendukhosaurus and Hyphalosaurus possess long, plesiosaur-like necks. The grouping of "non-neochoristoderes" is paraphyletic (not containing all descendants of a common ancestor), as the lizard-like body form represents the ancestral morphology of the group.

Skeletal anatomy

According to Matsumoto and colleagues (2019), choristoderes are united by the presence of nine synapomorphies (shared traits characteristic of the group): a median contact of the elongated prefrontal bones of the skull, separating the nasal bones from the frontal bones; a dorsal flange of the maxilla inflected medially (toward the midline of the body); absence of the parietal foramen; squamosal bones expanded behind (posterior to) the occipital condyle; conical, sub-thecodont teeth (located in shallow sockets); slender dentaries with elongated grooves running along the labial (outward-facing) surface of the bone; additional sacral vertebrae; expanded "spine tables" on the vertebrae; and flat (amphiplatyan) surfaces on both ends of the vertebral centra.
All known choristoderans possess or are inferred to possess a novel skull bone not found in other reptiles, referred to as the "neomorphic bone" or neomorph, which is a component of the dermatocranium. Ancestrally, the skull of choristoderes possesses elongated upper and lower temporal fenestrae (openings of the skull behind the eye socket); these are greatly expanded in neochoristoderes, most extremely in Champsosaurus, giving the skull a cordiform (heart-shaped) appearance when viewed from above. In many "non-neochoristoderes" the lower temporal fenestrae are secondarily closed. Choristoderes possessed gastralia (rib-like bones situated in the abdomen), like tuatara and crocodilians.

Internal skull anatomy

The internal skull anatomy of choristoderes is only known for Champsosaurus. The braincase of Champsosaurus is poorly ossified at the front of the skull (anterior), but is well ossified in the rear (posterior), similar to other diapsids. The cranial endocast (space occupied by the brain in the cranial vault) is proportionally narrow in both the lateral and dorsoventral axes, with an enlarged pineal body and olfactory bulbs. The optic lobes and flocculi are small in size, indicating only average vision ability at best. The olfactory chambers of the nasal passages and olfactory stalks of the braincase are reasonably large, indicating that Champsosaurus probably had good olfactory capabilities (sense of smell). The nasal passages lack bony turbinates. The semicircular canals of the inner ear are most similar to those of other aquatic reptiles. The expansion of the sacculus indicates that Champsosaurus likely had an increased sensitivity to low-frequency sounds and vibrations.

Dentition

Most choristoderes have rather simple, undifferentiated (homodont) teeth, with striated enamel covering the tooth crown but not the base. Neochoristoderes have teeth completely enveloped in striated enamel with an enamel infolding at the base, labiolingually compressed and hooked; the exception is Ikechosaurus, which retains rather simple teeth aside from the start of an enamel infolding. Tooth implantation is subthecodont, with teeth being replaced by erosion of a pit in the lingual (side of the tooth facing the tongue) surface of the tooth base. There is some tooth differentiation among neochoristoderes, with the anterior teeth being sharper and more slender than the posterior teeth. Choristoderes retain palatal teeth (teeth present on the bones of the roof of the mouth). Unlike most diapsid groups, where palatal teeth are reduced or lost completely, the palatal teeth in choristoderes are extensively developed, indicating food manipulation in the mouth, probably in combination with the tongue. In most choristoderes, longitudinal rows of palatal teeth are present on the pterygoid, palatine and vomer, as well as a row on the pterygoid flange. In some neochoristoderes the palatal tooth rows are modified into tooth batteries on raised platforms. The morphology of the palatal teeth is identical to that of the marginal teeth of non-neochoristoderes, and the replacement of palatal teeth is nearly identical to the replacement of marginal teeth.

Skin

An exceptionally preserved specimen of Monjurosuchus preserves pleated skin, which indicates that in life it was probably thin and soft. The preserved scales are small and overlapping, and are smaller on the ventral underside of the body than on the dorsal surface. A double row of larger ovoid scales runs along the dorsum (upper midline) of the body.
The fossil also preserves webbed feet. Hyphalosaurus was covered in scales of varying shape, depending on their position on the body, with at least one and possibly multiple rows of large ovoid scales running down the sides of the trunk and tail. The feet display evidence of webbing, and the tail probably had additional tissue at the top and bottom, allowing it to be used as a fin to propel Hyphalosaurus through the water. Skin impressions of Champsosaurus have also been reported; they consist of small (0.1–0.6 mm) pustulate and rhomboid scales, with the largest scales located on the lateral sides of the body and decreasing in size dorsally. No osteoderms were present. The Menat specimen of Lazarussuchus preserves some remnants of soft tissue, but no scales, showing that the hindfoot (pes) was not webbed; a dark stained region with a crenellated edge is present above the caudal vertebrae of the tail, suggestive of a crest similar to those found in some living reptiles, such as the tuatara, lizards and crocodiles. Paleobiology Choristoderes are exclusively found in freshwater deposits, often associated with turtles, fish, frogs, salamanders and crocodyliforms. They appear to have been found almost exclusively in warm temperate climates, with the range of neochoristoderes extending to the high Canadian Arctic during the Coniacian-Santonian stages of the Late Cretaceous (~89-83 million years ago), a time of extreme warmth. Due to the morphological similarities between choristoderes and crocodyliforms, it has often been assumed that they existed in competition. However, "non-neochoristoderes" were smaller than adult aquatic crocodyliforms and were more likely in competition with other taxa. For the more crocodile-like neochoristoderes, there appears to have been niche differentiation, with gharial-like neochoristoderes occurring in association with blunt-snouted crocodyliforms, but not in association with narrow-snouted forms. Diet Neochoristoderes are presumed to have been piscivorous. Champsosaurus in particular is thought to have fed like modern gharials, sweeping its head to the side to catch individual fish from shoals, while Simoedosaurus is thought to have been more of a generalist, able to take both aquatic and terrestrial prey. Cteniogenys and Lazarussuchus have been suggested to have fed on invertebrates. Preserved gut contents of a Monjurosuchus specimen appear to show arthropod cuticle fragments. Another specimen of Monjurosuchus has been found with the preserved skulls of seven juvenile individuals within its abdominal cavity. This has been proposed to represent evidence of cannibalism. However, this proposal has been criticised by other authors, who suggest it is more likely that the skulls represent late-stage embryos. A specimen of Hyphalosaurus has been found with small rib bones in its abdominal cavity, suggesting that it took vertebrate prey at least on occasion. Reproduction A specimen of Hyphalosaurus baitaigouensis has been found with 18 fully developed embryos within the mother's body, suggesting that the species was viviparous; however, another specimen shows that Hyphalosaurus baitaigouensis also possessed soft-shelled eggs, similar to those of lepidosaurs. A possible explanation is that Hyphalosaurus was ovoviviparous, with the thin-shelled eggs hatching immediately after they were laid, presumably on land, though it has also been suggested that the species employed both viviparous and oviparous reproductive modes.
An embryo of Ikechosaurus has been found preserved within a weakly mineralised parchment-shelled egg, suggesting that Ikechosaurus was oviparous and laid its eggs on land. Monjurosuchus has been suggested to have been viviparous. In Champsosaurus, it has been suggested that adult females could crawl ashore to lay eggs on land, while males and juveniles appear to have been incapable of doing so, based on the presumably sexually dimorphic fusion of the sacral vertebrae and the more robust limb bones of presumed females. A skeleton of Philydrosaurus has been found with associated post-hatchling stage juveniles, suggesting that the genus engaged in post-hatching parental care. Tracks Tracks from the Early Cretaceous (Albian) of South Korea, given the ichnotaxon name Novapes ulsanensis, have been attributed to choristoderes, based on the similarity of the pentadactyl (five-digit) preserved tracks to the foot morphology of Monjurosuchus. The tracks preserve traces of webbing between the digits. Based on the spacing of the prints, the authors of the study proposed that choristoderes could "high walk" like modern crocodilians. Tracks attributed to neochoristoderes, dubbed Champsosaurichnus parfeti, have also been reported from the Late Cretaceous Laramie Formation of the United States, though only two prints are present and it is not possible to distinguish whether they were made by a manus (forefoot) or pes (hindfoot). Classification and phylogeny Internal systematics Historically, the internal phylogenetics of Choristodera were unclear, with the neochoristoderes being recovered as a well-supported clade but the relationships of the "non-neochoristoderes" being poorly resolved. However, during the 2010s, the "non-neochoristoderes" from the Early Cretaceous of Asia (with the exception of Heishanosaurus), alongside Lazarussuchus from the Cenozoic of Europe, were recovered (with weak support) as belonging to a monophyletic clade, informally named the "Allochoristoderes" by Dong and colleagues in 2020 and characterised by the shared trait of completely closed lower temporal fenestrae; Cteniogenys from the Middle-Late Jurassic of Europe and North America has been consistently recovered as the basalmost choristodere. The long-necked "non-neochoristoderes" Shokawa and Hyphalosaurus have often been recovered as a clade, dubbed the Hyphalosauridae by Gao and Fox in 2005. The finding of more complete material of the previously fragmentary Khurendukhosaurus shows that it also has a long neck, and it has also been recovered as part of this clade. Phylogeny from the analysis of Dong and colleagues (2020) (cladogram not reproduced). Relationships to other reptiles Choristoderes are universally agreed to be members of Neodiapsida, but their exact placement within the clade is uncertain, due to their mix of primitive and derived features and a long ghost lineage (absence of a fossil record) after their split from other reptiles. After initially placing Choristodera in Rhynchocephalia, Cope later suggested a placement in Lacertilia due to the shape of the cervical vertebrae. Louis Dollo in 1891 returned Choristodera to Rhynchocephalia, but in 1893 suggested a close relationship with Pareiasaurus. Alfred Romer, in publications in 1956 and 1968, placed Choristodera within the paraphyletic or polyphyletic grouping "Eosuchia", describing them as "an offshoot of the basic eosuchian stock", a classification which was widely accepted.
However, the use of computer-based cladistics in the 1980s demonstrated the non-monophyly of "Eosuchia", making the classification of choristoderes again uncertain. Subsequent studies suggested placement as archosauromorphs, as lepidosauromorphs, or as members of Diapsida incertae sedis. In a 2016 analysis of neodiapsid relationships by Martín Ezcurra, they were recovered as members of the advanced neodiapsid group Sauria, in a polytomy with Lepidosauromorpha and Archosauromorpha, with a position as the earliest-diverging member of either group also being plausible. A position as basal archosauromorphs is supported by the ossification sequence of their embryos. Evolutionary history Choristoderes must have diverged from all other known reptile groups prior to the end of the Permian period, over 250 million years ago, based on their primitive phylogenetic position. In 2015, Rainer R. Schoch reported a new small (~20 cm long) diapsid from the Middle Triassic (Ladinian) Lower Keuper of southern Germany, known from both cranial and postcranial material, which he claimed represented the oldest known choristodere. Pachystropheus from the Late Triassic (Rhaetian) of Britain was historically suggested to be a choristodere, but was later demonstrated to be a member of the marine reptile group Thalattosauria. The oldest unequivocal choristodere is the small lizard-like Cteniogenys, whose oldest known remains come from the late Middle Jurassic (Bathonian, ~168-166 million years ago) Forest Marble and Kilmaluag formations of Britain, with remains also known from the Upper Jurassic Alcobaça Formation of Portugal and the Morrison Formation of the United States. Broadly similar remains are also known from the late Middle Jurassic (Callovian) Balabansai Formation of Kyrgyzstan in Central Asia and the Bathonian Itat Formation of western Siberia, as well as possibly the Bathonian-aged Anoual Formation of Morocco, North Africa. Choristoderes underwent a major evolutionary radiation in Asia during the Early Cretaceous, which represents the high point of choristoderan diversity, including the first records of the gharial-like Neochoristodera, which appear to have evolved in the regional absence of aquatic neosuchian crocodyliforms. A partial femur of an indeterminate choristodere is known from the Yellow Cat Member of the Cedar Mountain Formation in North America. Choristoderes appear to be absent from the well-sampled European localities of the Berriasian-aged Purbeck Group of Great Britain and the Barremian-aged La Huérguina Formation of Spain, though there is a record of a small Cteniogenys-like taxon from the Berriasian-aged Angeac-Charente bonebed in France. In the latter half of the Late Cretaceous (Campanian-Maastrichtian), the neochoristodere Champsosaurus is found in Utah, Wyoming, Montana, North Dakota, Alberta and Saskatchewan, which lay along the western coast of the Western Interior Seaway on the island of Laramidia. Indeterminate remains of neochoristoderes are also known from the Canadian High Arctic, dating to the early Late Cretaceous (Coniacian–Turonian), and from the Navesink Formation of New Jersey, from the latest Cretaceous (Maastrichtian) of the separate island of Appalachia. Vertebrae from the Cenomanian of Germany and the Campanian-aged Grünbach Formation of Austria indicate the presence of choristoderes in Europe during this time period. The only record of choristoderes from Asia in the Late Cretaceous is a single vertebra from the Turonian of Japan.
Fragmentary remains found in the Campanian-aged Oldman and Dinosaur Park formations of Alberta, Canada, also possibly suggest the presence of small-bodied "non-neochoristoderes" in North America during the Late Cretaceous. Champsosaurus survived the K-Pg extinction and, together with its fellow neochoristoderes Kosmodraco and Simoedosaurus, was present in Europe, Asia and North America during the Paleocene; however, neochoristoderes became extinct during the early Eocene. Their extinction coincides with major faunal turnover associated with elevated temperatures. Small-bodied "non-neochoristoderes", which are absent from the fossil record after the Early Cretaceous (except for the possible North American remains), reappear in the form of the lizard-like Lazarussuchus from the late Paleocene of France. The European endemic Lazarussuchus is the last known choristodere, surviving the extinction of the neochoristoderes at the beginning of the Eocene, with the youngest known remains being those of L. dvoraki from the Early Miocene of the Czech Republic, as well as possible indeterminate remains of Lazarussuchus reported from the late Miocene (~11.6 million years ago) of southern Germany.
Side-striped jackal
The side-striped jackal (Lupulella adusta or Schaeffia adusta) is a canine native to Central and Southern Africa. Unlike the smaller and related black-backed jackal (Lupulella mesomelas), which dwells in open plains, the side-striped jackal primarily dwells in woodland and scrub areas. Taxonomy and evolution The Swedish zoologist Carl Jakob Sundevall named the species Canis adustus in 1847. The German zoologist Max Hilzheimer proposed placing it in a different genus, as Schaeffia adusta, in 1906. Fossil remains of the side-striped jackal date to the Pliocene. A mitochondrial DNA sequence alignment for the wolf-like canids gave a phylogenetic tree with the side-striped jackal and the black-backed jackal as the most basal members of this clade, indicating an African origin for the clade. In 2019, a workshop hosted by the IUCN/SSC Canid Specialist Group recommended that, because DNA evidence shows the side-striped jackal (Canis adustus) and black-backed jackal (Canis mesomelas) form a monophyletic lineage that sits outside of the Canis/Cuon/Lycaon clade, they should be placed in a distinct genus, Lupulella Hilzheimer, 1906, with the names Lupulella adusta and Lupulella mesomelas. Studies indicate that the dentition of the side-striped jackal differs from that of the black-backed jackal, and propose that the side-striped jackal should be classified as Schaeffia adusta, following Hilzheimer in 1906. It is the surviving member of an African group whose basal member is the early Pliocene African species Eucyon khoikhoi. The recent discovery of the 5-million-year-old E. khoikhoi supports the proposed radiation of the genus Eucyon, from the oldest species, E. ferox, in North America, through E. davisi in North America and then China, to E. debonisi in Western Europe and E. khoikhoi in Africa. Description The side-striped jackal is a slender, medium-sized canid, which tends to be slightly larger on average than the black-backed jackal. Body mass ranges from , head-and-body length from and tail length from . Shoulder height can range from . Its pelt is coloured buff-grey. The back is darker grey than the underside, and the tail is black with a grey, almost silver tip. Indistinct white stripes are present on the flanks, running from elbow to hip. The boldness of the markings varies between individuals, with those of adults being better defined than those of juveniles. The side-striped jackal's skull is similar to that of the black-backed jackal, but is flatter, with a longer and narrower rostrum. Its sagittal crest and zygomatic arches are also lighter in build. Due to its longer rostrum, its third upper premolar lies almost in line with the others, rather than at an angle. Its dentition is well suited to an omnivorous diet. The long, curved canines have a sharp ridge on the posterior surface, and the outer incisors are canine-like. Its carnassials are smaller than those of the more carnivorous black-backed jackal. Females have four inguinal teats. Dietary habits The side-striped jackal tends to be less carnivorous than other jackal species, and is a highly adaptable omnivore whose dietary preferences change in accordance with seasonal and local variation. It tends to forage solitarily, though family groups of up to 12 jackals have been observed feeding together in western Zimbabwe. In the wild, it feeds largely on invertebrates during the wet season and on small mammals, such as the springhare, in the dry months.
It frequently scavenges from campsites and from the kills of larger predators. In the wild, fruit is taken exclusively in season, while in ruralised areas it can account for 30% of the animal's dietary intake. The side-striped jackal tends to be less predatory than other jackal species. It typically does not target prey exceeding the size of neonatal antelopes, and one specimen was recorded entering a duck pen to eat the birds' feed while ignoring the birds themselves. A side-striped jackal from Angola was found to be a host of an intestinal acanthocephalan worm, Pachysentis angolensis. Social behaviour and reproduction The side-striped jackal lives both solitarily and in family groups of up to seven individuals. The family unit is dominated by a breeding pair, which remains monogamous for a number of years. The breeding season for this species depends on where they live; in Southern Africa, breeding starts in June and ends in November. The side-striped jackal has a gestation period of 57 to 70 days, with an average litter of three to six young. The young reach sexual maturity at six to eight months of age, and typically begin to leave the family group when 11 months old. The side-striped jackal is among the few mammal species that mate for life, forming monogamous pairs. Subspecies There are seven recognized subspecies of the side-striped jackal:
L. a. adusta (West Africa to most of Angola) – Sundevall's side-striped jackal
L. a. bweha (East Africa; Kisumu, Kenya) – Elgon side-striped jackal
L. a. centralis (Central Africa; Cameroon, near the Uham River)
L. a. grayi (North Africa; Morocco and Tunisia)
L. a. kaffensis (Kaffa, southwestern Ethiopia) – Kaffa side-striped jackal
L. a. lateralis (East Africa; Kenya, Uasin Gishu Plateau, south of Gabon)
L. a. notatus (East Africa; Kenya, Loita Plains, Rift Valley Province) – Loita side-striped jackal
Sulfide mineral
The sulphide minerals are a class of minerals containing sulphide (S2−) or disulphide ((S2)2−) as the major anion. Some sulphide minerals are economically important as metal ores. The sulphide class also includes the selenides, the tellurides, the arsenides, the antimonides, the bismuthinides, the sulpharsenides and the sulphosalts. Sulphide minerals are inorganic compounds. Minerals Common or important examples include: acanthite, chalcocite, bornite, galena, sphalerite, chalcopyrite, pyrrhotite, millerite, pentlandite, covellite, cinnabar, realgar, orpiment, stibnite, pyrite, marcasite and molybdenite. Sulfarsenides include cobaltite, arsenopyrite and gersdorffite. Sulfosalts include pyrargyrite, proustite, tetrahedrite, tennantite, enargite, bournonite, jamesonite and cylindrite. Nickel–Strunz Classification -02- Sulphides The IMA-CNMNC proposes a new hierarchical scheme (Mills et al., 2009). This list uses the Nickel–Strunz classification (mindat.org, 10th ed., pending publication). Abbreviations: "*" - discredited (IMA/CNMNC status); "?" - questionable/doubtful (IMA/CNMNC status); "REE" - rare-earth element (Sc, Y, La, Ce, Pr, Nd, Pm, Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm, Yb, Lu); "PGE" - platinum-group element (Ru, Rh, Pd, Os, Ir, Pt). Nickel–Strunz code scheme: NN.XY.##x, where NN is the Nickel–Strunz mineral class number, X the mineral division letter, Y the mineral family letter, and ##x the mineral/group number with an optional add-on letter (a short parsing sketch follows the listing below). Class: sulphides, selenides, tellurides 02.A Simple Sulphides, Selenides, etc.
02.AA Alloys of metalloids with Cu, Ag, Sn, Au: 10a Algodonite, 10b Domeykite, 10d Koutekite; 15 Novakite, 20 Cuprostibite, 25 Kutinaite, 30 Allargentum, 35 Dyscrasite, 40 Maldonite, 45 Stistaite 02.AB Ni-metalloid alloys: 10 Orcelite, 15 Maucherite, 20 Oregonite 02.AC Alloys of metalloids with PGE: 05a Atheneite, 05a Vincentite; 10a Stillwaterite, 10b Mertieite-II, 10c Arsenopalladinite; 15a Miessiite, 15a Isomertieite, 15b Mertieite-I; 20a Stibiopalladinite, 20b Palarstanide, 20c Menshikovite; 25a Palladoarsenide, 25b Rhodarsenide, 25c Palladodymite, 25d Naldretteite, 25e Majakite, 25f Palladobismutharsenide; 30 Polkanovite; 35a Genkinite, 35b Ungavaite, 40 Polarite; 45a Froodite, 45b Iridarsenite, 45c Borishanskiite 02.B Metal Sulphides, M:S > 1:1 (mainly 2:1) 02.BA With Cu, Ag, Au: 05a Chalcocite, 05b Djurleite, 05c Geerite, 05d Roxbyite, 05e Digenite, 05f Anilite; 10 Bornite; 15a Berzelianite, 15b Bellidoite, 15c Umangite, 15d Athabascaite; 20a Rickardite, 20b Weissite; 25a Stromeyerite, 25b Mckinstryite, 25c Selenojalpaite, 25c Jalpaite, 25d Eucairite, 25e Henryite; 30a Acanthite, 30a Argentite*, 30b Aguilarite, 30b Naumannite, 30c Hessite, 30d Cervelleite, 30e Stutzite; 35 Argyrodite, 35 Putzite, 35 Canfieldite; 40a Fischesserite, 40a Petzite, 40b Uytenbogaardtite, 40c Petrovskaite, 40d Penzhinite; 45 Bezsmertnovite, 50 Bogdanovite, 55 Bilibinskite, 60 Chenguodaite 02.BB With Ni, Fe: 05 Heazlewoodite; 10 Arsenohauchecornite, 10 Bismutohauchecornite, 10 Hauchecornite, 10 Tellurohauchecornite, 10 Tucekite; 15a Argentopentlandite, 15a Cobaltpentlandite, 15a Geffroyite, 15a Manganoshadlunite, 15a Pentlandite, 15a Shadlunite, 15b Godlevskite, 15c Sugakiite; 20 Vozhminite 02.BC With Rh, Pd, Pt, etc.: 05 Palladseite, 05 Miassite; 10 Oosterboschite; 15 Jagueite, 15 Chrisstanleyite; 20 Keithconnite, 25 Vasilite, 30 Telluropalladinite, 35 Luberoite, 40 Oulankaite, 45 Telargpalite, 50 Temagamite, 55 Sopcheite, 60 Laflammeite, 65 Tischendorfite, 70 Kharaelakhite 02.BD With Hg, Tl: 05 Imiterite, 10 Gortdrumite; 15 Balkanite, 15 Danielsite; 20 Donharrisite, 25 Carlinite; 30 Bukovite, 30 Thalcusite, 30 Murunskite; 35 Rohaite, 40 Chalcothallite, 45 Sabatierite, 50 Crookesite, 55 Brodtkorbite 02.BE With Pb(Bi): 05 Betekhtinite, 10 Furutobeite; 15 Rhodplumsite, 15 Shandite; 20 Parkerite, 25 Schlemaite, 30 Pasavaite 02.C Metal Sulphides, M:S = 1:1 (and similar) 02.CA With Cu: 05a Covellite, 05b Klockmannite, 05c Spionkopite, 05d Yarrowite; 10 Nukundamite, 15 Calvertite 02.CB With Zn, Fe, Cu, Ag, Au, etc.: 05a Rudashevskyite, 05a Hawleyite, 05a Coloradoite, 05a Metacinnabar, 05a Sphalerite, 05a Tiemannite, 05a Stilleite, 05b Sakuraiite, 05c Polhemusite; 07.0 Arsenosulvanite?; 10a Chalcopyrite, 10a Eskebornite, 10a Gallite, 10a Lenaite, 10a Roquesite, 10a Laforetite, 10b Haycockite, 10b Mooihoekite, 10b Putoranite, 10b Talnakhite; 15a Cernyite, 15a Hocartite, 15a Kuramite, 15a Pirquitasite, 15a Stannite, 15a Velikite, 15a Idaite, 15a Ferrokesterite, 15a Kesterite, 15b Mohite, 15c Stannoidite; 20 Chatkalite, 20 Mawsonite; 30 Colusite, 30 Germanite, 30 Germanocolusite, 30 Nekrasovite, 30 Stibiocolusite, 30 Maikainite, 30 Ovamboite; 35a Hemusite, 35a Kiddcreekite, 35a Renierite, 35a Polkovicite, 35a Morozeviczite, 35a Catamarcaite, 35a Vinciennite; 40 Lautite; 45 Cadmoselite, 45 Rambergite, 45 Greenockite, 45 Wurtzite; 55a Cubanite, 55b Isocubanite; 60 Picotpaulite, 60 Raguinite; 65 Argentopyrite, 65 Sternbergite; 70 Sulvanite, 75 Vulcanite, 80 Empressite, 85 Muthmannite 02.CC With Ni, Fe, Co, PGE, etc.: 
05 Zlatogorite, 05 Breithauptite, 05 Freboldite, 05 Langisite, 05 Nickeline, 05 Sederholmite, 05 Stumpflite, 05 Sudburyite, 05 Sobolevskite, 05 Achavalite, 05 Jaipurite*, 05 Hexatestibiopanickelite, 05 Kotulskite; 10 Smythite, 10 Pyrrhotite, 10 Troilite; 15 Cherepanovite, 15 Modderite, 15 Ruthenarsenite, 15 Westerveldite; 20 Makinenite, 20 Millerite; 25 Mackinawite, 30 Vavrinite; 35a Braggite, 35a Cooperite, 35a Vysotskite 02.CD With Sn, Pb, Hg, etc.: 05 Herzenbergite, 05 Teallite; 10 Altaite, 10 Galena, 10 Clausthalite, 10 Alabandite, 10 Niningerite, 10 Oldhamite, 10 Keilite; 15a Cinnabar, 15b Hypercinnabar 02.D Metal Sulphides, M:S = 3:4 and 2:3 02.DA M:S = 3:4: 05 Bornhardtite, 05 Florensovite, 05 Carrollite, 05 Fletcherite, 05 Daubréelite, 05 Greigite, 05 Linnaeite, 05 Kalininite, 05 Polydymite, 05 Violarite, 05 Tyrrellite, 05 Siegenite, 05 Trustedtite, 05 Cadmoindite, 05 Cuproiridsite, 05 Cuprorhodsite, 05 Dayingite*, 05 Ferrorhodsite, 05 Indite, 05 Malanite, 05 Xingzhongite; 10 Rhodostannite, 10 Toyohaite; 15 Wilkmanite, 15 Brezinaite, 15 Heideite; 20 Inaglyite, 20 Konderite; 25 Kingstonite 02.DB M:S = 2:3 and similar: 05 Heklaite, 05a Antimonselite, 05a Guanajuatite, 05a Bismuthinite, 05a Stibnite, 05a Metastibnite, 05b Paakkonenite; 10 Ottemannite, 10 Suredaite; 15 Bowieite, 15 Kashinite; 20 Montbrayite, 25 Edgarite, 30 Tarkianite, 35 Cameronite 02.DC Variable M:S: 05 Platynite?, 05a Hedleyite, 05b Nevskite, 05b Telluronevskite, 05b Ingodite, 05b Sulphotsumoite, 05b Tsumoite, 05c Kawazulite, 05c Paraguanajuatite, 05c Skippenite, 05c Tetradymite, 05c Tellurantimony, 05c Tellurobismuthite, 05d Laitakarite, 05d Ikunolite, 05d Joseite, 05d Joseite-B, 05d Pilsenite, 05e Vihorlatite, 05e Baksanite, 05e Protojoseite*, 05e Sztrokayite* 02.E Metal Sulphides, M:S ≤ 1:2 02.EA M:S = 1:2: 05 Sylvanite, 10 Calaverite; 15 Krennerite, 15 Kostovite; 20 Berndtite, 20 Merenskyite, 20 Melonite, 20 Kitkaite, 20 Moncheite, 20 Sudovikovite, 20 Shuangfengite; 25 Verbeekite; 30 Drysdallite, 30 Jordisite, 30 Molybdenite, 30 Tungstenite 02.EB M:S = 1:2, with Fe, Co, Ni, PGE, etc.: 05a Aurostibite, 05a Cattierite, 05a Hauerite, 05a Fukuchilite, 05a Erlichmanite, 05a Geversite, 05a Insizwaite, 05a Laurite, 05a Krutaite, 05a Pyrite, 05a Penroseite, 05a Sperrylite, 05a Vaesite, 05a Villamaninite, 05a Trogtalite, 05a Dzharkenite, 05a Gaotaiite, 05b Bambollaite; 10a Frohbergite, 10a Hastite?, 10a Ferroselite, 10a Kullerudite, 10a Mattagamite, 10a Marcasite, 10b Alloclasite, 10c Glaucodot, 10d Costibite, 10e Pararammelsbergite, 10e Paracostibite, 10f Oenite; 15a Clinosafflorite, 15a Anduoite, 15a Omeiite, 15a Lollingite, 15a Nisbite, 15a Rammelsbergite, 15a Safflorite, 15b Seinajokite; 20 Paxite, 20 Arsenopyrite, 20 Gudmundite, 20 Ruarsite, 20 Osarsite; 25 Krutovite, 25 Cobaltite, 25 Changchengite, 25 Hollingworthite, 25 Gersdorffite, 25 Irarsite, 25 Jolliffeite, 25 Padmaite, 25 Platarsite, 25 Ullmannite, 25 Tolovkite, 25 Willyamite, 25 Milotaite, 25 Kalungaite, 25 Maslovite, 25 Testibiopalladite, 25 Michenerite, 25 Mayingite; 30 Urvantsevite, 35 Rheniite 02.EC M:S = 1:>2: 05 Ferroskutterudite, 05 Kieftite, 05 Dienerite?, 05 Nickelskutterudite, 05 Skutterudite; 10 Patronite 02.F Sulphides of Arsenic, Alkalies; Sulphides with Halide, Oxide, Hydroxide, H2O 02.FA With As, (Sb), S: 05 Duranusite, 10 Dimorphite, 15a Realgar, 15b Pararealgar, 20 Alacranite, 25 Uzonite; 30 Laphamite, 30 Orpiment; 35 Getchellite, 40 Wakabayashilite 02.FB With Alkalies (without Cl, etc.): 05 Cronusite, 05 Caswellsilverite, 05 Schollhornite; 10 Chvilevaite,
15 Orickite; 20 Rasvumite, 20 Pautovite; 25 Colimaite 02.FC With Cl, Br, I (halide-sulfides): 05 Djerfisherite, 05 Owensite, 05 Thalfenisite; 10 Bartonite, 10 Chlorbartonite; 15a Arzakite, 15a Corderoite, 15a Lavrentievite, 15b Kenhsuite, 15c Grechishchevite, 15d Radtkeite; 20a Capgaronnite, 20b Iltisite, 20c Perroudite; 25 Demicheleite-(Br), 25 Demicheleite-(Cl) 02.FD With O, OH, H2O: 05 Kermesite, 10 Viaeneite, 20 Erdite, 25 Coyoteite; 30 Haapalaite, 30 Valleriite, 30 Yushkinite; 35 Tochilinite, 40 Wilhelmramsayite, 45 Vyalsovite, 50 Bazhenovite 02.X Unclassified Strunz Sulphides 02.XX Unknown: 00 Horsfordite?, 00 Imgreite?, 00 Bravoite?, 00 Isochalcopyrite?, 00 Bayankhanite?, 00 Dzhezkazganite*, 00 Matraite?, 00 Iridisite*, 00 Prassoite, 00 Samaniite, 00 Horomanite, 00 Jeromite?, 00 Dilithium*, 00 Kurilite?
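The NN.XY.##x layout described above is regular enough to parse mechanically. The sketch below is a minimal illustration in Python, not part of any official IMA/CNMNC or mindat.org tooling; the function name and the example code "02.CB.05a" (sphalerite's position in the listing above) are this example's own choices.

```python
import re

# Pattern for a Nickel-Strunz code of the form NN.XY.##x:
#   NN - two-digit mineral class number (e.g. 02 = sulphides)
#   X  - mineral division letter, Y - mineral family letter
#   ## - two-digit mineral/group number, x - optional add-on letter
STRUNZ_RE = re.compile(r"^(\d{2})\.([A-Z])([A-Z])\.(\d{2})([a-z]?)$")

def parse_strunz(code: str) -> dict:
    """Split a Nickel-Strunz code into its named parts."""
    match = STRUNZ_RE.match(code)
    if match is None:
        raise ValueError(f"not a NN.XY.##x Nickel-Strunz code: {code!r}")
    cls, division, family, number, addon = match.groups()
    return {
        "class": cls,            # "02" -> sulphides, selenides, tellurides
        "division": division,    # "C"  -> metal sulphides, M:S = 1:1
        "family": family,        # "B"  -> with Zn, Fe, Cu, Ag, etc.
        "number": number,        # "05"
        "addon": addon or None,  # "a"  -> e.g. sphalerite in the list above
    }

# Example: sphalerite is listed under 02.CB.05a in the listing above.
print(parse_strunz("02.CB.05a"))
```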
Solid solution
A solid solution, a term popularly used for metals, is a homogeneous mixture of two compounds in the solid state that shares a single crystal structure. Many examples can be found in metallurgy, geology, and solid-state chemistry. The word "solution" is used to describe the intimate mixing of components at the atomic level, and distinguishes these homogeneous materials from physical mixtures of components. Two terms are mainly associated with solid solutions – solvents and solutes – depending on the relative abundance of the atomic species. In general, if two compounds are isostructural, a solid solution will exist between the end members (also known as parents). For example, sodium chloride and potassium chloride have the same cubic crystal structure, so it is possible to make a homogeneous solid with any ratio of sodium to potassium, (Na1−xKx)Cl, by dissolving that ratio of NaCl and KCl in water and then evaporating the solution. A member of this family is sold under the brand name LoSalt, which is (Na0.33K0.66)Cl; it therefore contains 66% less sodium than normal table salt (NaCl). The pure minerals are called halite and sylvite; a physical mixture of the two is referred to as sylvinite. Because minerals are natural materials, they are prone to large variations in composition. In many cases specimens are members of a solid solution family, and geologists find it more helpful to discuss the composition of the family than that of an individual specimen. Olivine is described by the formula (Mg, Fe)2SiO4, which is equivalent to (Mg1−xFex)2SiO4. The ratio of magnesium to iron varies between the two endmembers of the solid solution series, forsterite (Mg-endmember: Mg2SiO4) and fayalite (Fe-endmember: Fe2SiO4), but the ratio in a given olivine is not normally specified (a worked illustration of this notation follows below). With increasingly complex compositions, the geological notation becomes significantly easier to manage than the chemical notation. Nomenclature The IUPAC definition of a solid solution is a "solid in which components are compatible and form a unique phase". The older definition, "crystal containing a second constituent which fits into and is distributed in the lattice of the host crystal", is not general and is thus not recommended. The expression is to be used to describe a solid phase containing more than one substance when, for convenience, one (or more) of the substances, called the solvent, is treated differently from the other substances, called solutes. One or several of the components can be macromolecules. Some of the other components can then act as plasticizers, i.e., as molecularly dispersed substances that decrease the glass-transition temperature at which the amorphous phase of a polymer is converted between glassy and rubbery states. In pharmaceutical preparations, the concept of solid solution is often applied to mixtures of drug and polymer; the number of drugs whose molecules actually behave as a solvent (plasticizer) of polymers is, however, small. Phase diagrams On a phase diagram, a solid solution is represented by an area, often labeled with the structure type, which covers the compositional and temperature/pressure ranges. Where the end members are not isostructural, there are likely to be two solid solution ranges with different structures dictated by the parents. In this case the ranges may overlap and the materials in this region can have either structure, or there may be a miscibility gap in the solid state, indicating that attempts to generate materials with this composition will result in mixtures.
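As a worked illustration of the endmember notation introduced above (following standard petrological convention; the Fo/Fa endmember symbols are this example's addition, not from the text), the forsterite and fayalite contents of an olivine (Mg1−xFex)2SiO4 are simply

$$\mathrm{Fo} = 100\,\frac{\mathrm{Mg}}{\mathrm{Mg}+\mathrm{Fe}} = 100\,(1-x), \qquad \mathrm{Fa} = 100\,x,$$

so a specimen with x = 0.1 would be written Fo90, i.e. 90 mol% forsterite and 10 mol% fayalite.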
In areas of a phase diagram that are not covered by a solid solution there may be line phases: compounds with a known crystal structure and a set stoichiometry. Where the crystalline phase consists of two (non-charged) organic molecules, the line phase is commonly known as a cocrystal. In metallurgy, alloys with a set composition are referred to as intermetallic compounds. A solid solution is likely to exist when the two elements (generally metals) involved are close together on the periodic table; an intermetallic compound generally results when the two metals involved are not near each other on the periodic table. Details The solute may incorporate into the solvent crystal lattice substitutionally, by replacing a solvent particle in the lattice, or interstitially, by fitting into the space between solvent particles. Both of these types of solid solution affect the properties of the material by distorting the crystal lattice and disrupting the physical and electrical homogeneity of the solvent material. Where the atomic radius of the solute atom is larger than that of the solvent atom it replaces, the crystal structure (unit cell) often expands to accommodate it. This means that the composition of a material in a solid solution can be estimated from the unit cell dimensions, a relationship known as Vegard's law (see the sketch below). Some mixtures will readily form solid solutions over a range of concentrations, while other mixtures will not form solid solutions at all. The propensity for any two substances to form a solid solution is a complicated matter involving the chemical, crystallographic, and quantum properties of the substances in question. Substitutional solid solutions, in accordance with the Hume-Rothery rules, may form if the solute and solvent have: similar atomic radii (15% or less difference), the same crystal structure, similar electronegativities, and similar valency. A phase diagram of such a system displays an alloy of two metals which forms a solid solution at all relative concentrations of the two species. In this case, the pure phase of each element is of the same crystal structure, and the similar properties of the two elements allow for unbiased substitution through the full range of relative concentrations. Solid solutions of pseudo-binary systems, in complex systems with three or more components, may require a more involved representation of the phase diagram, with more than one solvus curve drawn corresponding to different equilibrium chemical conditions. Solid solutions have important commercial and industrial applications, as such mixtures often have superior properties to pure materials. Many metal alloys are solid solutions. Even small amounts of solute can affect the electrical and physical properties of the solvent. A binary phase diagram for two substances A and B in varying concentrations typically shows a region labeled "α", a solid solution with B acting as the solute in a matrix of A, and, at the other end of the concentration scale, a region labeled "β", also a solid solution, with A acting as the solute in a matrix of B. The large solid region in between the α and β solid solutions, labeled "α + β", is not a solid solution. Instead, an examination of the microstructure of a mixture in this range would reveal two phases: solid solution A-in-B and solid solution B-in-A form separate phases, perhaps lamellae or grains.
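A minimal sketch of the Vegard's-law relationship mentioned above, assuming ideal (linear) mixing behaviour; real solid solutions can deviate from strict linearity. The function names are this example's own, and the endmember lattice parameters for the NaCl–KCl system discussed earlier (aNaCl ≈ 5.64 Å, aKCl ≈ 6.29 Å) are approximate literature values:

```python
def vegard_lattice_parameter(x: float, a_host: float, a_solute: float) -> float:
    """Ideal Vegard's law: linear interpolation between the endmember
    lattice parameters for a solid solution A(1-x)B(x)."""
    return (1.0 - x) * a_host + x * a_solute

def composition_from_lattice(a_obs: float, a_host: float, a_solute: float) -> float:
    """Invert Vegard's law: estimate the mole fraction x of the
    substituting component from a measured lattice parameter."""
    return (a_obs - a_host) / (a_solute - a_host)

# Endmember lattice parameters in angstroms (approximate literature values).
A_NACL, A_KCL = 5.64, 6.29

# Expected lattice parameter of (Na0.33K0.67)Cl under ideal mixing:
print(vegard_lattice_parameter(0.67, A_NACL, A_KCL))   # ~6.08 A

# Composition implied by a measured lattice parameter of 5.96 A:
print(composition_from_lattice(5.96, A_NACL, A_KCL))   # x ~ 0.49
```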
Application In the phase diagram of a simple binary eutectic system, at three particular concentrations the material will be solid until heated to its melting point, and then (after adding the heat of fusion) become liquid at that same temperature: at the unalloyed extreme left, at the unalloyed extreme right, and at the dip in the center (the eutectic composition). At other proportions, the material will enter a mushy or pasty phase until it warms up to being completely melted. The mixture at the dip point of the diagram is called a eutectic alloy. Lead-tin mixtures formulated at that point (a 37/63 mixture) are useful when soldering electronic components, particularly if done manually, since the solid phase is entered quickly as the solder cools. In contrast, when lead-tin mixtures were used to solder seams in automobile bodies, a pasty state enabled a shape to be formed with a wooden paddle or tool, so a 70-30 lead-to-tin ratio was used. (Lead is being removed from such applications owing to its toxicity and the consequent difficulty in recycling devices and components that include lead.) Exsolution When a solid solution becomes unstable (due to a lower temperature, for example), exsolution occurs and the two phases separate into distinct microscopic to megascopic lamellae. This is mainly caused by differences in cation size: cations with a large difference in radii are not likely to readily substitute for one another. Alkali feldspar minerals, for example, have the end members albite, NaAlSi3O8, and microcline, KAlSi3O8. At high temperatures Na+ and K+ readily substitute for each other, and so the minerals form a solid solution, yet at low temperatures albite can accommodate only a small amount of K+, and the same applies to Na+ in microcline. This leads to exsolution, in which the two compositions separate into distinct phases. In the case of the alkali feldspar minerals, thin white albite layers alternate with typically pink microcline, resulting in a perthite texture.
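The proportions of the two phases in any two-phase field, such as the α + β region or the mushy (solid plus liquid) zone above, follow from a simple mass balance known as the lever rule; this is a standard phase-diagram construction not named in the text above. With overall composition C0 and phase-boundary compositions Cα and Cβ at the temperature of interest:

$$f_\alpha = \frac{C_\beta - C_0}{C_\beta - C_\alpha}, \qquad f_\beta = \frac{C_0 - C_\alpha}{C_\beta - C_\alpha}, \qquad f_\alpha + f_\beta = 1.$$

For example, with hypothetical boundary compositions Cα = 5% B and Cβ = 95% B and an overall composition C0 = 30% B, about 72% of the material by mass would be the α solid solution.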
Lystrosaurus
Lystrosaurus ('shovel lizard', from Ancient Greek lístron, 'tool for leveling or smoothing, shovel, spade, hoe') is an extinct genus of herbivorous dicynodont therapsids from the late Permian and Early Triassic epochs (around 248 million years ago). It lived in what is now Antarctica, India, China, Mongolia, European Russia and South Africa. Four to six species are currently recognized, although from the 1930s to the 1970s the number of species was thought to be much higher. They ranged in size from that of a small dog to 8 feet (2.5 meters) long. As a dicynodont, Lystrosaurus had only two teeth (a pair of tusk-like canines), and is thought to have had a horny beak that was used for biting off pieces of vegetation. Lystrosaurus was a heavily built, herbivorous animal. The structure of its shoulders and hip joints suggests that Lystrosaurus moved with a semi-sprawling gait. The forelimbs were even more robust than the hindlimbs, and the animal is thought to have been a powerful digger that nested in burrows. Lystrosaurus survived the Permian-Triassic extinction, 252 million years ago. In the Early Triassic, they were by far the most common terrestrial vertebrates, accounting for as many as 95% of the total individuals in some fossil beds. Researchers have offered various hypotheses for why Lystrosaurus survived the extinction event and thrived in the Early Triassic. History of discovery Dr. Elias Root Beadle, a Philadelphia missionary and avid fossil collector, discovered the first Lystrosaurus skull. Beadle wrote to the eminent paleontologist Othniel Charles Marsh, but received no reply. Marsh's rival, Edward Drinker Cope, was very interested in seeing the find, and described and named Lystrosaurus in the Proceedings of the American Philosophical Society in 1870. Its name is derived from the Ancient Greek words listron, "shovel", and sauros, "lizard". Marsh belatedly purchased the skull in May 1871, although his interest in an already-described specimen was unclear; he may have wanted to carefully scrutinize Cope's description and illustration. Plate tectonics The discovery of Lystrosaurus fossils at Coalsack Bluff in the Transantarctic Mountains by Edwin H. Colbert and his team in 1969-1970 helped support and strengthen the theory of plate tectonics, since Lystrosaurus had already been found in the Lower Triassic of southern Africa as well as in India and China. Distribution and species Lystrosaurus fossils have been found in many Late Permian and Early Triassic terrestrial bone beds, most abundantly in Africa, and to a lesser extent in parts of what are now India, China, Mongolia, European Russia, and Antarctica (which was not over the South Pole at the time). Species found in Africa Most Lystrosaurus fossils have been found in the Balfour and Katberg Formations of the Karoo basin in South Africa; these specimens offer the best prospects of identifying species because they are the most numerous and have been studied for the longest time. As so often with fossils, there is debate in the paleontological community as to exactly how many species have been found in the Karoo basin. Studies from the 1930s to 1970s suggested a large number (23 in one case). However, by the 1980s and 1990s, only six species were recognized in the Karoo: L. curvatus, L. platyceps, L. oviceps, L. maccaigi, L. murrayi, and L. declivis. A study in 2011 reduced that number to four, treating the fossils previously labeled as L. platyceps and L. oviceps as members of L. curvatus. L. maccaigi is the largest and apparently most specialized species, while L. curvatus was the least specialized.
A Lystrosaurus-like fossil, Kwazulusaurus shakai, has also been found in South Africa. Although not assigned to the same genus, K. shakai is very similar to L. curvatus. Some paleontologists have therefore proposed that K. shakai was possibly an ancestor of, or closely related to the ancestors of, L. curvatus, while L. maccaigi arose from a different lineage. L. maccaigi is found only in sediments from the Permian period, and apparently did not survive the Permian–Triassic extinction event. Its specialized features and sudden appearance in the fossil record without an obvious ancestor may indicate that it immigrated into the Karoo from an area in which Late Permian sediments have not been found. L. curvatus is found in a relatively narrow band of sediments from shortly before and after the extinction, and can be used as an approximate marker for the boundary between the Permian and Triassic periods. A skull identified as L. curvatus has been found in late Permian sediments from Zambia. For many years it had been thought that there were no Permian specimens of L. curvatus in the Karoo, which led to suggestions that L. curvatus immigrated from Zambia into the Karoo. However, a re-examination of Permian specimens in the Karoo has identified some as L. curvatus, and there is no need to assume immigration. L. murrayi and L. declivis are found only in Triassic sediments. Other species Lystrosaurus georgi fossils have been found in the earliest Triassic sediments of the Moscow Basin in Russia. It was probably closely related to the African Lystrosaurus curvatus, which is regarded as one of the least specialized species and has been found in very Late Permian and very Early Triassic sediments. L. murrayi, in addition to two undescribed species presently assigned to L. curvatus and L. declivis, is known from the Early Triassic Panchet Formation of the Damodar Valley and the Kamthi Formation of the Pranhita-Godavari Basin in India. Seven Lystrosaurus species have been described from the Early Triassic Jiucaiyuan, Guodikeng and Wutonggou formations of the Bogda Mountains in Xinjiang, China, although it is possible that only two (L. youngi and L. hedini) are valid; unusually, no Chinese Lystrosaurus specimens are known from below the Permian-Triassic boundary in this region. L. curvatus, L. murrayi, and L. maccaigi are known from the Fremouw Formation in the Transantarctic Mountains of Antarctica. Description Lystrosaurus was a dicynodont therapsid, between long with an average of about depending upon the species. Unlike other therapsids, dicynodonts had very short snouts and no teeth except for the tusk-like upper canines. Dicynodonts are generally thought to have had horny beaks like those of turtles, used for shearing off pieces of vegetation, which were then ground on a horny secondary palate when the mouth was shut. The jaw joint was weak, and moved backwards and forwards with a shearing action instead of the more common sideways or up-and-down movements. The jaw muscles are thought to have been attached unusually far forward on the skull, taking up a lot of space on the top and back of the skull. As a result, the eyes were set high and well forward on the skull, and the face was short. Features of the skeleton indicate that Lystrosaurus moved with a semi-sprawling gait.
The lower rear corner of the scapula (shoulder blade) was strongly ossified (built of strong bone), which suggests that movement of the scapula contributed to the stride length of the forelimbs and reduced the sideways flexing of the body. The five sacral vertebrae were massive but not fused to each other nor to the pelvis, making the back more rigid and reducing sideways flexing while the animal was walking. Therapsids with fewer than five sacral vertebrae are thought to have had sprawling limbs, like those of modern lizards. In dinosaurs and mammals, which have erect limbs, the sacral vertebrae are fused to each other and to the pelvis. A buttress above each acetabulum (hip socket) is thought to have prevented dislocation of the femur (thigh bone) while Lystrosaurus was walking with a semi-sprawling gait. The forelimbs of Lystrosaurus were massive, and Lystrosaurus is thought to have been a powerful burrower. Mummified specimens recovered from the Karoo Basin and described in 2022 revealed that Lystrosaurus had dimpled, leathery, and hairless skin. Paleoecology Dominance of the Early Triassic Lystrosaurus is notable for dominating southern Pangaea for millions of years during the Early Triassic. At least one unidentified species of this genus survived the end-Permian mass extinction and, in the absence of predators and herbivorous competitors, went on to thrive and re-radiate into a number of species within the genus, becoming the most common group of terrestrial vertebrates during the Early Triassic; for a while, 95% of land vertebrates were Lystrosaurus. This is the only time that a single species or genus of land animal dominated the Earth to such a degree. A few other Permian therapsid genera also survived the mass extinction and appear in Triassic rocks (the therocephalians Tetracynodon, Moschorhinus, Ictidosuchoides and Promoschorhynchus), but these do not appear to have been abundant in the Triassic; complete ecological recovery took 30 million years, spanning the Early and Middle Triassic. Several attempts have been made to explain why Lystrosaurus survived the Permian–Triassic extinction event, the "mother of all mass extinctions", and why it dominated Early Triassic fauna to such an unprecedented extent: Growth marks in fossilized tusks suggest that Lystrosaurus living in Antarctica ~250 Mya could enter a prolonged state of torpor analogous to hibernation. This could be the oldest evidence of a hibernation-like state in a vertebrate animal, and indicates that torpor arose in vertebrates before mammals and dinosaurs evolved. One of the more recent theories is that the extinction event reduced the atmosphere's oxygen content and increased its carbon dioxide content, so that many terrestrial species died out because they found breathing too difficult. It has therefore been suggested that Lystrosaurus survived and became dominant because its burrowing lifestyle made it able to cope with an atmosphere of "stale air", and that specific features of its anatomy were part of this adaptation: a barrel chest that accommodated large lungs, short internal nostrils that facilitated rapid breathing, and high neural spines (projections on the dorsal side of the vertebrae) that gave greater leverage to the muscles that expanded and contracted its chest.
However, there are weaknesses in all these points: the chest of Lystrosaurus was not significantly larger in proportion to its size than in other dicynodonts that became extinct; although Triassic dicynodonts appear to have had longer neural spines than their Permian counterparts, this feature may be related to posture, locomotion or even body size rather than respiratory efficiency; and L. murrayi and L. declivis are much more abundant than other Early Triassic burrowers such as Procolophon or Thrinaxodon. The suggestion that Lystrosaurus was helped to survive and dominate by being semi-aquatic has a similar weakness: although temnospondyls became more abundant in the Karoo's Triassic sediments, they were much less numerous than L. murrayi and L. declivis. The most specialized and the largest animals are at higher risk in mass extinctions; this may explain why the unspecialized L. curvatus survived while the larger and more specialized L. maccaigi perished along with all the other large Permian herbivores and carnivores. Although Lystrosaurus generally looks adapted to feed on plants similar to Dicroidium, which dominated the Early Triassic, the larger size of L. maccaigi may have forced it to rely on the larger members of the Glossopteris flora, which did not survive the end-Permian extinction. Only the –long therocephalian Moschorhinus and the large archosauriform Proterosuchus appear to have been large enough to prey on the Triassic Lystrosaurus species, and this shortage of predators may have been responsible for a Lystrosaurus population boom in the Early Triassic. According to Benton, "Perhaps the survival of Lystrosaurus was simply a matter of luck".
Cash (unit)
Cash or li is a traditional Chinese unit of weight. The terms "cash" or "le" were documented as having been used by British explorers in the 1830s when trading in the Qing territories of China. Under the Hong Kong statute of the Weights and Measures Ordinance, 1 cash is 1/10 of a candareen or 1/16,000 of a catty.
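A small conversion sketch follows, assuming the Hong Kong ordinance value of one catty (604.78982 g, i.e. exactly 1 1/3 avoirdupois pounds) and the traditional chain of subdivisions (16 taels per catty, 10 mace per tael, 10 candareens per mace, 10 cash per candareen). The constant and the derived figures are this sketch's assumptions, not quotations from the ordinance text above:

```python
# Hong Kong catty in grams: assumed to be exactly 1 1/3 avoirdupois
# pounds, at 453.59237 g per pound.
CATTY_G = 4 * 453.59237 / 3          # 604.78982... g

# Traditional chain of subdivisions (assumed here):
TAEL_G      = CATTY_G / 16           # ~37.80 g
MACE_G      = TAEL_G / 10            # ~3.780 g
CANDAREEN_G = MACE_G / 10            # ~0.378 g
CASH_G      = CANDAREEN_G / 10       # ~0.0378 g, i.e. ~37.8 mg

print(f"1 cash = {CASH_G:.5f} g = {CASH_G * 1000:.2f} mg")
```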
Titanis
Titanis (meaning "Titan" for the Titans of Greek mythology) is a genus of phorusrhacid ("terror birds", a group originating in South America), an extinct family of large, predatory birds, in the order Cariamiformes that inhabited the United States during the Pliocene and earliest Pleistocene. The first fossils were unearthed by amateur archaeologists Benjamin Waller and Robert Allen from the Santa Fe River in Florida and were named Titanis walleri by ornithologist Pierce Brodkorb in 1963, the species name honoring Waller. The holotype material is fragmentary, consisting of only an incomplete right tarsometatarsus (lower leg bone) and phalanx (toe bone), but comes from one of the largest phorusrhacid individuals known. In the years following the description, many more isolated elements have been unearthed from sites from other areas of Florida, Texas, and California. The species was classified in the subfamily Phorusrhacinae, which includes some of the last and largest phorusrhacids like Devincenzia and Kelenken. Like all phorusrhacids, Titanis had elongated hind limbs, a thin pelvis, proportionally small wings, and a large skull with a hooked beak. It was one of the largest phorusrhacids, possibly similar in size to Phorusrhacos based on preserved material. More recent estimates placed Titanis at in height and over in body mass. Due to the fragmentary fossils, the anatomy is poorly known, but several distinct characters on the tarsometatarsus have been observed. The skull is estimated to have been between and in length, one of the largest known from any bird. Phorusrhacids are thought to have been ground predators or scavengers, and have often been considered apex predators that dominated Cenozoic South America in the absence of placental mammalian predators, though they did co-exist with some large, carnivorous borhyaenid mammals. Titanis co-existed with many placental predators in North America and was likely one of several apex predators in its ecosystem. The tarsometatarsus was long and slender, like that of its relative Kelenken, which has been suggested to have been agile and capable of running at high speeds. Studies of the related Andalgalornis show that large phorusrhacids had very stiff and stress-resistant skulls; this indicates they may have swallowed small prey whole or targeted larger prey with repetitive strikes of the beak. Titanis is known from the Pliocene deposits of Florida, southern California, and southeastern Texas, regions that had large open savannas and a menagerie of mammalian megafauna. It likely preyed on mammals such as the extinct armadillo relatives Holmesina and Glyptotherium, equids, tapirs, capybaras, and other Pliocene herbivores. Titanis is unique among phorusrhacids in that it is the only one known from North America, crossing over from South America before the Great American Interchange. Discovery and age The earliest discovery of Titanis fossils occurred in the winter of 1961/1962, when amateur archaeologists Benjamin Waller and Robert Allen were searching for artifacts and fossils using scuba gear in the Santa Fe River on the border of Gilchrist and Columbia Counties in Florida, United States. The two collectors donated their discoveries to the Florida Museum of Natural History (UF) later along with bones of equids, proboscideans, and many other Floridan fossils from the late Pliocene and latest Pleistocene. 
Waller and Allen's avian fossils consisted of only a distal tarsometatarsus (lower leg bone) and a pedal phalanx (toe bone), deposited under specimen numbers UF 4108 and 4109 respectively. They remained unanalyzed among the museum's donations until they were recognized as unique by paleontologist Clayton Ray in 1962. He noticed the avian features and giant size of the fossils, which led him to believe they were from a phorusrhacid (or "terror bird", a group of large, predatory birds). Ray also noted their stratigraphic origin; they were found in a sedimentary layer containing the equid Nannippus and the "bone-crushing" dog Borophagus, indicating that they originated from the upper part of the Blancan stage (2.2–1.8 million years old). Ray presented the Santa Fe fossils to the museum's ornithologist Pierce Brodkorb, who mistakenly believed that they were from Rancholabrean strata, an error which made it into the final publication. In that publication, Brodkorb erroneously classified the bird as a relative of the rheas, though Ray pushed Brodkorb to assign the fossils to Phorusrhacidae. Brodkorb published his description in 1963, naming the new genus and species Titanis walleri. The generic name, Titanis, references the Greek Titans, due to the bird's large size, and the specific name, walleri, honors Waller, one of the collectors of the type specimen. As suggested by Ray, Brodkorb grouped Titanis with the subfamily Phorusrhacinae within Phorusrhacidae, along with Phorusrhacos and Devincenzia. This was the first discovery of phorusrhacids outside South America. Titanis has been found at five locales in Florida: Santa Fe River sites 1a and 1b; Inglis 1b, Citrus County; Port Charlotte, Charlotte County; and a shell pit in Sarasota County. Of the 40 Floridan specimens of Titanis, 27 have been unearthed from the Santa Fe River, many of them collected in the 1960s and '70s following Brodkorb's description. The Santa Fe River specimens come from two localities within the river, 1a and 1b. The former locality is more productive, producing elements of Titanis including vertebrae, limb bones, and even parts of the skull. Inglis 1b was originally a sinkhole during the Pliocene, but became a sedimentary layer of clay that was uncovered during construction of the Cross Florida Barge Canal by the federal government during the 1960s. A pair of graduate students from the University of Florida were the first to discover fossils in the clay sediments in 1967, sparking a wave of large-scale excavations by curator David Webb of the Florida Museum of Natural History. Work on the site lasted from 1967 to 1973, during which over 18,000 fossils were collected. Of the many fossils, only 12 belonged to Titanis, including cervical vertebrae, a carpometacarpus, and a metatarsal. As for Port Charlotte, a single fossil, a partial pedal phalanx from the fourth digit, was donated to the UF in 1990. Another partial tarsometatarsus was reportedly found in a shell pit in Sarasota County, making it the only other tarsometatarsus known from Titanis. Texan and Californian discoveries A newer discovery of Titanis was described in 1995: an isolated pedal phalanx that had been recovered from a sand and gravel pit near Odem, along the Nueces River in San Patricio County, Texas. This was the first description of Titanis fossils from outside Florida. The pit was largely disorganized, with fossils dating to the Early Pliocene and Late Pleistocene jumbled together. The description followed Brodkorb's erroneous Late Pleistocene age assessment.
Later analyses of rare-earth elements within the fossil demonstrated that the Texan Titanis derived from Pliocene rocks of the Hemphillian stage, a period preceding the formation of the Isthmus of Panama. This would make it the oldest known Titanis fossil, at an estimated 5 million years old, compared to the Floridan fossils, which are around 2.2–1.8 million years old and therefore of Blancan age. In 1961, while fossil collecting, G. Davidson Woodward acquired several avian fossils from sediments in the Pliocene-aged (3.7-million-year-old) strata of the Olla Formation in Anza-Borrego Desert State Park, California, including a wing bone found in association with the premaxilla of a giant bird. The wing bone was referred to the teratorn Aiolornis at that time, an assessment backed by ornithologist Hildegarde Howard in 1972 and supported by later studies, but a 2013 paper by paleontologist Robert Chandler and colleagues assigned the premaxilla to Titanis, the authors citing the bone's age and phorusrhacid features. The age of the Anza-Borrego premaxilla is estimated at 3.7 million years, making it the oldest confirmed fossil of Titanis, though the Texan specimen may be older. Classification During the early Cenozoic, after the extinction of the non-avian dinosaurs, mammals underwent an evolutionary diversification, and some bird groups around the world developed a tendency towards gigantism; these included the Gastornithidae, the Dromornithidae, the Palaeognathae, and the Phorusrhacidae. Phorusrhacids are an extinct group within Cariamiformes, the only living members of which are the two species of seriemas in the family Cariamidae. Although phorusrhacids are the most taxon-rich group within Cariamiformes, their interrelationships are unclear due to the incompleteness of their remains. A lineage of related predatory birds, the bathornithids, occupied North America before the arrival of phorusrhacids, living from the Eocene to the Miocene and filling a similar niche to cariamids. The oldest phorusrhacid fossils come from the Paleocene of South America (when the continent was an isolated island), and the group survived until the Pleistocene, eventually spreading to North America through Titanis. Though fossils from Europe and Africa have been assigned to the group, their classification is disputed. It is unclear where the group originated; both cariamids and phorusrhacids may have arisen in South America, or arrived from elsewhere when the southern continents were closer together or when sea levels were lower. Since phorusrhacids survived until the Pleistocene, they appear to have been more successful than the South American metatherian thylacosmilid predators (which disappeared in the Pliocene), and it is possible that they competed ecologically with placental predators that entered from North America in the Pleistocene. Titanis itself coexisted with a variety of placental mammalian predators, including carnivorans such as the saber-toothed cat Smilodon, the cheetah-like Miracinonyx, the wolf-like Aenocyon, and the short-faced bear Arctodus. All of these genera, including the last phorusrhacids, went extinct during the Late Pleistocene extinctions. Though for many decades the internal phylogenetics of Phorusrhacidae were uncertain and many taxa were named, they have received more analysis in the 21st century. Titanis, however, has consistently been regarded as being within the subfamily Phorusrhacinae along with Phorusrhacos, Kelenken, and Devincenzia.
Brazilian paleontologist Herculano Alvarenga and colleagues published a phylogenetic analysis of Phorusrhacidae in 2011 that did not separate Brontornithinae, Phorusrhacinae, and Patagornithinae, placing Titanis in a polytomy (topology 1). In their 2015 description of Llallawavis, the Argentinian paleontologist Federico J. Degrange and colleagues performed a phylogenetic analysis of Phorusrhacidae, wherein they found Phorusrhacinae to be polyphyletic, or an unnatural grouping (topology 2). Topology 1: Alvarenga et al. (2011) results Topology 2: Degrange et al. (2015) results Description Phorusrhacids were large, flightless birds with long hind limbs, narrow pelvises, and proportionally small wings. They had elongated skulls ending in a thin, hooked beak. Overall, Titanis was very similar to the South American Phorusrhacos and Devincenzia, its closest relatives. Little is known of its body structure, but it seems to have been less wide-footed than Devincenzia, with a proportionally much stronger middle toe. In its initial description, Titanis was suggested to be larger than the African ostrich and more than twice the size of the South American rhea. More accurate scaling after the discovery of new material estimated its total height at around tall. Though Titanis is suggested to be comparable in size to Phorusrhacos based on the dimensions of known specimens, researchers were not able to definitively estimate the body mass of Titanis due to the fragmentary nature of the known material. In 1995, Jon A. Baskin proposed that a tall individual would have weighed , but the 2005 study which cited Baskin suggested it to be over . Even so, this would make Titanis one of the largest phorusrhacids and birds known, with only relatives like Devincenzia and Kelenken, as well as some struthioniforms and gastornithiforms, being larger. Skull Of the skull, only the premaxilla, frontal (bone at the top of the orbit), pterygoid (palate bone), quadrate (skull joint bone), orbital process, and two quadratojugals (cheek bones) have been mentioned in the scientific literature. The skull is estimated to have been between and in length, one of the largest known from any bird. These estimates are based on the size of the quadratojugals of Titanis and the cranium of Phorusrhacos. The premaxilla of Titanis is incomplete, consisting of its frontmost end, including the characteristic long, sharp beak tip of Phorusrhacidae that would have been used for hunting. Its preserved length is , with a height of and a triangular shape in vertical cross-section. The sides of the fossil are flat, and it bears a large dorsal crest, as in other thin-skulled phorusrhacids like Phorusrhacos. The culmen (upper arc) of the exposed premaxilla is identical to that of Patagornis marshi, an Argentine phorusrhacid. The pterygoid is enlarged, as in other phorusrhacids, at in complete length, with a medially placed joint for its articulation with the basipterygoid process. Two quadratojugals are preserved, each with a different anatomy. The larger of the two has a more pronounced crest cranial to the articular tubercle, whereas the smaller quadratojugal has a deep fossa instead of a crest. Potential sexual dimorphism has been suggested because the smaller quadratojugal lacks signs of unfinished ontogenetic development, meaning both bones come from adults. In the lower jaw, a partial mandible is known, but it remains unfigured and undescribed in the scientific literature.
Being a phorusrhacine, it would have had a long and narrow mandibular symphysis ending in a sharp, downward-pointing tip. Postcranial skeleton As for the postcranial anatomy, Titanis and other phorusrhacines were heavily built. They all preserve an elongated, thin tarsometatarsus that was at least 60% the length of the tibiotarsus. Titanis is distinguished from other phorusrhacines by the anatomy of its tarsometatarsus: the distal end of the middle trochlea is spread out onto its sides, and the bone is more slender than those of related genera of the same size. The pes was large and had three digits, the third of which bore an enlarged ungual akin to those of dromaeosaurid dinosaurs. The spinal column is poorly known in Titanis, though several vertebrae have been collected. The cervical vertebrae are elongated anteroposteriorly and somewhat flexible, whereas the dorsal, sacral, and caudal vertebrae were more boxy and rigid. The dorsal vertebrae have tall neural spines atop the centra. The dorsal ribs connected to the sacral ribs, creating a basket-like underbelly. The wings were small and could not have been used for flight, but were much more strongly built than those of living ratites. Titanis also had a relatively rigid wrist, which would not have allowed the hand to fold back against the arm to the degree seen in other birds. This led R. M. Chandler to suggest in a 1994 paper that the wings may have supported some type of clawed, mobile hand similar to the hands of non-avian theropod dinosaurs such as the dromaeosaurs. This was countered by Gould and Quitmyer in a 2005 study, which demonstrated that this wing joint is not unique and is also present in seriemas, which do not have specialized grasping hands. The wing bones articulated in an unusual joint-like structure, suggesting the digits could flex to some degree. Evidence of elongated quill-feathers is known from Patagornis and Llallawavis, with large tubercles called quill knobs present on their ulnae. These quill knobs would have supported long wing feathers. Paleobiology Little is known about the paleobiology of Titanis due to the scarcity of its fossil remains. Many of its habits are inferred from related taxa like Kelenken and Andalgalornis. Features such as the pointed premaxillary beak tip and recurved pedal unguals are direct evidence of its carnivorous lifestyle. Feeding and diet Phorusrhacids are thought to have been terrestrial predators or scavengers, and have often been considered apex predators that dominated Cenozoic South America in the absence of placental mammalian predators, though they co-existed with some large, carnivorous borhyaenid mammals for much of their existence. Earlier hypotheses of phorusrhacid feeding ecology were mainly inferred from their large skulls with hooked beaks rather than from detailed biomechanical studies. Detailed analyses of their running and predatory adaptations were only conducted from the beginning of the 21st century, through the use of computer technology. Alvarenga and Elizabeth Höfling made some general remarks about phorusrhacid habits in a 2003 article. The birds were flightless, as evidenced by the proportional size of their wings and body mass, and wing size was more reduced in larger members of the group. These researchers pointed out that the narrowing of the pelvis, upper maxilla, and thorax could have been adaptations enabling the birds to search for and take smaller animals in tall plant growth or broken terrain.
The large expansions above the eyes formed by the lacrimal bones (similar to what is seen in modern hawks) would have protected the eyes against the sun and enabled keen eyesight, which indicates they hunted by sight in open, sunlit areas rather than shaded forests. Leg function In 2005, Rudemar Ernesto Blanco and Washington W. Jones examined the strength of the tibiotarsus (shin bone) of phorusrhacids to estimate their speed, but conceded that such estimates can be unreliable even for extant animals. The tibiotarsal strength of Patagornis and an indeterminate large phorusrhacine suggested a speed of , and that of Mesembriornis suggested ; the latter is greater than that of a modern ostrich, approaching that of a cheetah, . They found these estimates unlikely given the large body size of these birds, and instead suggested the strength could have been used to break the long bones of medium-sized mammals, for example those the size of a saiga or Thomson's gazelle. This strength could have been used for accessing the marrow inside the bones, or for using the legs as kicking weapons (as some modern ground birds do), consistent with the large, curved, and sideways-compressed claws known in some phorusrhacids. They also suggested future studies could examine whether the birds could have used their beaks and claws against well-armored mammals such as armadillos and glyptodonts. In a 2006 news article, Luis Chiappe, an Argentine paleontologist, stated that Kelenken, a genus similar to Titanis, would have been as quick as a greyhound, and that while there were other large predators in South America at the time, they were limited in numbers and not as fast and agile as the phorusrhacids, and the many grazing mammals would have provided ample prey. Chiappe remarked that phorusrhacids crudely resembled earlier predatory dinosaurs like Tyrannosaurus in having gigantic heads, very small forelimbs, and very long legs, and thereby similar carnivore adaptations. Skull and neck function A 2010 study by Degrange and colleagues of the medium-sized phorusrhacid Andalgalornis, based on finite element analysis using CT scans, estimated its bite force and the stress distribution in its skull. They found its bite force to be 133 newtons at the bill tip, and showed it had lost a large degree of intracranial mobility (movement of the skull bones in relation to each other), as was also the case for other large phorusrhacids such as Titanis. These researchers interpreted this loss as an adaptation for enhanced rigidity of the skull; compared to the modern red-legged seriema and white-tailed eagle, the skull of the phorusrhacid showed relatively high stress under sideways loadings, but low stress where force was applied up and down, and in simulations of "pullback", where the head was drawn backwards as if tugging at prey. Due to the relative weakness of the skull at the sides and midline, these researchers considered it unlikely that Andalgalornis engaged in potentially risky behavior that involved using its beak to subdue large, struggling prey. Instead, they suggested that it fed on smaller prey that could be killed and consumed more safely by being swallowed whole. Alternatively, if Andalgalornis did target large prey, Degrange et al. conjectured that it probably used a series of well-targeted, repetitive strikes with the beak in an "attack-and-retreat" strategy. Struggling prey could also have been restrained with the feet, despite the lack of sharp talons.
A 2012 follow-up study by Claudia Tambussi and colleagues analyzed the flexibility of the neck of Andalgalornis based on the morphology of its neck vertebrae, finding the neck to be divided into three sections. By manually manipulating the vertebrae, they concluded that the neck musculature and skeleton of Andalgalornis were adapted to carrying a large head and to raising the head after the neck had been fully extended. The researchers assumed the same would be true for other large, big-headed phorusrhacids. A 2020 study of phorusrhacid skull morphology by Degrange found that there were two main morphotypes within the group, derived from a seriema-like ancestor. These were the "Psilopterine Skull Type", which was plesiomorphic (more similar to the ancestral type), and the "Terror Bird Skull Type", which included Titanis and other large members and was more specialized, with more rigid skulls. Despite the differences, studies have shown the two types handled prey similarly; the more rigid skulls and resulting larger bite force of the "Terror Bird" type would have been an adaptation to handling larger prey. Paleoenvironment During the Blancan stage, Titanis lived alongside both endemic mammals and new immigrants from Asia and South America. Because of this, the fauna of the Blancan starkly contrasted with that of the later Pleistocene and Holocene. The localities from which Titanis is known are all tropical or subtropical in climate, with traditional interpretations indicating a habitat of dense forests and a variety of flora. At Inglis 1a specifically, previous studies have reported that longleaf pine flatwoods and pine-oak scrub occupied the area, similar to the modern flora. More recent interpretations suggest that the environment of Pliocene-Pleistocene Florida was a mosaic of different communities (i.e. a mixture of forests, savannas, wetlands, etc.), and that Titanis lived in areas of xeric thorn-scrub and savanna. Similarly, the Santa Cruz Formation, where Phorusrhacos was discovered, also comprised a variety of habitats, with Phorusrhacos suggested to have lived in open grasslands. During the Miocene-Pliocene climatic transition, the climate cooled, but temperatures did not drop to Pleistocene levels, leaving a relatively warm period. Sea levels were higher, but this was reversed at the end of the Pliocene with the onset of the large glaciations that fostered the Pleistocene's "Ice Age". The Blancan-age strata of Florida from Titanis sites preserve over a hundred species, including many different mammals. These include extinct proboscideans, and perissodactyls represented by grazing equids and browsing tapirs. A wide array of artiodactyls existed, including peccaries, camelids, pronghorns, and the extant white-tailed deer. Armadillos and their relatives, such as a pampathere, a glyptodont, and dasypodids, are also known. One of the largest groups known from the Blancan of Florida is the ground sloths, represented by three families. The carnivorans include borophagines, hyaenids, and "saber-toothed" cats. Large rodents are represented by capybaras and porcupines. Many fossils of smaller mammals like shrews, rabbits, and muskrats have been found associated with Titanis. Along with the mammals, a menagerie of reptiles including lizards, turtles, and snakes is known from fossils. There are abundant remains of avifauna, with thousands of known fossils, including birds of prey, turkeys, and the teratorn Teratornis, one of the largest flight-capable birds known.
Great American Interchange South America, the continent where phorusrhacids originated, was isolated after the breakup of the landmass Gondwana at the end of the Mesozoic era. This period of separation from the rest of the Earth's continents led to an age of unique mammalian and avian evolution, with phorusrhacids and sparassodonts dominant as predators, in contrast to the placental carnivores of North America. The fauna of North America was composed of living groups like canids, felids, ursids, tapirids, antilocaprids, and equids, alongside now-extinct families like the gomphotheres, amphicyonids, and mammutids. Phorusrhacids evolved in South America to fill niches otherwise occupied by placentals on other continents, such as that of apex predator. Flight-capable birds could migrate between continents more easily, creating a more homogeneous avian fauna. The Great American Interchange took place between the Paleogene and Pliocene, though most species crossed around 2.7 million years ago. The momentous final stage witnessed the movement of glyptodonts, capybaras, pampatheres, and marsupials to North America via the Isthmus of Panama, which connected South America to the rest of the Americas, and a reverse migration of ungulates, proboscideans, felids, canids, and many other mammal groups to South America. The oldest fossil of Titanis is estimated to be 5 million years old, at least half a million years older than the earliest date for the Isthmus's formation, about 4.5–3.5 million years ago. How Titanis was able to traverse the gap to North America is unknown. A 2006 article hypothesized that it could have island-hopped through Central America and the Caribbean islands. Titanis is possibly not the only large animal to have done this; two genera of large ground sloth and a procyonid made it to North America millions of years before the volcanic formation of the Isthmus. The period following the Isthmus's formation saw the extinction of many groups, including the South American phorusrhacids; the last phorusrhacids went extinct in the Pleistocene. Human settlement in the Americas, climate change, and other factors likely led to the extinction of most of the remaining native South American mammal families. Extinction The extinction of Titanis and other phorusrhacids throughout the Americas was originally theorized to have been due to competition with the large placental (canid, felid, and possibly ursid) carnivores that occupied the same ancient terrestrial ecosystems during the Great American Interchange. However, this has been contested, as Titanis competed successfully with these carnivores for several million years after entering North America. Brodkorb's description of Titanis as being from the latest Pleistocene, an error followed by later studies, implied that it went extinct as recently as 15,000 BP (about 13,000 BC). The rare-earth element analysis of Titanis fossils by MacFadden and colleagues in 2007 dispelled this, demonstrating that the genus lived during the Pliocene and earliest Pleistocene. Some phorusrhacid material from South America dates to the Late Pleistocene, younger than Titanis and close to the time of human arrival.
Biology and health sciences
Prehistoric birds
Animals
5589338
https://en.wikipedia.org/wiki/Summer%20squash
Summer squash
Summer squash are squashes that are harvested when immature, while the rind is still tender and edible. Most summer squashes are varieties of Cucurbita pepo, though some are C. moschata. Most summer squash have a bushy growth habit, unlike the rambling vines of many winter squashes. The term "summer squash" refers to the early harvest period and short storage life of these squashes, in contrast with winter squashes. Summer squashes include the C. pepo varieties crookneck squash, gem squash, kamokamo, pattypan squash, straightneck squash, and zucchini (courgette) and marrow, which are respectively the immature and mature fruits of the same variety of C. pepo. Other summer squashes include the C. moschata varieties aehobak and tromboncino (also called zucchetta). History In the journals of Lewis and Clark, on October 12, 1804, Clark recorded that the Arikara tribe raised "great quantities of Corn Beens Simmins, &c." Clark also used the spelling in his journal entries. Simlin, variously spelled (simnel was Thomas Jefferson's spelling), was a word for summer squash, particularly Cucurbita pepo pepo, commonly called pattypan squash. The word simnel was used because of the visual similarity between the squash and the simnel cake.
Biology and health sciences
Botanical fruits used as culinary vegetables
Plants
9460040
https://en.wikipedia.org/wiki/Lyman-alpha%20emitter
Lyman-alpha emitter
A Lyman-alpha emitter (LAE) is a type of distant galaxy that emits Lyman-alpha radiation from neutral hydrogen. Most known LAEs are extremely distant, and because of the finite travel time of light they provide glimpses into the history of the universe. They are thought to be the progenitors of most modern Milky Way-type galaxies. These galaxies can be found relatively easily in narrow-band searches by an excess of their narrow-band flux at a wavelength set by their redshift: λ = (1 + z) × 1215.67 Å, where z is the redshift, λ is the observed wavelength, and 1215.67 Å is the rest wavelength of Lyman-alpha emission. The Lyman-alpha line in most LAEs is thought to be caused by recombination of interstellar hydrogen that is ionized by an ongoing burst of star formation. Such Lyman-alpha emission was first suggested as a signature of young galaxies by Bruce Partridge and P. J. E. Peebles in 1967. Observations of the redshift of LAEs are important in cosmology because they trace dark matter halos and thus the evolution of the matter distribution in the universe. Properties Lyman-alpha emitters are typically low-mass galaxies of 10⁸ to 10¹⁰ solar masses. They are typically young galaxies, 200 to 600 million years old, and they have the highest specific star formation rate of any galaxies known. All of these properties indicate that Lyman-alpha emitters are important clues to the progenitors of modern Milky Way-type galaxies. Lyman-alpha emitters have many unknown properties. The Lyman-alpha photon escape fraction (the fraction of light emitted at the Lyman-alpha wavelength inside the galaxy that actually escapes and is visible to distant observers) varies greatly between these galaxies. There is much evidence that the dust content of these galaxies could be significant and therefore obscures their brightness. It is also possible that anisotropic distributions of hydrogen density and velocity play a significant role in the varying escape fraction, due to the photons' continued interaction with the hydrogen gas (radiative transfer). Evidence now shows strong evolution in the Lyman-alpha escape fraction with redshift, most likely associated with the buildup of dust in the interstellar medium (ISM). Dust is shown to be the main parameter setting the escape of Lyman-alpha photons. Additionally, the metallicity, outflows, and detailed evolution with redshift are unknown. Importance in cosmology LAEs are important probes of reionization and of cosmology through baryon acoustic oscillations (BAO), and they allow probing of the faint end of the luminosity function at high redshift. The baryonic acoustic oscillation signal should be evident in the power spectrum of Lyman-alpha emitters at high redshift. Baryonic acoustic oscillations are imprints of sound waves on scales where radiation pressure stabilized the density perturbations against gravitational collapse in the early universe. The three-dimensional distribution of the characteristically homogeneous Lyman-alpha galaxy population will allow a robust probe of cosmology. They are a good tool because the Lyman-alpha bias, the propensity for galaxies to form in the highest overdensities of the underlying dark matter distribution, can be modeled and accounted for. Lyman-alpha emitters are overdense in clusters.
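The wavelength-redshift relation above is simple enough to sketch in code. The following minimal Python illustration (not part of the original article) converts between redshift and observed Lyman-alpha wavelength; the filter wavelength in the example is an invented value chosen near the commonly surveyed z ≈ 5.7 window.

LYMAN_ALPHA_REST_A = 1215.67  # rest wavelength of Lyman-alpha, in angstroms

def observed_wavelength(z):
    """Observed Lyman-alpha wavelength (angstroms) for a source at redshift z."""
    return (1.0 + z) * LYMAN_ALPHA_REST_A

def redshift_from_wavelength(lambda_obs):
    """Redshift inferred from an observed Lyman-alpha wavelength (angstroms)."""
    return lambda_obs / LYMAN_ALPHA_REST_A - 1.0

# A hypothetical narrow-band filter centered at 8177 angstroms would select
# LAEs near z ~ 5.7 by their flux excess in that band:
print(round(redshift_from_wavelength(8177.0), 2))  # 5.73
print(round(observed_wavelength(5.7), 1))          # 8145.0

In a real survey, candidates showing such a narrow-band excess are then confirmed spectroscopically, since other emission lines at different redshifts can fall in the same filter.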
Physical sciences
Galaxy classification
Astronomy
9462241
https://en.wikipedia.org/wiki/Dropline
Dropline
A dropline is a commercial fishing rig consisting of a long fishing line set vertically down into the water, with a series of baited hooks attached to the ends of side-branching secondary lines called snoods. Dropline fishing, or droplining, is a specialized angling technique. Droplines may be set either down underwater trenches or just into the open water column. They have a weight at the bottom of the line and are fixed at the water surface by at least one float at the top. They are usually not as long as longlines and have fewer hooks. Droplines can be contrasted with trotlines: whereas a dropline has a series of hooks suspended sideways off a vertical mainline, a trotline has a series of hooks suspended vertically off a horizontal mainline. Conservation impacts A concern for marine conservation is that droplines are able to access areas that are otherwise natural fish refuges, such as deep-sea canyons and seamounts. The Australian Marine Conservation Society rates dropline fishing as having a "moderate impact" on wildlife and a "low impact" on marine habitats. Droplines also have the potential to interact with orcas (killer whales). Orcas prey on commercial longline and dropline fish catches, including around Tasmania and in the Bering Sea and Prince William Sound areas, causing significant financial loss to commercial fishers and posing a threat to the orcas themselves, which can become caught or entangled, be exposed to ship strikes when moving or migrating, or suffer retaliation from fishers. Retaliation in response to predation on fish catches in previous decades has included the shooting and harpooning of orcas.
Technology
Hunting and fishing
null
9464988
https://en.wikipedia.org/wiki/Land%20transport
Land transport
Land transport is the transport or movement of people, animals or goods from one location to another on land. This is in contrast with the other main types of transport, such as maritime transport and aviation. The two main forms of land transport can be considered to be rail transport and road transport. Systems Several systems of land transport have been devised, from the most basic system of humans carrying things from place to place to sophisticated networks of ground-based transportation using different types of vehicles and infrastructure. The three types are human-powered, animal-powered and machine-powered transport. Human-powered transportation Human-powered transport, a form of sustainable transportation, is the transport of people and/or goods using human muscle-power, in the form of walking, running and swimming. Modern technology has allowed machines to enhance human power. Human-powered transport remains popular for reasons of cost-saving, leisure, physical exercise, and environmentalism; it is sometimes the only type available, especially in underdeveloped or inaccessible regions. Although humans are able to walk without infrastructure, the transport can be enhanced through the use of roads, especially when human power is used with vehicles, such as bicycles and inline skates. Human-powered vehicles have also been developed for difficult environments, such as snow and water, by watercraft rowing and skiing; even the air can be entered with human-powered aircraft. Animal-powered transportation Animal-powered transport is the use of working animals for the movement of people and goods. Humans may ride some of the animals directly, use them as pack animals for carrying goods, or harness them, alone or in teams, to pull sleds or wheeled vehicles. Road transportation A road is an identifiable route, way or path between two or more places. Roads are typically smoothed, paved, or otherwise prepared to allow easy travel, though they need not be, and historically many roads were simply recognizable routes without any formal construction or maintenance. In urban areas, roads may pass through a city or village and be named as streets, serving a dual function as urban space easement and route. The most common road vehicle is the automobile, a wheeled passenger vehicle that carries its own motor. Other users of roads include buses, trucks, motorcycles, bicycles and pedestrians. As of 2002, there were 590 million automobiles worldwide. Road transport offers complete freedom to road users to transfer a vehicle from one lane to another and from one road to another according to need and convenience. This flexibility in location, direction, speed, and timing of travel is not available to other modes of transport. Only road transport can provide door-to-door service. Automobiles offer high flexibility but low capacity, with high energy and area use, and they are the main source of noise and air pollution in cities; buses allow for more efficient travel at the cost of reduced flexibility. Road transport by truck is often the initial and final stage of freight transport. Rail transportation Rail transport is where a train runs along a set of two parallel steel rails, known as a railway or railroad. The rails are anchored perpendicular to ties (or sleepers) of timber, concrete or steel, to maintain a consistent distance apart, or gauge.
The rails and perpendicular beams are placed on a foundation made of concrete or compressed earth and gravel in a bed of ballast. Alternative methods include monorail and maglev. A train consists of one or more connected vehicles that run on the rails. Propulsion is commonly provided by a locomotive that hauls a series of unpowered cars, which can carry passengers or freight. The locomotive can be powered by steam, diesel or by electricity supplied by trackside systems. Alternatively, some or all of the cars can be powered, in what is known as a multiple unit. A train can also be powered by horses, cables, gravity, pneumatics or gas turbines. Railed vehicles move with much less friction than rubber tires on paved roads, making trains more energy efficient, though not as efficient as ships. Intercity trains are long-haul services connecting cities; modern high-speed rail is capable of speeds up to , but this requires specially built track. Regional and commuter trains feed cities from suburbs and surrounding areas, while intra-urban transport is performed by high-capacity tramways and rapid transit systems, often making up the backbone of a city's public transport. Freight trains traditionally used box cars, requiring manual loading and unloading of the cargo. Since the 1960s, container trains have become the dominant solution for general freight, while large quantities of bulk goods are transported by dedicated trains. Other modes Pipeline transport sends goods through a pipe; most commonly liquids and gases are sent, but pneumatic tubes can also send solid capsules using compressed air. Any chemically stable liquid or gas can be sent through a pipeline. Short-distance systems exist for sewage, slurry, water, and beer, while long-distance networks are used for petroleum and natural gas. Cable transport is a broad mode in which vehicles are pulled by cables instead of an internal power source. It is most often used on steep slopes. Typical solutions include aerial tramways, elevators, escalators and ski lifts; some of these are also categorized as conveyor transport. Connections with other modes Airports Airports serve as stations for air transport activities, but most people and cargo transported by air must use ground transport to reach their final destination. Airport-based services are sometimes used to shuttle people to nearby hotels or motels when an overnight stay is required for connecting flights. Companies provide rental cars, private bus and taxi services, while mass transportation is usually provided by a municipality or another source of public funding. Several major airports, including Denver International and JFK International, provide many types of ground transportation, often by working with livery companies and similar businesses. Smaller airports may only have a few private rental companies and a bus service. Larger airports tend to offer several different transportation options, such as light rail and/or roads that loop around the airport to provide access from multiple terminals. Seaports As with air transport, sea transport typically requires the use of ground transport at either end of travel for people or goods to reach their final destinations. Significant infrastructure is used at ports to transfer people and goods between sea and land systems. Elements Infrastructure Infrastructure is the fixed installations that allow a vehicle to operate. It consists of a way, a terminal, and facilities for parking and maintenance.
For rail, pipeline, road, and cable transport, the entire way the vehicle travels must be built up. Terminals such as stations are locations where passengers and freight can be transferred from one vehicle or mode to another. For passenger transportation, terminals integrate different modes to allow riders to interchange and take advantage of each mode's strengths. For instance, airport rail links connect airports to city centers and suburbs. The terminals for automobiles are parking lots, while buses and coaches can operate from simple stops. For freight, terminals act as transshipment points, though some cargo is transported directly from the point of production to the point of use. The financing of infrastructure can be either public or private. Transportation is often a natural monopoly and a necessity for the public; roads, and in some countries railways and airports, are funded through taxation. New infrastructure projects can have high costs and are often financed through debt. Many infrastructure owners then impose usage fees, such as landing fees at airports or tolls on roads. Independent of this, authorities may impose taxes on the purchase or use of vehicles. Because of poor forecasting and overestimation of passenger numbers by planners, there is frequently a benefit shortfall for transport infrastructure projects. Vehicles A vehicle is any non-living device that is used to move people and goods. Unlike the infrastructure, the vehicle moves along with the cargo and riders. Unless being pulled by a cable or muscle-power, the vehicle must provide its own propulsion; this is most commonly done through a steam engine, combustion engine, or electric motor, though other means of propulsion also exist. Vehicles also need a system for converting the energy into movement; this is most commonly done through wheels, propellers or pressure. Vehicles are most commonly staffed by a driver. However, some systems, such as people movers and some rapid transits, are fully automated. For passenger transport, the vehicle must have a compartment for the passengers. Simple vehicles, such as automobiles, bicycles or simple aircraft, may have one of the passengers as a driver. Users Public Public land transport refers to the carriage of people and goods by government or commercial entities, made available to the public at large for the purpose of facilitating the economy and society they serve. Most transport infrastructure and large transport vehicles are operated in this manner. Funds to pay for such transport may come from taxes, subscriptions, direct user fees, or combinations of these methods. The vast majority of public transport is land-based, with commuting and postal delivery being the primary purposes. Commerce Commercial land transport refers to the carriage of people and goods by commercial entities, made available at a cost to individuals, businesses, and the government for the purpose of profiting the entities providing the travel. Most infrastructure used is publicly owned, and vehicles tend to be large and efficient to maximize capacity and profit margins. Freight shipping and long-distance travel are common uses served by commercial land transport. Military Military land transport refers to the carriage of people and goods by the military or other operators for the purpose of supporting military operations, both in peacetime and in combat areas.
Such activity may use a combination of public infrastructure as well as military-specific infrastructure, and in many cases it is designed to operate with little or no infrastructure when necessary. Vehicles can range from basic commercial or even private vehicles to those specifically designed for military use. Private Private land transport refers to individuals and organizations transporting themselves and their own people, animals, and goods at their own discretion. Vehicles used are typically smaller, though publicly owned infrastructure is often used for travel. Function Relocation of travelers and cargo are the most common uses of transport. However, other uses exist, such as the strategic and tactical relocation of armed forces during warfare, or the civilian movement of construction or emergency equipment. Passenger Passenger transport, or travel, is divided into public and private transport. Public transport consists of scheduled services on fixed routes, while private transport consists of vehicles that provide ad hoc services at the rider's desire. The latter offers better flexibility, but has lower capacity and a higher environmental impact. Travel may be as part of daily commuting, for business, leisure or migration. Short-haul transport is dominated by the automobile and mass transit. The latter consists of buses in rural areas and small cities, supplemented with commuter rail, trams and rapid transit in larger cities. Long-haul transport involves the use of the automobile, trains, coaches and aircraft, the last of which has become the predominant mode for the longest journeys, including intercontinental travel. Intermodal passenger transport is where a journey is performed through the use of several modes of transport; since all human transport normally starts and ends with walking, all passenger transport can be considered intermodal. Public transport may also involve the intermediate change of vehicle, within or across modes, at a transport hub, such as a bus or railway station. Taxis and buses can be found at both ends of the public transport spectrum. Buses are the cheaper mode of transport but are not necessarily flexible, and taxis are very flexible but more expensive. In the middle is demand-responsive transport, offering flexibility whilst remaining affordable. International travel may be restricted for some individuals due to legislation and visa requirements. Freight Freight transport, or shipping, is a key part of the value chain in manufacturing. With increased specialization and globalization, production is being located further away from consumption, rapidly increasing the demand for transport. While all modes of transport are used for cargo transport, the nature of the cargo strongly influences which mode is chosen. Logistics refers to the entire process of transferring products from producer to consumer, including storage, transport, transshipment, warehousing, material-handling and packaging, with the associated exchange of information. Incoterms deal with the handling of payment and responsibility for risk during transport. Containerization, with the standardization of ISO containers on all vehicles and at all ports, has revolutionized international and domestic trade, offering a huge reduction in transshipment costs. Traditionally, all cargo had to be manually loaded and unloaded into the hold of any vehicle; containerization allows for automated handling and transfer between modes, and the standardized sizes allow for gains in economy of scale in vehicle operation.
This has been one of the key driving factors in international trade and globalization since the 1950s. Bulk transport is common for cargo that can be handled roughly without deterioration; typical examples are ore, coal, cereals and petroleum. Because of the uniformity of the product, mechanical handling can allow enormous quantities to be handled quickly and efficiently. The low value of the cargo combined with high volume also means that economies of scale become essential in transport, and whole trains are commonly used to transport bulk goods. Liquid products with sufficient volume may also be transported by pipeline. History Humans' first means of land transport was walking. The domestication of animals introduced a new way to lay the burden of transport on more powerful creatures, allowing heavier loads to be hauled, or humans to ride the animals for greater speed and duration. Inventions such as the wheel and the sled helped make animal transport more efficient through the introduction of vehicles. However, water transport, including rowed and sailed vessels, was the only efficient way to transport large quantities or over large distances prior to the Industrial Revolution. The first forms of road transport were horses, oxen or even humans carrying goods over dirt tracks that often followed game trails. Paved roads were built by many early civilizations, including Mesopotamia and the Indus Valley civilization. The Persian and Roman empires built stone-paved roads to allow armies to travel quickly. Deep roadbeds of crushed stone underneath ensured that the roads kept dry. The medieval Caliphate later built tar-paved roads. Until the Industrial Revolution, transport remained slow and costly, and production and consumption were located as close to each other as feasible. The Industrial Revolution in the 19th century saw a number of inventions fundamentally change transport. With telegraphy, communication became instant and independent of transport. The invention of the steam engine, closely followed by its application in rail transport, made land transport independent of human or animal muscles. Both speed and capacity increased rapidly, allowing specialization through manufacturing being located independently of natural resources. With the development of the combustion engine and the automobile at the turn of the 20th century, road transport became more viable, allowing the introduction of mechanical private transport. The first highways were constructed during the 19th century with macadam. Later, tarmac and concrete became the dominant paving materials. After World War II, the automobile and airlines took higher shares of transport, reducing rail to freight and short-haul passenger service. In the 1950s, the introduction of containerization gave massive efficiency gains in freight transport, permitting globalization. International air travel became much more accessible in the 1960s with the commercialization of the jet engine. Along with the growth in automobiles and motorways, this brought a decline in rail transport. After the introduction of the Shinkansen in 1964, high-speed rail in Asia and Europe started taking passengers on long-haul routes away from airlines. Early in U.S. history, most aqueducts, bridges, canals, railroads, roads, and tunnels were owned by private joint-stock corporations.
Most such transportation infrastructure came under government control in the late 19th and early 20th centuries, culminating in the nationalization of inter-city passenger rail service with the creation of Amtrak. Recently, however, a movement to privatize roads and other infrastructure has gained some ground and adherents. Impact Economic Transport is a key necessity for specialization, allowing production and consumption of products to occur at different locations. Transport has throughout history been a spur to expansion; better transport allows more trade and a greater spread of people. Economic growth has always been dependent on increasing the capacity and rationality of transport. But the infrastructure and operation of transport have a great impact on the land, and transport is the largest drainer of energy, making transport sustainability a major issue. Modern society dictates a physical distinction between home and work, forcing people to transport themselves to places of work or study, as well as to temporarily relocate for other daily activities. Passenger transport is also the essence of tourism, a major part of recreational transport. Commerce requires the transport of people to conduct business, either to allow face-to-face communication for important decisions or to move specialists from their regular place of work to sites where they are needed. Planning Transport planning allows for high utilization and reduced impact of new infrastructure. Using models of transport forecasting, planners are able to predict future transport patterns. On the operative level, logistics allows owners of cargo to plan transport as part of the supply chain. Transport as a field is studied through transport economics, the backbone for the creation of regulation policy by authorities. Transport engineering, a sub-discipline of civil engineering, must take into account trip generation, trip distribution, mode choice and route assignment (a worked sketch of the trip-distribution step appears at the end of this article), while the operative level is handled through traffic engineering. Because of its negative impacts, transport often becomes the subject of controversy related to choice of mode, as well as increased capacity. Automotive transport can be seen as a tragedy of the commons, where the flexibility and comfort for the individual deteriorate the natural and urban environment for all. Density of development depends on mode of transport, with public transport allowing for better spatial utilization. Good land use keeps common activities close to people's homes and places higher-density development closer to transport lines and hubs, to minimize the need for transport. There are economies of agglomeration; beyond transportation, some land uses are more efficient when clustered. Transportation facilities consume land, and in cities, pavement (devoted to streets and parking) can easily exceed 20 percent of the total land use. An efficient transport system can reduce land waste. Too much infrastructure and too much smoothing for maximum vehicle throughput mean that in many cities there is too much traffic and many, if not all, of the negative impacts that come with it. It is only in recent years that traditional practices have started to be questioned in many places, and as a result of new types of analysis which bring in a much broader range of skills than those traditionally relied on, spanning such areas as environmental impact analysis, public health, sociology and economics, the viability of the old mobility solutions is increasingly being questioned.
European cities are leading this transition. Environment Transport is a major use of energy and burns most of the world's petroleum. This creates air pollution, including nitrogen oxides and particulates, and is a significant contributor to global warming through emission of carbon dioxide, for which transport is the fastest-growing emission sector. By subsector, road transport is the largest contributor to global warming. Environmental regulations in developed countries have reduced individual vehicles' emissions; however, this has been offset by increases in the number of vehicles and in the use of each vehicle. Some pathways to considerably reduce the carbon emissions of road vehicles have been studied. Energy use and emissions vary widely between modes, causing environmentalists to call for a transition from road to rail and human-powered transport, as well as increased transport electrification and energy efficiency. Other environmental impacts of transport systems include traffic congestion and automobile-oriented urban sprawl, which can consume natural habitat and agricultural lands. Reducing transportation emissions globally is predicted to have significant positive effects on Earth's air quality, acid rain, smog and climate change.
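As a concrete illustration of the four-step travel-demand model mentioned under Planning above, the following is a minimal Python sketch (not part of the original article) of the trip-distribution step using a simple gravity model: trips between zones scale with trip productions, destination attractiveness, and an inverse-square travel-cost deterrence term. All zone names and numbers are invented for the example.

# Hypothetical two-origin, two-destination gravity-model trip distribution.
productions = {"A": 1000, "B": 500}      # trips produced per origin zone
attractions = {"X": 800, "Y": 700}       # relative attractiveness per destination
cost = {("A", "X"): 5, ("A", "Y"): 10,   # travel cost (e.g., minutes) between zones
        ("B", "X"): 8, ("B", "Y"): 4}

def gravity_trips(origin):
    """Split one zone's produced trips across destinations by attraction / cost**2."""
    weights = {d: attractions[d] / cost[(origin, d)] ** 2 for d in attractions}
    total = sum(weights.values())
    return {d: productions[origin] * w / total for d, w in weights.items()}

for o in productions:
    print(o, {d: round(t) for d, t in gravity_trips(o).items()})
# A {'X': 821, 'Y': 179}  -- most of A's trips go to the cheaper-to-reach zone X
# B {'X': 111, 'Y': 389}

In practice, the deterrence function and its exponent are calibrated against observed travel surveys rather than fixed at an inverse square, and the resulting trip matrix feeds the subsequent mode-choice and route-assignment steps.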
Technology
Concepts of ground transport
null
9467014
https://en.wikipedia.org/wiki/Crested%20caracara
Crested caracara
The crested caracara (Caracara plancus), also known as the Mexican eagle, is a bird of prey (raptor) in the falcon family, Falconidae. It was formerly placed in the genus Polyborus before being given its own genus, Caracara. It is native to the southern and southeastern United States, Mexico (where it is present in every state) and the majority of mainland Latin America, as well as some Caribbean islands. The crested caracara is quite adaptable and hardy for a species found predominantly in the neotropics; it can be found in a range of environments and ecosystems, including semi-arid and desert climates, maritime or coastal areas, subtropical and tropical forests, temperate regions, plains, swamps, and even urban areas. Documented, albeit rare, sightings have occurred as far north as Minnesota and the Canadian provinces of Ontario and Prince Edward Island. The southern extent of the crested caracara's distribution reaches as far as Tierra del Fuego and the Magallanes Region, Chile. Taxonomy In 1777, English illustrator John Frederick Miller included a hand-coloured plate of the crested caracara in his Icones animalium et plantarum ("icons of the animal and plant world"). He coined the binomial name Falco plancus and specified the type locality as Tierra del Fuego. The specific epithet plancus is Latin for "eagle". The crested caracara is now placed in the genus Caracara (which was introduced in 1826 by the German naturalist Blasius Merrem). Two subspecies are recognised: C. p. cheriway (Jacquin, 1784) – United States (Southern California, Arizona, Florida, Louisiana, New Mexico, Texas), México (present in every state), Belize, El Salvador, Guatemala, Honduras, Nicaragua, Costa Rica, Panamá, Colombia, Venezuela, Ecuador, Guyana, Suriname, French Guiana, and northern Roraima, Brazil; the Caribbean islands of Cuba, Aruba, Guanaja and Roatán (Honduras), and Trinidad; and the Pacific Islas Marías (Mexico) and Isla del Rey (Panamá). Individual birds have been seen as far north as Dallas, Texas and Santa Cruz, California. C. p. plancus (Miller, JF, 1777) – SE Perú and N Bolivia to eastern Brazil, south to Tierra del Fuego and the Falkland Islands. The subspecies C. p. cheriway was formerly classed as a separate species, with the common English name of northern crested caracara. Description The crested caracara has a total length of and a wingspan of . Its weight is , averaging in seven birds from Tierra del Fuego. Individuals from the colder southern part of its range average larger than those from tropical regions (as predicted by Bergmann's rule) and are the largest type of caracara. In fact, the species is the second-largest falcon in the world by mean body mass, second only to the gyrfalcon. The cap, belly, thighs, most of the wings, and tail tip are dark brownish; the auriculars (feathers surrounding the ear), throat, and nape are whitish-buff; and the chest, neck, mantle, back, upper tail coverts, crissum (the undertail coverts surrounding the cloaca), and basal part of the tail are whitish-buff barred with dark brownish. In flight, the outer primaries show a large conspicuous whitish-buff patch ('window'), as in several other species of caracaras. The legs are yellow and the bare facial skin and cere are deep yellow to reddish-orange. (The facial color can change depending on the bird's mood.) Juveniles resemble adults, but are paler, with streaking on the chest, neck, and back, grey legs, and whitish, later pinkish-purple, facial skin and cere.
Behavior A bold, opportunistic raptor, the crested caracara is often seen walking around on the ground looking for food. It mainly feeds on the carcasses of dead animals, but it also steals food from other raptors, raids bird and reptile nests, and takes live prey if the possibility arises; mostly this is insects or other small prey, such as small mammals, small birds, amphibians, reptiles, fish, crabs, other shellfish, maggots, and worms, but it can include creatures up to the size of a snowy egret. It may also eat fruit. It is dominant over the black and turkey vultures at carcasses. Furthermore, it pirates food from them and from buteos, as well as from brown pelicans, ibises, and spoonbills, chasing and harrying them until they regurgitate or drop food. The crested caracara takes live prey that has been flushed by wildfire, cattle, and farming equipment. Locally, it has even learnt to follow trains or cars for food thrown out of them. The opportunistic nature of this species means that the crested caracara seeks out the phenomena associated with its food, e.g. wildfires and circling vultures. It is typically solitary, but several individuals may gather at a large food source (e.g. dumps). Breeding takes place in the Southern Hemisphere spring/summer in the southern part of its range, but timing is less strict in warmer regions. The nest is a large, open structure, typically placed on the top of a tree or palm, but sometimes on the ground. The typical clutch size is two eggs. Distribution and habitat The crested caracara occurs from Tierra del Fuego in southernmost South America to the southern United States, Mexico, and Central America. An isolated population occurs on the Falkland Islands. It avoids the Andean highlands and dense humid forests, such as the Amazon rainforest, where it is largely restricted to relatively open sections along major rivers. Otherwise, it occurs in virtually any open or semi-open habitat and is often found near humans. Reports have been made of the crested caracara as far north as San Francisco, California, and, in 2012, near Crescent City, California. Some may be living in Nova Scotia, with numerous sightings throughout the 2010s. In July 2016, a northern caracara was reported and photographed by numerous people in the Upper Peninsula of Michigan, just outside Munising. In June 2017, a northern caracara was sighted as far north as St. George, New Brunswick, Canada. A specimen was photographed in Woodstock, Vermont, in March 2020. The species has recently become more common in central and north Texas and is generally common in south Texas and south of the US border. It can also be found (nesting) in the southern Caribbean (e.g. Aruba, Curaçao and Bonaire), Mexico, and Central America. Florida caracara Florida is home to a relict population of northern caracaras that dates to the last glacial period, which ended around 12,500 BP. At that time, Florida and the rest of the Gulf Coast were covered in an oak savanna. As temperatures increased, the savanna between Florida and Texas disappeared. Caracaras were able to survive in the prairies of central Florida and the marshes along the St. Johns River. Cabbage palmettos are a preferred nesting site, although they also nest in southern live oaks. Their historical range on the modern-day Florida peninsula included Okeechobee, Osceola, Highlands, Glades, Polk, Indian River, St. Lucie, Hardee, DeSoto, Brevard, Collier, and Martin counties.
They are currently most common in DeSoto, Glades, Hendry, Highlands, Okeechobee, and Osceola counties. The species has been seen on the East Coast as far as extreme eastern Brevard County, Florida (Viera), where it is now considered a resident but is listed as threatened. In February 2023, a crested caracara was identified in St. Johns County, Florida, and documented by the St. Johns County Audubon Society on its social media page. Crested caracara in Mexico Mexican ornithologist Rafael Martín del Campo proposed that the northern caracara was possibly the sacred "eagle" depicted in several pre-Columbian Aztec codices, as well as in the Florentine Codex. This imagery was adopted as a national symbol of Mexico, but it is not the bird depicted on the flag, which is a golden eagle (Aquila chrysaetos), the national bird. Texan eagle Balduin Möllhausen, the German artist accompanying the 1853 railroad survey (led by Lt. Amiel Weeks Whipple) from the Canadian River to California along the 35th parallel, recounted observing what he called the "Texan Eagle", which, in his account, he identified as Audubon's Polyborus vulgaris. This sighting occurred in the Sans Bois Mountains in southeastern Oklahoma. Status Throughout most of its range, the crested caracara is common to very common. It is likely to benefit from the widespread deforestation in tropical South America, and so is considered to be of least concern by BirdLife International.
Biology and health sciences
Accipitrimorphae
Animals
1014565
https://en.wikipedia.org/wiki/Grande%20Dixence%20Dam
Grande Dixence Dam
The Grande Dixence Dam () is a concrete gravity dam on the Dixence at the head of the Val d'Hérémence in the canton of Valais in Switzerland. At 285 m, it is the tallest gravity dam in the world, the seventh-tallest dam overall, and the tallest dam in Europe. It is part of the Cleuson-Dixence Complex. With the primary purpose of hydroelectric power generation, the dam fuels four power stations, bringing the installed capacity to , generating approximately annually, enough to power 400,000 Swiss households. The dam impounds the Lac des Dix ('Lake of the Ten'), its reservoir. With a surface area of 4 km2, it is the second-largest lake in Valais and the largest lake above 2,000 m in the Alps. The reservoir receives its water from four pumping stations: the Z'Mutt, Stafel, Ferpècle and Arolla. At peak capacity, it contains approximately of water, with depths reaching up to . Construction on the dam began in 1950 and was completed in 1961, before the dam was officially commissioned in 1965. History In 1922, Energie Ouest Suisse (EOS) was established with a few small power stations. To generate substantial amounts of electricity, EOS looked to the Valais canton, which contains 56% of Switzerland's glaciers and stores the largest amount of water in Europe. In 1927, EOS acquired the license for the upper Dixence basin. In 1929, 1,200 workers began constructing the first Dixence dam, which was completed in 1935. This first dam supplied water to the Chandoline Power Station, which has a capacity of 120 MW. After the Second World War, growing industries needed electricity, and construction of the Cleuson Dam began in 1947 and was completed in 1951. The original Dixence dam was submerged by the filling of Lac des Dix beginning in 1957; it can still be seen when the reservoir level is low. Plans for the Super Dixence Dam were finalized by the recently founded company Grande Dixence SA. Construction on the Super Dixence Dam began in late 1950. By 1961, 3,000 workers had finished pouring of concrete, completing the dam. At 285 m, it was the world's tallest dam at the time, but it was surpassed by the Nurek Dam of Tajikistan (300 m) in 1972. It remains the world's tallest gravity dam. In the 1980s, Grande Dixence SA and EOS began the Cleuson-Dixence project, which improved the quality of the electricity produced by building new tunnels along with the Bieudron Power Station. By the time the Cleuson-Dixence Complex was complete, the power generated had more than doubled. A short documentary film, Opération béton, was made about the dam's construction by Jean-Luc Godard in his directorial debut. Characteristics The Grande Dixence Dam is a high, long concrete gravity dam. The dam is wide at its base and wide at its crest. The dam's crest reaches an altitude of . The dam structure contains approximately of concrete. To secure the dam to the surrounding foundation, a grout curtain surrounds the dam, reaching a depth of and extending on each side of the valley. Although the dam is situated on the relatively small Dixence, water from other rivers and streams is pumped in by the Z'Mutt, Stafel, Ferpècle and Arolla pumping stations. The pumping stations transport the water through of tunnels into Lac des Dix. Water from the high Cleuson Dam, located to the northwest, is also transported from its reservoir, the Lac de Cleuson. Three penstocks transport water from Lac des Dix to the Chandoline, Fionnay, Nendaz and Bieudron power stations, before it is discharged into the Rhône below.
All the pumping stations, power stations and dams form the Cleuson-Dixence Complex. Although the complex operates with water being pumped from one reservoir to another, it does not technically qualify as a pumped-storage scheme. Most of the water comes from glaciers melting during the summer. The lake is usually at full capacity by late September, and empties during the winter, eventually reaching its lowest point around April. Power stations Chandoline Power Station The Chandoline Power Station was the power station for the original Dixence Dam. The Grande Dixence Dam submerged the original dam, but the power station still operates with water received from the reservoir of the Grande Dixence Dam, Lac des Dix. The power station is the smallest of the four, producing 120 MW from five Pelton turbines. Fionnay Power Station The Fionnay Power Station receives water from the Grande Dixence Dam by a long tunnel with an average gradient of 10%. Once the tunnel reaches a surge chamber at Louvie in Bagnes, it turns into a penstock which descends at a gradient of 73% until it reaches the power station. The water then spins six Pelton turbines. Nendaz Power Station After arriving at the Fionnay Power Station from the Grande Dixence Dam, water travels through a pressure tunnel which eventually leads into the Péroua surge chamber above the Nendaz Power Station, where it spins a further six Pelton turbines. The Nendaz power station is located within the mountains between Aproz and Riddes and is the second-largest hydroelectric power station in Switzerland after the Bieudron Power Station. Bieudron Power Station Water from the Grande Dixence Dam travels down a long penstock before reaching the Bieudron Power Station. The water spins three Pelton turbines, generating a combined capacity of 1,269 MW. The power station was constructed after the Nendaz and Fionnay power stations, built by both Grande Dixence SA and Energie Ouest Suisse between 1993 and 1998 at a cost of US$1.2 billion. The Bieudron Power Station alone holds three world records: the height of its head (1,883 m), the output of each Pelton turbine, and the output per pole of its generators. It was taken out of service in December 2000 after the rupture of a penstock. The power station became partially operational in December 2009 and fully operational in 2010.
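The relationship between gross head, flow rate, and electrical output that governs all four stations is the standard hydropower formula P = η·ρ·g·Q·H. The sketch below is purely illustrative: the head, flow, and efficiency values are hypothetical placeholders, not the actual ratings of any Cleuson-Dixence station.

```python
# Standard hydropower formula: P = efficiency * rho * g * Q * H.
# All numbers below are illustrative placeholders, not actual plant ratings.

RHO_WATER = 1000.0   # density of water, kg/m^3
G = 9.81             # gravitational acceleration, m/s^2

def hydro_power_mw(head_m: float, flow_m3_per_s: float, efficiency: float = 0.9) -> float:
    """Electrical output in megawatts for a given gross head and flow rate."""
    watts = efficiency * RHO_WATER * G * flow_m3_per_s * head_m
    return watts / 1e6

# A hypothetical 1,000 m head at 25 m^3/s gives roughly 220 MW:
print(f"{hydro_power_mw(1000.0, 25.0):.0f} MW")
```

The formula makes clear why the very high heads of the Dixence system allow large outputs from comparatively modest flows.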
Technology
Dams
null
1014590
https://en.wikipedia.org/wiki/Cannabis%20sativa
Cannabis sativa
Cannabis sativa is an annual herbaceous flowering plant. The species was first classified by Carl Linnaeus in 1753. The specific epithet sativa means 'cultivated'. Indigenous to Eastern Asia, the plant is now of cosmopolitan distribution due to widespread cultivation. It has been cultivated throughout recorded history and used as a source of industrial fiber, seed oil, food, and medicine. It is also used as a recreational drug and for religious and spiritual purposes. Description The flowers of Cannabis sativa plants are most often either male or female, but only plants displaying female pistils can be or become hermaphroditic; males never become hermaphrodites. It is a short-day flowering plant, with staminate (male) plants usually taller and less robust than pistillate (female) plants. The flowers of the female plant are arranged in racemes and can produce hundreds of seeds. Male plants shed their pollen and die several weeks prior to seed ripening on the female plants. Under typical conditions with a light period of 12 to 14 hours, both sexes are produced in equal numbers because of heritable X and Y chromosomes. Although genetic factors dispose a plant to become male or female, environmental factors including the diurnal light cycle can alter sexual expression. Naturally occurring monoecious plants, with both male and female parts, are either sterile or fertile, but artificially induced "hermaphrodites" can have fully functional reproductive organs. "Feminized" seed sold by many commercial seed suppliers is derived from artificially "hermaphroditic" females that lack the male gene, or produced by treating the plants with hormones or silver thiosulfate. Chemical constituents Although the main psychoactive constituent of Cannabis is tetrahydrocannabinol (THC), the plant is known to contain more than 500 compounds, among them at least 113 cannabinoids; however, most of these "minor" cannabinoids are produced only in trace amounts. Besides THC, another cannabinoid produced in high concentrations by some plants is cannabidiol (CBD), which is not psychoactive but has recently been shown to block the effect of THC in the nervous system. Differences in the chemical composition of Cannabis varieties may produce different effects in humans. Synthetic THC, called dronabinol, does not contain cannabidiol (CBD), cannabinol (CBN), or other cannabinoids, which is one reason why its pharmacological effects may differ significantly from those of natural Cannabis preparations. Besides cannabinoids, the chemical constituents of Cannabis include about 120 compounds responsible for its characteristic aroma. These are mainly volatile terpenes and sesquiterpenes, including α-pinene, myrcene, linalool, limonene, trans-β-ocimene, α-terpinolene, trans-caryophyllene, α-humulene (which contributes to the characteristic aroma of Cannabis sativa), and caryophyllene (with which some hashish detection dogs are trained). A 1980 study identifying constituents of C.
sativa established 19 major chemical families (number of chemicals within each group in parentheses): acids (18), alcohols (6), aldehydes (12), amino acids (18), cannabinoids (55), esters/lactones (11), flavonoid glycosides (14), fatty acids (20), hydrocarbons (46), ketones (13), nitrogenous compounds (18), non-cannabinoid phenols (14), phytocannabinoids (111), pigments (2), proteins (7), steroids (9), sugars (32), terpenes (98), and vitamins (1). Cannabis also produces numerous volatile sulfur compounds that contribute to the plant's skunk-like aroma, with prenylthiol (3-methyl-2-butene-1-thiol) identified as the primary odorant. These compounds are found in much lower concentrations than the major terpenes and sesquiterpenes. However, they contribute significantly to the pungent aroma of cannabis because of the low odor thresholds typical of thiols and other sulfur-containing compounds. A number of specific aromatic compounds have been implicated in variety-specific aromas. These include another class of volatile sulfur compounds, referred to as tropical volatile sulfur compounds, which includes 3-mercaptohexanol, 3-mercaptohexyl acetate, and 3-mercaptohexyl butyrate. These compounds possess powerful and distinctive fruity, tropical, and citrus aromas at low concentrations such as those found in certain cannabis varieties. These compounds are also important in the citrus and tropical flavors of hops, beer, wine, and tropical fruits. In addition to volatile sulfur compounds, the heterocyclic compounds indole and skatole (3-methyl-1H-indole) contribute to the chemical or savory aromas of certain varieties. Skatole in particular was identified as a key contributor to this scent. This compound is found in mammalian feces and is used in the perfuming industry. It possesses a complex aroma that is highly dependent on concentration. Cultivation A Cannabis plant in the vegetative growth phase of its life requires more than 16–18 hours of light per day to stay vegetative. Flowering usually occurs when darkness equals at least 12 hours per day. The flowering cycle can last anywhere between seven and fifteen weeks, depending on the strain and environmental conditions. When the production of psychoactive cannabinoids is sought, female plants are grown separately from male plants to induce parthenocarpy in the female plant's fruits (popularly called "sin semilla", Spanish for 'without seed') and to increase the production of cannabinoid-rich resin. In soil, the optimum pH for the plant is 6.3 to 6.8. In hydroponic growing, the nutrient solution is best at 5.2 to 5.8, making Cannabis well-suited to hydroponics because this pH range is hostile to most bacteria and fungi (a short illustrative sketch of these parameters follows the cultivar list below). Tissue culture multiplication has become important in producing medically important clones, while seed production remains the generally preferred means of multiplication. Sativa plants have narrow leaves and grow best in warm environments. They do, however, take longer to flower than their Indica counterparts, and they grow taller than Indica strains as well. Cultivars Broadly, three main cultivar groups of cannabis are cultivated today: cultivars grown primarily for their fibre, characterized by long stems and little branching; cultivars grown for seed, which can be eaten entirely raw or from which hemp oil is extracted; and cultivars grown for medicinal or recreational purposes, characterized by extensive branching to maximize the number of flowers.
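As a rough illustration of the cultivation parameters above, the following sketch classifies the expected growth phase from the daily dark period and checks pH against the optimum ranges. The thresholds are taken from the text; the function names are hypothetical, not from any horticultural library.

```python
# A minimal sketch of the cultivation parameters described above
# (photoperiod threshold and pH ranges are taken from the text).

def growth_phase(hours_of_darkness: float) -> str:
    """Flowering is generally induced when darkness reaches at least
    12 hours per day; shorter dark periods keep the plant vegetative."""
    return "flowering" if hours_of_darkness >= 12.0 else "vegetative"

def ph_in_range(ph: float, hydroponic: bool = False) -> bool:
    """Check pH against the optima given in the text:
    soil 6.3-6.8, hydroponic nutrient solution 5.2-5.8."""
    low, high = (5.2, 5.8) if hydroponic else (6.3, 6.8)
    return low <= ph <= high

print(growth_phase(12))        # flowering
print(ph_in_range(6.5))        # True for soil
print(ph_in_range(6.5, True))  # False for hydroponics
```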
A nominal if not legal distinction is often made between industrial hemp, with concentrations of psychoactive compounds far too low to be useful for that purpose, and marijuana. Uses Cannabis sativa seeds are chiefly used to make hempseed oil, which can be used for cooking, lamps, lacquers, or paints. They can also be used as caged-bird feed, as they provide a source of nutrients for most animals. The flowers and fruits (and to a lesser extent the leaves, stems, and seeds) contain psychoactive chemical compounds known as cannabinoids that are consumed for recreational, medicinal, and spiritual purposes. When so used, preparations of flowers and fruits (called marijuana) and leaves, and preparations derived from resinous extract (e.g., hashish), are consumed by smoking, vaporising, and oral ingestion. Historically, tinctures, teas, and ointments have also been common preparations. In the traditional medicine of India in particular, Cannabis sativa has been used as a hallucinogenic, hypnotic, sedative, analgesic, and anti-inflammatory agent. Terpenes have gained public awareness through the growth of medical and recreational cannabis markets. Organizations and companies operating in cannabis markets have promoted education and marketing of terpenes in their products as a way to differentiate the taste and effects of cannabis. The entourage effect, which describes the synergy of cannabinoids, terpenes, and other plant compounds, has also helped further awareness and demand for terpenes in cannabis products.
Biology and health sciences
Rosales
Plants
1015846
https://en.wikipedia.org/wiki/Mannitol
Mannitol
Mannitol is a type of sugar alcohol used as a sweetener and medication. It is used as a low-calorie sweetener, as it is poorly absorbed by the intestines. As a medication, it is used to decrease pressure in the eyes, as in glaucoma, and to lower increased intracranial pressure. Medically, it is given by injection or inhalation. Effects typically begin within 15 minutes and last up to 8 hours. Common side effects from medical use include electrolyte problems and dehydration. Other serious side effects may include worsening heart failure and kidney problems. It is unclear if use is safe in pregnancy. Mannitol is in the osmotic diuretic family of medications and works by pulling fluid from the brain and eyes. The discovery of mannitol is attributed to Joseph Louis Proust in 1806. It is on the World Health Organization's List of Essential Medicines. It was originally made from the flowering ash and called manna due to its supposed resemblance to the Biblical food. Mannitol is on the World Anti-Doping Agency's banned substances list due to concerns that it may mask prohibited drugs. Uses Medical uses In the United States, mannitol is indicated for the reduction of intracranial pressure and treatment of cerebral edema and elevated intraocular pressure. In the European Union, mannitol is indicated for the treatment of cystic fibrosis (CF) in adults aged 18 years and above as an add-on therapy to best standard of care. Mannitol is used intravenously to reduce acutely raised intracranial pressure until more definitive treatment can be applied, e.g., after head trauma. While mannitol injection is the mainstay for treating raised intracranial pressure after severe traumatic brain injury, it is no better than hypertonic saline as a first-line treatment; in treatment-resistant cases, hypertonic saline works better. Intra-arterial infusions of mannitol can transiently open the blood–brain barrier by disrupting tight junctions. It may also be used for certain cases of kidney failure with low urine output, for decreasing pressure in the eye, to increase the elimination of certain toxins, and to treat fluid build-up. Intraoperative mannitol prior to vessel clamp release during renal transplant has been shown to reduce post-transplant kidney injury, but has not been shown to reduce graft rejection. Mannitol acts as an osmotic laxative in oral doses larger than 20 g, and is sometimes sold as a laxative for children. Inhaled mannitol has been proposed as a bronchial irritant in an alternative method of diagnosing exercise-induced asthma, but a 2013 systematic review concluded that the evidence to support its use for this purpose is insufficient. Mannitol is commonly used in the circuit prime of a heart-lung machine during cardiopulmonary bypass. The presence of mannitol preserves renal function during times of low blood flow and pressure while the patient is on bypass. The solution prevents the swelling of endothelial cells in the kidney, which might otherwise have reduced blood flow to this area and resulted in cell damage. Mannitol can also be used to temporarily encapsulate a sharp object (such as a helix on a lead for an artificial pacemaker) while it passes through the venous system. Because the mannitol dissolves readily in blood, the sharp point becomes exposed at its destination. Mannitol is also the first drug of choice to treat acute glaucoma in veterinary medicine. It is administered as a 20% solution intravenously.
It dehydrates the vitreous humor and, therefore, lowers the intraocular pressure. However, it requires an intact blood-ocular barrier to work. Food Mannitol increases blood glucose to a lesser extent than sucrose (thus having a relatively low glycemic index), so it is used as a sweetener for people with diabetes and in chewing gums. Although mannitol has a higher heat of solution than most sugar alcohols, its comparatively low solubility reduces the cooling effect usually found in mint candies and gums. However, when mannitol is completely dissolved in a product, it induces a strong cooling effect. Also, it has a very low hygroscopicity – it does not pick up water from the air until the humidity level reaches 98%. This makes mannitol very useful as a coating for hard candies, dried fruits, and chewing gums, and it is often included as an ingredient in candies and chewing gum. The pleasant taste and mouthfeel of mannitol also make it a popular excipient for chewable tablets. Analytical chemistry Mannitol can be used to form a complex with boric acid. This increases the acid strength of the boric acid, permitting better precision in volumetric analysis of this acid. Other Mannitol is the primary ingredient of mannitol salt agar, a bacterial growth medium, and is used in others. Mannitol is used as a cutting agent in various drugs that are used intranasally (snorted), such as heroin and cocaine. A mixture of mannitol and fentanyl (or fentanyl analogs) in a ratio of 1:10 is labeled and sold as "China white", a popular heroin substitute. Mannitol is a sugar alcohol with "50-70 percent of the relative sweetness of sugar, which means more must be used to equal the sweetness of sugar. Mannitol lingers in the intestines for a long time and therefore often causes bloating and diarrhea." Contraindications Mannitol is contraindicated in people with anuria, severe hypovolemia, pre-existing severe pulmonary vascular congestion or pulmonary edema, irritable bowel syndrome (IBS), and active intracranial bleeding except during craniotomy. Adverse effects include hyponatremia and volume depletion leading to metabolic acidosis. Chemistry Mannitol is an isomer of sorbitol, another sugar alcohol; the two differ only in the orientation of the hydroxyl group on carbon 2. While similar, the two sugar alcohols have very different sources in nature, melting points, and uses. Production Mannitol is classified as a sugar alcohol; that is, it can be derived from a sugar (mannose) by reduction. Other sugar alcohols include xylitol and sorbitol. Industrial synthesis Mannitol is commonly produced via the hydrogenation of fructose, which is formed from either starch or sucrose (common table sugar). Although starch is a cheaper source than sucrose, the transformation of starch is much more complicated. Eventually, it yields a syrup containing about 42% fructose, 52% glucose, and 6% maltose. Sucrose is simply hydrolyzed into an invert sugar syrup, which contains about 50% fructose. In both cases, the syrups are chromatographically purified to contain 90–95% fructose. The fructose is then hydrogenated over a nickel catalyst into a mixture of the isomers sorbitol and mannitol. The yield is typically a 50:50 mixture, although slightly alkaline reaction conditions can slightly increase mannitol yields. Biosyntheses Mannitol is one of the most abundant energy and carbon storage molecules in nature, produced by a plethora of organisms, including bacteria, yeasts, fungi, algae, lichens, and many plants.
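Before turning to these biological routes, the industrial mass balance just described can be sketched numerically. This is a rough, illustrative calculation under stated assumptions only (fructose hydrogenates to a roughly 50:50 sorbitol/mannitol mixture, and any residual glucose hydrogenates to sorbitol alone; the small mass change from hydrogen addition is ignored); the function name is hypothetical.

```python
# Back-of-envelope mass balance for the hydrogenation route described above.
# Assumptions: fructose -> 50:50 sorbitol/mannitol; glucose -> sorbitol only.

def mannitol_fraction(fructose_purity: float, fructose_split: float = 0.5) -> float:
    """Approximate mass fraction of mannitol in the hydrogenated product
    for a syrup containing the given fraction of fructose (rest glucose)."""
    return fructose_purity * fructose_split

# A chromatographically purified syrup of 90-95% fructose (as in the text)
# would give roughly 45-47.5% mannitol in the crude hydrogenation product:
for purity in (0.90, 0.95):
    print(f"{purity:.0%} fructose -> {mannitol_fraction(purity):.1%} mannitol")
```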
Fermentation by microorganisms is an alternative to the traditional industrial synthesis. A fructose-to-mannitol metabolic pathway, known as the mannitol cycle in fungi, has been discovered in a type of red algae (Caloglossa leprieurii), and it is highly possible that other microorganisms employ similar pathways. A class of lactic acid bacteria, labeled heterofermentative because of their multiple fermentation pathways, convert either three fructose molecules or two fructose and one glucose molecule into two mannitol molecules and one molecule each of lactic acid, acetic acid, and carbon dioxide. Feedstock syrups containing medium to large concentrations of fructose (for example, cashew apple juice, containing 55% fructose and 45% glucose) can produce substantial yields of mannitol per liter of feedstock. Further research is being conducted into ways of engineering even more efficient mannitol pathways in lactic acid bacteria, as well as the use of other microorganisms such as yeast and E. coli in mannitol production. When food-grade strains of any of the aforementioned microorganisms are used, the mannitol and the organism itself are directly applicable to food products, avoiding the need for careful separation of microorganism and mannitol crystals. Although this is a promising method, steps are needed to scale it up to industrially needed quantities. Natural extraction Since mannitol is found in a wide variety of natural products, including almost all plants, it can be directly extracted from natural products rather than obtained by chemical or biological syntheses. In fact, in China, isolation from seaweed is the most common form of mannitol production. Mannitol concentrations in plant exudates can range from 20% in seaweeds to 90% in the plane tree. It is a constituent of saw palmetto (Serenoa). Traditionally, mannitol is extracted by Soxhlet extraction, using ethanol, water, and methanol to steam and then hydrolyze the crude material. The mannitol is then recrystallized from the extract, generally resulting in yields of about 18% of the original natural product. Another method of extraction uses supercritical and subcritical fluids. These fluids are at such a state that no difference exists between their liquid and gas phases, so they are more diffusive than normal fluids and are considered much more effective mass transfer agents than normal liquids. The super- or subcritical fluid is pumped through the natural product, and the mostly mannitol product is easily separated from the solvent and the minute amount of byproduct. Supercritical carbon dioxide extraction of olive leaves has been shown to require less solvent per measure of leaf than a traditional extraction. Heated, pressurized, subcritical water is even cheaper, and has been shown to give dramatically better results than traditional extraction: it requires only water as the solvent and gives a yield of 76.75% mannitol. Both super- and subcritical extractions are cheaper, faster, purer, and more environmentally friendly than the traditional extraction. However, the required high operating temperatures and pressures are causes for hesitancy in the industrial use of this technique. History In the early 1880s, Julije Domac elucidated the structure of hexene and mannitol obtained from Caspian manna. He determined the place of the double bond in hexene obtained from mannitol and proved that it is a derivative of a normal hexene. This also solved the structure of mannitol, which was unknown until then.
Controversy The three studies that originally found high-dose mannitol effective in treating severe head injury were the subject of an investigation. Published in 2007, after the lead author Dr Julio Cruz's death, the investigation questioned whether the studies had actually taken place. The co-authors of the paper were not able to confirm the existence of the study patients, and the Federal University of São Paulo, which Cruz gave as his affiliation, had never employed him. As a result of the doubt surrounding Cruz's work, an updated version of the Cochrane review excludes all studies by Julio Cruz, leaving only four studies. Due to differences in the selection of control groups, a conclusion about the clinical use of mannitol has not been reached. Compendial status: British Pharmacopoeia, Japanese Pharmacopoeia, United States Pharmacopeia.
Physical sciences
Sugar alcohols
Chemistry
1016422
https://en.wikipedia.org/wiki/Curved%20spacetime
Curved spacetime
In physics, curved spacetime is the mathematical model of Einstein's theory of general relativity, in which gravity arises naturally from the geometry of spacetime rather than being described as a fundamental force acting within Newton's static Euclidean reference frame. Objects move along geodesics—curved paths determined by the local geometry of spacetime—rather than being influenced directly by distant bodies. This framework led to two fundamental principles: coordinate independence, which asserts that the laws of physics are the same regardless of the coordinate system used, and the equivalence principle, which states that the effects of gravity are indistinguishable from those of acceleration in sufficiently small regions of space. These principles laid the groundwork for a deeper understanding of gravity through the geometry of spacetime, as formalized in Einstein's field equations. Introduction Newton's theories assumed that motion takes place against the backdrop of a rigid Euclidean reference frame that extends throughout all space and all time. Gravity is mediated by a mysterious force, acting instantaneously across a distance, whose actions are independent of the intervening space. In contrast, Einstein denied that there is any background Euclidean reference frame that extends throughout space. Nor is there any such thing as a force of gravitation, only the structure of spacetime itself. In spacetime terms, the path of a satellite orbiting the Earth is not dictated by the distant influences of the Earth, Moon and Sun. Instead, the satellite moves through space only in response to local conditions. Since spacetime is everywhere locally flat when considered on a sufficiently small scale, the satellite is always following a straight line in its local inertial frame. We say that the satellite always follows the path of a geodesic. No evidence of gravitation can be discovered by following the motion of a single particle. In any analysis of spacetime, evidence of gravitation requires that one observe the relative accelerations of two bodies or two separated particles. In Fig. 5-1, two separated particles, free-falling in the gravitational field of the Earth, exhibit tidal accelerations due to local inhomogeneities in the gravitational field such that each particle follows a different path through spacetime. The tidal accelerations that these particles exhibit with respect to each other do not require forces for their explanation. Rather, Einstein described them in terms of the geometry of spacetime, i.e. the curvature of spacetime. These tidal accelerations are strictly local. It is the cumulative total effect of many local manifestations of curvature that results in the appearance of a gravitational force acting at long range from Earth. Different observers viewing the scenarios presented in this figure interpret the scenarios differently depending on their knowledge of the situation. (i) A first observer, at the center of mass of particles 2 and 3 but unaware of the large mass 1, concludes that a force of repulsion exists between the particles in scenario A while a force of attraction exists between the particles in scenario B. (ii) A second observer, aware of the large mass 1, smiles at the first observer's naiveté. This second observer knows that in reality, the apparent forces between particles 2 and 3 really represent tidal effects resulting from their differential attraction by mass 1.
(iii) A third observer, trained in general relativity, knows that there are, in fact, no forces at all acting between the three objects. Rather, all three objects move along geodesics in spacetime. Two central propositions underlie general relativity. The first crucial concept is coordinate independence: the laws of physics cannot depend on what coordinate system one uses. This is a major extension of the principle of relativity from the version used in special relativity, which states that the laws of physics must be the same for every observer moving in non-accelerated (inertial) reference frames. In general relativity, to use Einstein's own (translated) words, "the laws of physics must be of such a nature that they apply to systems of reference in any kind of motion." This leads to an immediate issue: in accelerated frames, one feels forces that seemingly would enable one to assess one's state of acceleration in an absolute sense. Einstein resolved this problem through the principle of equivalence. The equivalence principle states that in any sufficiently small region of space, the effects of gravitation are the same as those from acceleration. In Fig. 5-2, person A is in a spaceship, far from any massive objects, that undergoes a uniform acceleration of g. Person B is in a box resting on Earth. Provided that the spaceship is sufficiently small so that tidal effects are non-measurable (given the sensitivity of current gravity measurement instrumentation, A and B presumably should be Lilliputians), there are no experiments that A and B can perform which will enable them to tell which setting they are in. An alternative expression of the equivalence principle is to note that in Newton's universal law of gravitation, F = GMm_g/r² = m_g g, and in Newton's second law, F = m_i a, there is no a priori reason why the gravitational mass m_g should be equal to the inertial mass m_i. The equivalence principle states that these two masses are identical. To go from the elementary description above of curved spacetime to a complete description of gravitation requires tensor calculus and differential geometry, topics both requiring considerable study. Without these mathematical tools, it is possible to write about general relativity, but it is not possible to demonstrate any non-trivial derivations. Curvature of time In the discussion of special relativity, forces played no more than a background role. Special relativity assumes the ability to define inertial frames that fill all of spacetime, all of whose clocks run at the same rate as the clock at the origin. Is this really possible? In a nonuniform gravitational field, experiment dictates that the answer is no. Gravitational fields make it impossible to construct a global inertial frame. In small enough regions of spacetime, local inertial frames are still possible. General relativity involves the systematic stitching together of these local frames into a more general picture of spacetime. Years before publication of the general theory in 1916, Einstein used the equivalence principle to predict the existence of gravitational redshift in the following thought experiment: (i) Assume that a tower of height h (Fig. 5-3) has been constructed. (ii) Drop a particle of rest mass m from the top of the tower. It falls freely with acceleration g, reaching the ground with velocity v = √(2gh), so that its total energy E, as measured by an observer on the ground, is E = mc² + ½mv² = mc² + mgh. (iii) A mass–energy converter transforms the total energy of the particle into a single high-energy photon, which it directs upward.
(iv) At the top of the tower, an energy–mass converter transforms the energy of the photon, E′, back into a particle of rest mass m. It must be that E′ = mc², since otherwise one would be able to construct a perpetual motion device. We therefore predict that E′ < E, so that hν′/hν = E′/E = mc²/(mc² + mgh) ≈ 1 − gh/c². A photon climbing in Earth's gravitational field loses energy and is redshifted. Early attempts to measure this redshift through astronomical observations were somewhat inconclusive, but definitive laboratory observations were performed by Pound & Rebka (1959) and later by Pound & Snider (1964). Light has an associated frequency, and this frequency may be used to drive the workings of a clock. The gravitational redshift leads to an important conclusion about time itself: gravity makes time run slower. Suppose we build two identical clocks whose rates are controlled by some stable atomic transition. Place one clock on top of the tower, while the other clock remains on the ground. An experimenter on top of the tower observes that signals from the ground clock are lower in frequency than those of the clock next to her on the tower. Light going up the tower is just a wave, and it is impossible for wave crests to disappear on the way up. Exactly as many oscillations of light arrive at the top of the tower as were emitted at the bottom. The experimenter concludes that the ground clock is running slow, and can confirm this by bringing the tower clock down to compare it side by side with the ground clock. For a 1 km tower, the discrepancy would amount to about 9.4 nanoseconds per day, easily measurable with modern instrumentation. Clocks in a gravitational field do not all run at the same rate. Experiments such as the Pound–Rebka experiment have firmly established curvature of the time component of spacetime. The Pound–Rebka experiment says nothing about curvature of the space component of spacetime. But the theoretical arguments predicting gravitational time dilation do not depend on the details of general relativity at all. Any theory of gravity will predict gravitational time dilation if it respects the principle of equivalence, and this includes Newtonian gravitation. A standard demonstration in general relativity is to show how, in the "Newtonian limit" (i.e. the particles are moving slowly, the gravitational field is weak, and the field is static), curvature of time alone is sufficient to derive Newton's law of gravity. Newtonian gravitation is a theory of curved time. General relativity is a theory of curved time and curved space. Given G as the gravitational constant, M as the mass of a Newtonian star, and orbiting bodies of insignificant mass at distance r from the star, the spacetime interval for Newtonian gravitation is one for which only the time coefficient is variable: Δs² = (1 − 2GM/(c²r))(cΔt)² − Δx² − Δy² − Δz². Curvature of space The coefficient (1 − 2GM/(c²r)) in front of (cΔt)² describes the curvature of time in Newtonian gravitation, and this curvature completely accounts for all Newtonian gravitational effects. As expected, this correction factor is directly proportional to G and M, and because of the r in the denominator, the correction factor increases as one approaches the gravitating body, meaning that time is curved. But general relativity is a theory of curved space and curved time, so if there are terms modifying the spatial components of the spacetime interval presented above, should not their effects be seen on, say, planetary and satellite orbits due to curvature correction factors applied to the spatial terms? The answer is that they are seen, but the effects are tiny.
The reason is that planetary velocities are extremely small compared to the speed of light, so that for planets and satellites of the solar system, the (cΔt)² term dwarfs the spatial terms. Despite the minuteness of the spatial terms, the first indications that something was wrong with Newtonian gravitation were discovered over a century-and-a-half ago. In 1859, Urbain Le Verrier, in an analysis of available timed observations of transits of Mercury over the Sun's disk from 1697 to 1848, reported that known physics could not explain the orbit of Mercury unless there possibly existed a planet or asteroid belt within the orbit of Mercury. The perihelion of Mercury's orbit exhibited an excess rate of precession over that which could be explained by the tugs of the other planets. The ability to detect and accurately measure the minute value of this anomalous precession (only 43 arc seconds per tropical century) is testimony to the sophistication of 19th century astrometry. As the astronomer who had earlier discovered the existence of Neptune "at the tip of his pen" by analyzing irregularities in the orbit of Uranus, Le Verrier's announcement triggered a two-decade-long period of "Vulcan-mania", as professional and amateur astronomers alike hunted for the hypothetical new planet. This search included several false sightings of Vulcan. It was ultimately established that no such planet or asteroid belt existed. In 1916, Einstein was to show that this anomalous precession of Mercury is explained by the spatial terms in the curvature of spacetime. Curvature in the temporal term, being simply an expression of Newtonian gravitation, has no part in explaining this anomalous precession. The success of his calculation was a powerful indication to Einstein's peers that the general theory of relativity could be correct. The most spectacular of Einstein's predictions was his calculation that the curvature terms in the spatial components of the spacetime interval could be measured in the bending of light around a massive body. Light has a slope of ±1 on a spacetime diagram: its movement in space is equal to its movement in time. For the weak field expression of the invariant interval, Einstein calculated an exactly equal but opposite sign curvature in its spatial components. In Newton's gravitation, the coefficient in front of (cΔt)² predicts bending of light around a star. In general relativity, the additional curvature in the spatial terms predicts a doubling of the total bending. The story of the 1919 Eddington eclipse expedition and Einstein's rise to fame is well told elsewhere. Sources of spacetime curvature In Newton's theory of gravitation, the only source of gravitational force is mass. In contrast, general relativity identifies several sources of spacetime curvature in addition to mass. In the Einstein field equations, the sources of gravity are presented on the right-hand side in the stress–energy tensor. Fig. 5-5 classifies the various sources of gravity in the stress–energy tensor: T^00 (red): the total mass–energy density, including any contributions to the potential energy from forces between the particles, as well as kinetic energy from random thermal motions. T^0i and T^i0 (orange): these are momentum density terms. Even if there is no bulk motion, energy may be transmitted by heat conduction, and the conducted energy will carry momentum. The T^ij components are the rates of flow of the i component of momentum per unit area in the j direction.
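As a numerical check on the 1 km tower figure quoted in the curvature-of-time discussion above: the fractional clock-rate difference over a height h in a uniform field g is gh/c², which the following short sketch converts to nanoseconds of drift per day.

```python
# Gravitational time dilation between two clocks separated by height h:
# fractional frequency shift = g*h / c^2, accumulated over one day.

G_ACCEL = 9.81            # m/s^2
C = 2.998e8               # speed of light, m/s
SECONDS_PER_DAY = 86400

def clock_drift_ns_per_day(height_m: float) -> float:
    """Daily drift, in nanoseconds, between clocks separated vertically by height_m."""
    fractional_shift = G_ACCEL * height_m / C**2
    return fractional_shift * SECONDS_PER_DAY * 1e9

print(f"{clock_drift_ns_per_day(1000.0):.1f} ns/day")  # ~9.4 ns/day for a 1 km tower
```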
Even if there is no bulk motion, random thermal motions of the particles will give rise to momentum flow, so the diagonal T^ii terms (green) represent isotropic pressure, and the off-diagonal T^ij terms (blue) represent shear stresses. One important conclusion to be derived from the equations is that, colloquially speaking, gravity itself creates gravity. Energy has mass. Even in Newtonian gravity, the gravitational field is associated with an energy, called the gravitational potential energy. In general relativity, the energy of the gravitational field feeds back into the creation of the gravitational field. This makes the equations nonlinear and hard to solve in anything other than weak-field cases. Numerical relativity is a branch of general relativity using numerical methods to solve and analyze problems, often employing supercomputers to study black holes, gravitational waves, neutron stars and other phenomena in the strong-field regime. Energy-momentum In special relativity, mass–energy is closely connected to momentum. Just as space and time are different aspects of a more comprehensive entity called spacetime, mass–energy and momentum are merely different aspects of a unified, four-dimensional quantity called four-momentum. In consequence, if mass–energy is a source of gravity, momentum must also be a source. The inclusion of momentum as a source of gravity leads to the prediction that moving or rotating masses can generate fields analogous to the magnetic fields generated by moving charges, a phenomenon known as gravitomagnetism. It is well known that the force of magnetism can be deduced by applying the rules of special relativity to moving charges. (An eloquent demonstration of this was presented by Feynman in volume II of his Lectures on Physics, available online.) Analogous logic can be used to demonstrate the origin of gravitomagnetism. In Fig. 5-7a, two parallel, infinitely long streams of massive particles have equal and opposite velocities −v and +v relative to a test particle at rest and centered between the two. Because of the symmetry of the setup, the net force on the central particle is zero. Assume v ≪ c so that velocities are simply additive. Fig. 5-7b shows exactly the same setup, but in the frame of the upper stream. The test particle has a velocity of +v, and the bottom stream has a velocity of +2v. Since the physical situation has not changed, only the frame in which things are observed, the test particle should not be attracted towards either stream. But it is not at all clear that the forces exerted on the test particle are equal. (1) Since the bottom stream is moving faster than the top, each particle in the bottom stream has a larger mass–energy than a particle in the top. (2) Because of Lorentz contraction, there are more particles per unit length in the bottom stream than in the top stream. (3) Another contribution to the active gravitational mass of the bottom stream comes from an additional pressure term which, at this point, we do not have sufficient background to discuss. All of these effects together would seemingly demand that the test particle be drawn towards the bottom stream. The test particle is not drawn to the bottom stream because of a velocity-dependent force that serves to repel a particle that is moving in the same direction as the bottom stream. This velocity-dependent gravitational effect is gravitomagnetism. Matter in motion through a gravitomagnetic field is hence subject to so-called frame-dragging effects analogous to electromagnetic induction.
It has been proposed that such gravitomagnetic forces underlie the generation of the relativistic jets (Fig. 5-8) ejected by some rotating supermassive black holes. Pressure and stress Quantities that are directly related to energy and momentum should be sources of gravity as well, namely internal pressure and stress. Taken together, mass–energy, momentum, pressure and stress all serve as sources of gravity: collectively, they are what tells spacetime how to curve. General relativity predicts that pressure acts as a gravitational source with exactly the same strength as mass–energy density. The inclusion of pressure as a source of gravity leads to dramatic differences between the predictions of general relativity and those of Newtonian gravitation. For example, the pressure term sets a maximum limit to the mass of a neutron star. The more massive a neutron star, the more pressure is required to support its weight against gravity. The increased pressure, however, adds to the gravity acting on the star's mass. Above a certain mass determined by the Tolman–Oppenheimer–Volkoff limit, the process becomes runaway and the neutron star collapses to a black hole. The stress terms become highly significant when performing calculations such as hydrodynamic simulations of core-collapse supernovae. These predictions for the roles of pressure, momentum and stress as sources of spacetime curvature are elegant and play an important role in theory. In regards to pressure, the early universe was radiation dominated, and it is highly unlikely that any of the relevant cosmological data (e.g. nucleosynthesis abundances, etc.) could be reproduced if pressure did not contribute to gravity, or if it did not have the same strength as a source of gravity as mass–energy. Likewise, the mathematical consistency of the Einstein field equations would be broken if the stress terms did not contribute as a source of gravity. Experimental test of the sources of spacetime curvature Definitions: Active, passive, and inertial mass Bondi distinguishes between different possible types of mass: (1) active gravitational mass, m_a, is the mass which acts as the source of a gravitational field; (2) passive gravitational mass, m_p, is the mass which reacts to a gravitational field; (3) inertial mass, m_i, is the mass which reacts to acceleration. The passive mass m_p is the same as the gravitational mass m_g in the discussion of the equivalence principle. In Newtonian theory, the third law of action and reaction dictates that m_a and m_p must be the same. On the other hand, whether m_p and m_i are equal is an empirical result. In general relativity, the equality of m_p and m_i is dictated by the equivalence principle, and there is no "action and reaction" principle dictating any necessary relationship between m_a and m_p. Pressure as a gravitational source The classic experiment to measure the strength of a gravitational source (i.e. its active mass) was first conducted in 1797 by Henry Cavendish (Fig. 5-9a). Two small but dense balls are suspended on a fine wire, making a torsion balance. Bringing two large test masses close to the balls introduces a detectable torque. Given the dimensions of the apparatus and the measurable spring constant of the torsion wire, the gravitational constant G can be determined. To study pressure effects by compressing the test masses is hopeless, because attainable laboratory pressures are insignificant in comparison with the mass–energy density of a metal ball. However, the repulsive electromagnetic pressures resulting from protons being tightly squeezed inside atomic nuclei are typically on the order of 10²⁸ atm ≈ 10³³ Pa ≈ 10³³ kg·s⁻²·m⁻¹.
This amounts to about 1% of the nuclear mass density of approximately 10¹⁸ kg/m³ (after factoring in c² ≈ 9×10¹⁶ m²·s⁻²). If pressure does not act as a gravitational source, then the ratio m_a/m_p should be lower for nuclei with higher atomic number Z, in which the electrostatic pressures are higher. Kreuzer (1968) did a Cavendish experiment using a Teflon mass suspended in a mixture of the liquids trichloroethylene and dibromoethane having the same buoyant density as the Teflon (Fig. 5-9b). Fluorine has atomic number Z = 9, while bromine has Z = 35. Kreuzer found that repositioning the Teflon mass caused no differential deflection of the torsion bar, hence establishing active mass and passive mass to be equivalent to a precision of 5×10⁻⁵. Although Kreuzer originally considered this experiment merely to be a test of the ratio of active mass to passive mass, Clifford Will (1976) reinterpreted the experiment as a fundamental test of the coupling of sources to gravitational fields. In 1986, Bartlett and Van Buren noted that lunar laser ranging had detected a 2 km offset between the moon's center of figure and its center of mass. This indicates an asymmetry in the distribution of Fe (abundant in the Moon's core) and Al (abundant in its crust and mantle). If pressure did not contribute equally to spacetime curvature as does mass–energy, the moon would not be in the orbit predicted by classical mechanics. They used their measurements to tighten the limits on any discrepancies between active and passive mass to about 10⁻¹². With decades of additional lunar laser ranging data, Singh et al. (2023) reported improvement on these limits by a factor of about 100. Gravitomagnetism The existence of gravitomagnetism was proven by Gravity Probe B, a satellite-based mission which launched on 20 April 2004. The spaceflight phase lasted until 2005. The mission aim was to measure spacetime curvature near Earth, with particular emphasis on gravitomagnetism. Initial results confirmed the relatively large geodetic effect (which is due to simple spacetime curvature, and is also known as de Sitter precession) to an accuracy of about 1%. The much smaller frame-dragging effect (which is due to gravitomagnetism, and is also known as Lense–Thirring precession) was difficult to measure because of unexpected charge effects causing variable drift in the gyroscopes. Nevertheless, by August 2008, the frame-dragging effect had been confirmed to within 15% of the expected result, while the geodetic effect was confirmed to better than 0.5%. Subsequent measurements of frame dragging by laser-ranging observations of the LARES, LAGEOS-1 and LAGEOS-2 satellites have improved on the measurement, with results (as of 2016) demonstrating the effect to within 5% of its theoretical value, although there has been some disagreement on the accuracy of this result. Another effort, the Gyroscopes in General Relativity (GINGER) experiment, seeks to use three 6 m ring lasers mounted at right angles to each other 1400 m below the Earth's surface to measure this effect. The first ten years of experience with a prototype ring laser gyroscope array, GINGERINO, established that the full-scale experiment should be able to measure gravitomagnetism due to the Earth's rotation to within a 0.1% level or even better.
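The "about 1%" figure quoted above is easy to verify: it is the ratio of the electrostatic pressure inside nuclei to the nuclear mass–energy density ρc². A one-line check, using the order-of-magnitude values from the text:

```python
# Ratio of nuclear electrostatic pressure to nuclear mass-energy density rho*c^2.
C = 2.998e8                 # speed of light, m/s
PRESSURE_PA = 1e33          # electrostatic pressure inside nuclei (from the text)
NUCLEAR_DENSITY = 1e18      # nuclear mass density, kg/m^3 (from the text)

ratio = PRESSURE_PA / (NUCLEAR_DENSITY * C**2)
print(f"pressure / (rho * c^2) = {ratio:.3f}")  # ~0.011, i.e. about 1%
```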
Physical sciences
Physics basics: General
Physics
1016523
https://en.wikipedia.org/wiki/Legionella%20pneumophila
Legionella pneumophila
Legionella pneumophila, the primary causative agent of Legionnaires' disease, is an aerobic, pleomorphic, flagellated, non-spore-forming, Gram-negative bacterium. L. pneumophila is a facultative intracellular parasite that preferentially infects soil amoebae and freshwater protozoa for replication. Because L. pneumophila thrives in water, it can grow in water distribution systems and spread to faucets, showers, and other fixtures. Aerosolized water droplets containing L. pneumophila originating from these fixtures may be inhaled by humans. Upon entry to the human respiratory tract, L. pneumophila is able to infect and reproduce within human alveolar macrophages. This causes the onset of Legionnaires' disease, also known as legionellosis. Infected humans may display symptoms such as fever, delirium, diarrhea, and decreased liver and kidney function. L. pneumophila infections can be diagnosed by a urine antigen test. The infections caused by the bacteria can be treated with fluoroquinolone and azithromycin antibiotics. Characterization L. pneumophila is a coccobacillus. It is a Gram-negative, aerobic bacterium that is non-fermentative. It is oxidase- and catalase-positive. L. pneumophila colony morphology is gray-white with a textured, cut-glass appearance; the organism also requires cysteine and iron to thrive. It grows on buffered charcoal yeast extract agar, as well as in moist environments such as tap water, in "opal-like" colonies. L. pneumophila is a facultative intracellular bacterium. As an intracellular parasite, L. pneumophila has a preferential parasitic relationship with protozoa, which serve as a reservoir for the bacterium. Protozoan predators of bacteria, such as amoebae and ciliates, are natural hosts for L. pneumophila, while humans are accidental hosts, as evidenced by there being only one reported case of L. pneumophila human-to-human transmission. Rather than spreading contagiously, it infects the alveolar macrophages in human lungs when inhaled as an aerosol. A special characteristic allows the microbe to thrive in extracellular environments, such as various freshwater environments. This is achieved through its two forms: transmissive and replicative. The transition between the two is activated by changes in the availability of metabolic/nutritional resources in its current environment. The transmissive form is assumed when the bacterium is infecting its host, while the replicative form follows to carry out proliferation. Cell membrane structure L. pneumophila is a Gram-negative bacterium. Its outer membrane, composed of lipoproteins, phospholipids, and other proteins, is a distinguishing feature of Legionella spp. Legionella spp. possess unique lipopolysaccharides (LPS) extending from the outer leaflet of the outer cell membrane that play a role in pathogenicity and adhesion to a host cell. Lipopolysaccharides are the leading surface antigen of all Legionella species, including L. pneumophila. The bases for the somatic antigen specificity of this organism are located on the side chains of its cell wall. The chemical composition of these LPS side chains, both with respect to the components and the arrangement of the different sugars, determines the nature of the somatic or O-antigenic determinants, which are important means of serologically classifying many Gram-negative bacteria. L. pneumophila exhibits distinct chemical characteristics in its LPS structure that distinguish it from other Gram-negative bacteria.
These unique attributes are key factors in its serological identity and biological function. Ecology L. pneumophila is able to live in a diverse range of environmental conditions, tolerating temperatures from 0 °C to 63 °C, a pH range of 5.0–8.5, and dissolved oxygen concentrations of 0.2–15.0 mg/liter. However, it multiplies within a narrower temperature range of 25 °C to 42 °C. L. pneumophila is notably resistant to the chlorine derivatives that are commonly used to control waterborne pathogens. This resistance allows infiltration and persistence in water systems even when standard disinfection processes are employed. Water supply networks are the main source of L. pneumophila contamination, with the microbe commonly found in places such as cooling towers and the water systems of hospitals, hotels, and cruise ships. Of note, this bacterium can form and reside in biofilms within water system pipes, allowing it to be aerosolized through fixtures such as faucets, showers, and sprinklers. Exposure to these aerosols can lead to infection in susceptible individuals. Biofilms Biofilms are specialized surface-attached communities that can consist of one or multiple microbes, including bacteria, algae, and protozoa. These protective matrices enable the microbe to live for extended periods of time in low-nutrient environments and in the presence of biocides. Multispecies biofilms on plumbing systems and in water distribution systems facilitate L. pneumophila growth due to the presence of freshwater protozoa. Environmental protozoa L. pneumophila is capable of infecting and multiplying within various species of free-living protists and amoebae. This bacterium can infect and survive within protozoan genera such as Acanthamoeba, Vermamoeba, and Naegleria, which often feed on bacteria in biofilms. It is through their growth in environmental protozoa and amoebae that L. pneumophila may persist in man-made water systems. Cyst-forming protozoans allow L. pneumophila to survive harsh environmental conditions such as chlorine, UV, ozonisation, and thermal treatments. Metabolism L. pneumophila uses glycolysis, the Entner–Doudoroff (ED) pathway, the pentose phosphate (PP) pathway, and the citric acid (TCA) cycle. While its genome contains genes for all these pathways, it lacks the genes encoding the key type I–III fructose 1,6-bisphosphatase enzymes of gluconeogenesis. L. pneumophila can still perform gluconeogenesis, but uses alternative enzymes such as fructose 6-phosphate aldolase. The ED and PP pathways are the main pathways for glucose metabolism in this organism. Glucose is not the main source of energy, but does generate poly-3-hydroxybutyrate (PHB) through the ED pathway, a storage molecule converted to acetyl-CoA for use by the TCA cycle (Krebs cycle) when the microbe is nutrient-deprived. Along with these pathways, serine was found to be a major nutrient due to its ability to be turned into pyruvate, an important intermediate in the metabolic pathways of L. pneumophila. Glycerol is also used as a substrate, as indicated by transcriptome analysis. While carbohydrates and complex polysaccharides are minimally metabolized, amino acids are the main carbon and energy source for L. pneumophila. Imported amino acids are used by L. pneumophila to generate energy through the TCA cycle (Krebs cycle) and as sources of carbon and nitrogen. Nutrient acquisition Legionella is auxotrophic for seven amino acids: cysteine, leucine, methionine, valine, threonine, isoleucine, and arginine.
Inside the vacuole, nutrient availability is low; the high demand for amino acids is not covered by the transport of free amino acids found in the host cytoplasm. To improve the availability of amino acids, the parasite promotes the host's mechanisms of proteasomal degradation. This generates an excess of free amino acids in the cytoplasm of L. pneumophila-infected cells that can be used for intravacuolar proliferation of the parasite. Amino acids are imported into the LCV through various amino acid transporters, such as the neutral amino acid transporter B(0). Even though L. pneumophila primarily uses amino acids as a carbon source, the bacterium does contain multiple amylases, such as LamB, which hydrolyzes polysaccharides into glucose monomers for metabolism. Protein degradation to recycle amino acids and the hydrolysis of polysaccharides are not the only methods by which L. pneumophila obtains carbon and energy sources from the host: type II–secreted degradative enzymes may provide an additional strategy to generate carbon and energy sources. L. pneumophila is the only known intracellular pathogen to have a type II secretion system. Genomics There are 14 known serogroups of L. pneumophila, but serogroup 1 is most commonly the causative agent of Legionnaires' disease. Three strains, L. pneumophila Philadelphia, L. pneumophila Paris, and L. pneumophila Lens, were sequenced in 2004, which paved the way for understanding the molecular biology of the bacteria. Subspecies, which are commonly defined by geographical location, share about 80% of their genome, with variation between strains that accounts for the difference in virulence between subspecies. The genome is relatively large, at about 3.5 megabase pairs (Mbp), which reflects a higher number of genes, corresponding with the ability of Legionella to adapt to different hosts and environments. There is a relatively high abundance of genes encoding eukaryotic-like proteins (ELPs). ELPs are beneficial for mimicking the bacterium's eukaryotic hosts for pathogenicity. Other genes of L. pneumophila encode Legionella-specific vacuoles, efflux transporters, ankyrin-repeat proteins, and many other virulence-related characteristics. The bffA gene is associated with biofilm formation; strains without this gene form biofilms both more quickly and more thickly, which aids in resistance to environmental stressors. In-depth comparative genome analysis using DNA arrays to study the gene content of 180 Legionella strains revealed high genomic plasticity and frequent horizontal gene transfer events. Horizontal gene transfer allows L. pneumophila to evolve at a rapid pace and is commonly associated with drug resistance. Pathogenesis L. pneumophila is able to invade and replicate within human alveolar macrophages. Internalization of the bacteria appears to occur through phagocytosis or coiling phagocytosis and is reliant on the Dot/Icm type IVB secretion system (T4BSS). Once internalized, the Dot/Icm system begins secreting bacterial effector proteins that recruit host factors to the Legionella-containing vacuole (LCV). This process prevents the LCV from fusing with the lysosomes that would otherwise degrade the bacteria. Vesicles of the host cell's rough endoplasmic reticulum are attracted to the LCV, and these vesicles supply the LCV with necessary lipids and proteins. LCV membrane integrity requires a steady supply of host lipids, such as cellular cholesterol and the cis-monounsaturated fatty acid palmitoleic acid. L.
pneumophila replication occurs within the LCV. Once nutrients are depleted, the bacteria gain flagella and cytotoxicity. To exit the host cell, L. pneumophila lyses the LCV and resides in the cytoplasm. In the cytoplasm, L. pneumophila inhibits organelle and plasma membrane function and structure, which ultimately leads to osmotic lysis of the host cell. Virulence factors L. pneumophila exhibits a unique lipopolysaccharide (LPS) structure that is highly hydrophobic due to being densely packed with branched fatty acids and elevated levels of O-acetyl and N-acetyl groups. This structure helps prevent interaction with a common LPS immune system co-receptor, CD14. There is also a correlation between high-molecular-weight LPS and the inhibition of phagosome–lysosome fusion. L. pneumophila produces pili of varying lengths. Two pilus proteins, PilE and the prepilin peptidase PilD, are responsible for the production of type IV pili and, subsequently, intracellular proliferation. L. pneumophila possesses a single polar flagellum that is used for cell motility, adhesion, host invasion, and biofilm formation. The same regulators that control flagellation also control lysosome avoidance and cytotoxicity. The macrophage infectivity potentiator (MIP) is another key component of host cell invasion and intracellular replication. MIP displays peptidyl–prolyl cis/trans isomerase (PPIase) activity, which is crucial for survival within the macrophage, along with transmigration across the lung epithelial barrier. Another key virulence factor of L. pneumophila is iron acquisition; the microbe utilizes two methods of iron uptake. Ferrous iron is collected through the use of a transport system involving an inner-membrane protein known as FeoB. Optimal intracellular infection of amoebae and macrophages is achieved via this transport system. The second form of uptake, involving ferric iron, is achieved through an iron chelator known as legiobactin, which is secreted by L. pneumophila when the microbes are grown in low-iron, chemically defined media. Legionella-containing vacuole For Legionella to survive within macrophages and protozoa, it must create a specialized compartment known as the Legionella-containing vacuole (LCV). Through the action of the Dot/Icm secretion system, the bacteria are able to prevent degradation by the normal endosomal trafficking pathway and instead replicate. Shortly after internalization, the bacteria specifically recruit endoplasmic reticulum-derived vesicles and mitochondria to the LCV while preventing the recruitment of endosomal markers such as Rab5a and Rab7a. Formation and maintenance of the vacuoles are crucial for pathogenesis; bacteria lacking the Dot/Icm secretion system are not pathogenic and cannot replicate within cells, while deletion of the Dot/Icm effector SdhA results in destabilization of the vacuolar membrane and no bacterial replication. Detection and treatment Antisera have been used both for slide agglutination studies and for direct detection of bacteria in tissues using immunofluorescence via fluorescent-labelled antibodies. Specific antibodies in patients can be determined by the indirect fluorescent antibody test. ELISA and microagglutination tests have also been successfully applied. A consistent method that has been used to detect the disease is the urine antigen test. Effective antibiotic treatment for Legionella pneumonia includes fluoroquinolones (levofloxacin or moxifloxacin) or, alternatively, azithromycin.
There has been no significant difference found between using a fluoroquinolone or azithromycin to treat Legionella pneumonia. Combination treatments with rifampicin are being tested as a response to antibiotic resistance during mono-treatments, though their effectiveness remains uncertain. These antibiotics work best because L. pneumophila is an intracellular pathogen: levofloxacin and azithromycin have great intracellular activity and are able to penetrate into Legionella-infected cells. The Infectious Diseases Society of America recommends 5–10 days of treatment with levofloxacin or 3–5 days of treatment with azithromycin; however, patients who are immunocompromised or have severe disease may require an extended course of treatment. Enzymes in the iron uptake pathway have also been suggested as important drug targets. Prevalence L. pneumophila is the primary causative organism of Legionnaires' disease, responsible for over 90% of cases within the United States. Roughly 2 out of 100,000 people are infected each year in the European Union (EU), with an infection rate of approximately 5 per 100,000 in Italy. The highest reported numbers of cases in the US, EU, and Italy have been among men over the age of 50. L. pneumophila often infects individuals through poor-quality water sources. Approximately 20% of reported Legionnaires' disease cases come from healthcare, senior living, or travel facilities that have been exposed to water contaminated with L. pneumophila. There may also be an increased risk of contracting L. pneumophila from private wells, as they are often unregulated and not as rigorously disinfected as municipal water systems. Several large outbreaks of Legionnaires' disease have come from public hot tubs due to the temperature range of the water being ideal for the bacteria's growth. Legionnaires' disease gained global recognition after an outbreak in 1976 at a hotel in Philadelphia, Pennsylvania. The causative agent of the outbreak was L. pneumophila, which had contaminated the hotel's air conditioning water supply, allowing the microbe to be dispersed within the hotel's environment. A prominent mode of transmission for the disease is the inhalation of contaminated water aerosols. The outbreak resulted in a total of 182 reported cases and 29 deaths. This incident spurred research on the disease-causing bacterium, as well as preventative approaches to contamination. More recently, two outbreaks of Legionnaires' disease among travelers on two cruise ships between November 2022 and June 2024 were reported by the United States Centers for Disease Control and Prevention (CDC). Hot tubs were identified as the likely source, and the cruise lines modified their operations by increasing the frequency of cleaning and hyperchlorination, among other changes.
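As a rough illustration of the environmental tolerances quoted in the ecology section above (survival from 0 °C to 63 °C at pH 5.0 to 8.5, multiplication only between 25 °C and 42 °C), the following short Python sketch classifies a water-sample reading against those ranges. The function name and sample values are hypothetical, chosen for illustration only; this is not a water-safety tool.

def legionella_growth_class(temp_c, ph):
    # Thresholds taken from the ranges given in the ecology section above.
    survives = 0.0 <= temp_c <= 63.0 and 5.0 <= ph <= 8.5
    multiplies = survives and 25.0 <= temp_c <= 42.0
    if multiplies:
        return "within growth range: the bacterium can multiply"
    if survives:
        return "within survival range: persistence without multiplication"
    return "outside tolerated range"

print(legionella_growth_class(35.0, 7.2))  # hot-tub-like water -> growth range
print(legionella_growth_class(70.0, 7.2))  # above 63 degC -> outside tolerated range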
Biology and health sciences
Gram-negative bacteria
Plants
1017709
https://en.wikipedia.org/wiki/Arrowhead
Arrowhead
An arrowhead or point is the usually sharpened and hardened tip of an arrow, which contributes a majority of the projectile mass and is responsible for impacting and penetrating a target, or sometimes for special purposes such as signaling. The earliest arrowheads were made of stone and of organic materials; as human civilizations progressed, other alloy materials were used. Arrowheads are important archaeological artifacts; they are a subclass of projectile points. Modern enthusiasts still "produce over one million brand-new spear and arrow points per year". A craftsman who manufactures arrowheads is called an arrowsmith. History In the Stone Age, people used sharpened bone, flintknapped stones, flakes, and chips and bits of rock as weapons and tools. Such items remained in use throughout human civilization, with new materials used as time passed. As archaeological artifacts, such objects are classed as projectile points, without specifying whether they were projected by a bow or by some other means such as throwing: the specific means of projection (the bow, the arrow shaft, the spear shaft, etc.) is found too seldom in direct association with any given point, and the word "arrow" would imply a certainty about these points which simply does not exist. Such artifacts can be found all over the world in various locations. Those that have survived are usually made of stone, primarily consisting of flint, obsidian, or chert. In many excavations, bone, wooden, and metal arrowheads have also been found. The oldest known arrowheads likely date to 74,000 years ago in Ethiopia. Stone projectile points dating back 64,000 years were excavated in Sibudu Cave, South Africa. Examination of these points found traces of blood and bone residues, as well as glue made from a plant-based resin that was used to fasten them onto wooden shafts. This indicated the "cognitively demanding behavior" required to manufacture glue. These hafted points might have been launched from bows. While "most attributes such as micro-residue distribution patterns and micro-wear will develop similarly on points used to tip spears, darts or arrows" and "explicit tests for distinctions between thrown spears and projected arrows have not yet been conducted", the researchers find "contextual support" for the use of these points on arrows: a broad range of animals was hunted, with an emphasis on taxa that prefer closed forested niches, including fast-moving, terrestrial and arboreal animals. This is an argument for the use of traps, perhaps including snares. If snares were used, the use of cords and knots, which would also have been adequate for the production of bows, is implied. The employment of snares also demonstrates a practical understanding of the latent energy stored in bent branches, the main principle of bow construction. Cords and knots are implied by use-wear facets on perforated shell beads around 72,000 years old from Blombos. Archeologists in Louisiana have discovered that early Native Americans used alligator gar scales as arrowheads. "Hunting with a bow and arrow requires intricate multi-staged planning, material collection and tool preparation and implies a range of innovative social and communication skills." Design Arrowheads are attached to arrow shafts to be shot from a bow; similar types of projectile points may be attached to a spear and "thrown" by means of an atlatl (spear thrower). The arrowhead or projectile point is the primary functional part of the arrow, and plays the largest role in determining its purpose.
Some arrows may simply use a sharpened tip of the solid shaft, but it is far more common for separate arrowheads to be made, usually from metal, horn, rock, or some other hard material. Arrowheads may be attached to the shaft with a cap or a socketed tang, or inserted into a split in the shaft; the process of attaching a point to its shaft is called hafting. Points attached with caps are simply slid snugly over the end of the shaft, or may be held on with hot glue. In medieval Europe, arrowheads were adhered with hide glue. Split-shaft construction involves splitting the arrow shaft lengthwise, inserting the arrowhead, and securing it using a ferrule, sinew, rope, or wire. Modern arrowheads used for hunting come in a variety of classes and styles. Many traditionalist archers choose heads made of modern high-carbon steel that closely resemble traditional stone heads (see Variants). Other classes of broadheads referred to as "mechanical" and "hybrid" are gaining popularity. Often, these heads rely on force created by passing through an animal to expand or open. Variants Arrowheads are usually separated by function: Bodkin points are short, rigid points with a small cross-section. They were made of unhardened iron and may have been used for better or longer flight, or for cheaper production. It has been suggested that the bodkin came into its own as a means of penetrating armour, but limited research has so far found no hardened bodkin points, so it appears likely that it was first designed either to extend range or as a cheaper and simpler alternative to the broadhead. In a modern test, a direct hit from a hard steel bodkin point penetrated a set of fifteenth-century chain armour made in Damascus. However, archery was minimally effective against plate armour, which became available to knights of fairly modest means by the late 14th century. Judo points have spring wires extending sideways from the tip. These catch on grass and debris to prevent the arrow from being lost in the vegetation. Used for practice and for small game. Broadheads were used for war and are still used for hunting. Medieval broadheads could be made from steel, sometimes with hardened edges. They usually have two to four sharp blades that cause massive bleeding in the victim. Their function is to deliver a wide cutting edge so as to kill as quickly as possible. They are expensive, damage most targets, and are usually not used for practice. Two main types of broadheads are used by hunters: the fixed-blade broadhead and the mechanical broadhead. While the fixed-blade broadhead keeps its blades rigid and unmovable on the broadhead at all times, the mechanical broadhead deploys its blades upon contact with the target, its blades swinging out to wound the target. "There are three requirements to making a broadhead. 1. It must be wide enough to cut through tissue to produce a quick, clean kill. 2. It must be narrow enough to penetrate well. 3. It must be of a shape that can be sharpened well." A few models known as hybrid broadheads have both fixed and replaceable blades, most often two relatively small fixed blades and two longer mechanically opening blades. The mechanical head flies better because it is more streamlined, but has less penetration as it uses some of the kinetic energy in the arrow to deploy its blades (see the worked example at the end of this article). Three-bladed, trilobate, or Scythian arrowheads appear in regions under the influence of the Scythians and ancient Persians. They were the type normally used by the Achaemenid army.
Target points are bullet-shaped with a sharp point, designed to penetrate target butts easily without causing excessive damage to them. Field points are similar to target points and have a distinct shoulder, so that missed outdoor shots do not become stuck in obstacles such as tree stumps. They are also used for shooting practice by hunters, offering similar flight characteristics and weights to broadheads without getting lodged in target materials and causing excessive damage upon removal. Safety arrows are designed to be used in various forms of reenactment combat, to reduce the risk when shot at people. These arrows may have heads that are very wide or padded. In combination with bows of restricted draw weight and draw length, these heads may reduce to acceptable levels the risks of shooting arrows at suitably armoured people. The parameters will vary depending on the specific rules being used and on the levels of risk felt acceptable to the participants. For instance, SCA combat rules require a padded head at least in diameter, with bows not exceeding and of draw for use against well-armoured individuals. The Australia/New Zealand based SCA Kingdom of Lochac uses bows and much smaller safety arrow heads, similar to modern rubber bird blunts, for their combat archery, as these more accurately simulate real arrows.
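Several of the designs above turn on how an arrow's kinetic energy is spent: mechanical broadheads use part of it to deploy their blades, and safety-arrow rules cap draw weight to limit it. A rough worked example of the underlying arithmetic follows; the mass and speed below are hypothetical round numbers, not figures from this article.

# Illustrative Python sketch: kinetic energy and momentum of an arrow.
mass_kg = 0.030      # 30 g arrow (head, shaft, and fletching combined), assumed
speed_ms = 60.0      # 60 m/s launch speed, assumed

kinetic_energy_j = 0.5 * mass_kg * speed_ms ** 2   # KE = 1/2 * m * v^2
momentum = mass_kg * speed_ms                      # p = m * v

print(f"kinetic energy: {kinetic_energy_j:.0f} J")   # -> 54 J
print(f"momentum: {momentum:.2f} kg*m/s")            # -> 1.80 kg*m/s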
Technology
Archery
null
1018112
https://en.wikipedia.org/wiki/Cimetidine
Cimetidine
Cimetidine, sold under the brand name Tagamet among others, is a histamine H2 receptor antagonist that inhibits stomach acid production. It is mainly used in the treatment of heartburn and peptic ulcers. With the development of proton pump inhibitors, such as omeprazole, approved for the same indications, cimetidine is now available as an over-the-counter formulation to prevent heartburn or acid indigestion, along with the other H2 receptor antagonists. Cimetidine was developed in 1971 and came into commercial use in 1977. Cimetidine was approved in the United Kingdom in 1976, and was approved in the United States by the Food and Drug Administration in 1979. Medical uses Cimetidine is indicated for the treatment of duodenal ulcers, gastric ulcers, gastroesophageal reflux disease, and pathological hypersecretory conditions. Cimetidine is also used to relieve or prevent heartburn. Side effects Reported side effects of cimetidine include diarrhea, rashes, dizziness, fatigue, constipation, and muscle pain, all of which are usually mild and transient. It has been reported that mental confusion may occur in the elderly. Because of its hormonal effects, cimetidine may rarely cause sexual dysfunction, including loss of libido and erectile dysfunction, as well as gynecomastia (0.1–0.2%) in males during long-term treatment. Rarely, interstitial nephritis, urticaria, and angioedema have been reported with cimetidine treatment. Cimetidine is also commonly associated with transient raised aminotransferase activity; hepatotoxicity is rare. Overdose Cimetidine appears to be very safe in overdose, producing no symptoms even with massive overdoses (e.g., 20 g). Interactions Due to its non-selective inhibition of cytochrome P450 enzymes, cimetidine has numerous drug interactions. Examples of specific interactions include, but are not limited to, the following: Cimetidine affects the metabolism of methadone, sometimes resulting in higher blood levels and a higher incidence of side effects, and may interact with the antimalarial medication hydroxychloroquine. Cimetidine can also interact with a number of psychoactive medications, including tricyclic antidepressants and selective serotonin reuptake inhibitors, causing increased blood levels of these drugs and the potential of subsequent toxicity. Following administration of cimetidine, the elimination half-life and area under the curve of zolmitriptan and its active metabolites were roughly doubled. Cimetidine is a potent inhibitor of tubular creatinine secretion. Creatinine is a metabolic byproduct of creatine breakdown. Accumulation of creatinine is associated with uremia, but the symptoms of creatinine accumulation are unknown, as they are hard to separate from other nitrogenous waste buildups. Like several other medications (e.g., erythromycin), cimetidine interferes with the body's metabolism of sildenafil, causing its strength and duration to increase and making its side effects more likely and prominent. Clinically significant drug interactions with the CYP1A2 substrate theophylline, the CYP2C9 substrate tolbutamide, the CYP2D6 substrate desipramine, and the CYP3A4 substrate triazolam have all been demonstrated with cimetidine, and interactions with other substrates of these enzymes are likely as well. Cimetidine has been shown clinically to reduce the clearance of mirtazapine, imipramine, timolol, nebivolol, sparteine, loratadine, nortriptyline, gabapentin, and desipramine in humans.
Cimetidine inhibits the renal excretion of metformin and procainamide, resulting in increased circulating levels of these drugs. Interactions of potential clinical importance with cimetidine include warfarin, theophylline, phenytoin, carbamazepine, pethidine and other opioid analgesics, tricyclic antidepressants, lidocaine, terfenadine, amiodarone, flecainide, quinidine, fluorouracil, and benzodiazepines. Cimetidine may decrease the effects of CYP2D6 substrates that are prodrugs, such as codeine, tramadol, and tamoxifen. Cimetidine reduces the absorption of ketoconazole and itraconazole (which require a low pH). Cimetidine has a theoretical but unproven benefit in paracetamol toxicity. This is because N-acetyl-p-benzoquinone imine (NAPQI), the metabolite of paracetamol (acetaminophen) responsible for its hepatotoxicity, is formed from it by the cytochrome P450 system (specifically, CYP1A2, CYP2E1, and CYP3A4). Cimetidine is used in cancer metastasis research as a blocker of E-selectin. Pharmacology Pharmacodynamics Histamine H2 receptor antagonism The mechanism of action of cimetidine in reducing stomach acid is histamine H2 receptor antagonism. It has been found to bind to the H2 receptor with a Kd of 42 nM. Cytochrome P450 inhibition Cimetidine is a potent inhibitor of certain cytochrome P450 (CYP) enzymes, including CYP1A2, CYP2C9, CYP2C19, CYP2D6, CYP2E1, and CYP3A4. The drug appears to primarily inhibit CYP1A2, CYP2D6, and CYP3A4, of which it is described as a moderate inhibitor. This is notable since these three CYP isoenzymes are involved in a large share of CYP-mediated drug biotransformations; in addition, CYP2C9, CYP2C19, and CYP2E1 are likewise involved in the oxidative metabolism of many commonly used drugs. As a result, cimetidine has the potential for a large number of pharmacokinetic interactions. Cimetidine is reported to be a competitive and reversible inhibitor of several CYP enzymes, although mechanism-based (suicide) irreversible inhibition has also been identified for cimetidine's inhibition of CYP2D6. It reversibly inhibits CYP enzymes by binding directly with the complexed heme-iron of the active site via one of its imidazole ring nitrogen atoms, thereby blocking the oxidation of other drugs. Antiandrogenic and estrogenic effects Cimetidine has been found to possess weak antiandrogenic activity at high doses. It directly and competitively antagonizes the androgen receptor (AR), the biological target of androgens like testosterone and dihydrotestosterone (DHT). However, the affinity of cimetidine for the AR is very weak; in one study, it showed only 0.00084% of the affinity of the anabolic steroid metribolone (100%) for the human AR (Ki = 140 μM and 1.18 nM, respectively; this figure is checked arithmetically in a short example at the end of this article). In any case, at sufficiently high doses, cimetidine has demonstrated weak but significant antiandrogenic effects in animals, including antiandrogenic effects in the rat ventral prostate and mouse kidney, reductions in the weights of the male accessory glands like the prostate gland and seminal vesicles in rats, and elevated gonadotropin levels in male rats (due to reduced negative feedback on the hypothalamic–pituitary–gonadal (HPG) axis by androgens). In addition to AR antagonism, cimetidine has been found to inhibit the 2-hydroxylation of estradiol (via inhibition of CYP450 enzymes, which are involved in the metabolic inactivation of estradiol), resulting in increased estrogen levels.
The medication has also been reported to reduce testosterone biosynthesis and increase prolactin levels in individual case reports, effects which might be secondary to increased estrogen levels. At typical therapeutic levels, cimetidine has either no effect on or causes small increases in circulating testosterone concentrations in men. Any increases in testosterone levels with cimetidine have been attributed to the loss of negative feedback on the HPG axis that results from AR antagonism. At typical clinical dosages, such as those used to treat peptic ulcer disease, the incidence of gynecomastia (breast development) with cimetidine is very low, at less than 1%. In one survey of over 9,000 patients taking cimetidine, gynecomastia was the most frequent endocrine-related complaint but was reported in only 0.2% of patients. At high doses, however, such as those used to treat Zollinger–Ellison syndrome, there may be a higher incidence of gynecomastia with cimetidine. In one small study, a 20% incidence of gynecomastia was observed in 25 male patients with duodenal ulcers who were treated with 1,600 mg/day cimetidine. The symptoms appeared after 4 months of treatment and regressed within a month following discontinuation of cimetidine. In another small study, cimetidine was reported to have induced breast changes and erectile dysfunction in 60% of 22 men treated with it. These adverse effects completely resolved in all cases when the men were switched from cimetidine to ranitidine. A study of the United Kingdom General Practice Research Database, which contains over 80,000 men, found that the relative risk of gynecomastia in cimetidine users was 7.2 compared with non-users. People taking a dosage of cimetidine of greater than or equal to 1,000 mg showed more than 40 times the risk of gynecomastia of non-users. The risk was highest during the period of 7 to 12 months after starting cimetidine. The gynecomastia associated with cimetidine is thought to be due to blockade of ARs in the breasts, which results in estrogen action unopposed by androgens in this tissue, although increased levels of estrogens due to inhibition of estrogen metabolism is another possible mechanism. Cimetidine has also been associated with oligospermia (decreased sperm count) and sexual dysfunction (e.g., decreased libido, erectile dysfunction) in men in some research, effects which are thought to be similarly hormonally mediated. In accordance with the very weak nature of its AR antagonistic activity, cimetidine has shown minimal effectiveness in the treatment of androgen-dependent conditions such as acne, hirsutism (excessive hair growth), and hyperandrogenism (high androgen levels) in women. As such, its use for such indications is not recommended. Pharmacokinetics Cimetidine is rapidly absorbed regardless of route of administration. The oral bioavailability of cimetidine is 60 to 70%. The onset of action of cimetidine when taken orally is 30 minutes, and peak levels occur within 1 to 3 hours. Cimetidine is widely distributed throughout all tissues. It is able to cross the blood–brain barrier and can produce effects in the central nervous system (e.g., headaches, dizziness, somnolence). The volume of distribution of cimetidine is 0.8 L/kg in adults and 1.2 to 2.1 L/kg in children (these values are used in the worked arithmetic at the end of this article). Its plasma protein binding is 13 to 25% and is said to be without pharmacological significance. Cimetidine undergoes relatively little metabolism, with 56 to 85% excreted unchanged.
It is metabolized in the liver into cimetidine sulfoxide, hydroxycimetidine, and guanyl urea cimetidine. The major metabolite of cimetidine is the sulfoxide, which accounts for about 30% of excreted material. Cimetidine is rapidly eliminated, with an elimination half-life of 123 minutes, or about 2 hours. It has been said to have a duration of action of 4 to 8 hours. The medication is mainly eliminated in urine. History Cimetidine, approved by the FDA for inhibition of gastric acid secretion, has been advocated for a number of dermatological diseases. Cimetidine was the prototypical histamine H2 receptor antagonist from which the later members of the class were developed. Cimetidine was the culmination of a project at Smith, Kline and French (SK&F) Laboratories in Welwyn Garden City (now part of GlaxoSmithKline) by James W. Black, C. Robin Ganellin, and others to develop a histamine receptor antagonist to suppress stomach acid secretion. This was one of the first drugs discovered using a rational drug design approach. Sir James W. Black shared the 1988 Nobel Prize in Physiology or Medicine for the discovery of propranolol, and is also credited with the discovery of cimetidine. At the time (1964), it was known that histamine stimulated the secretion of stomach acid, but also that traditional antihistamines had no effect on acid production. In the process, the SK&F scientists also proved the existence of histamine H2 receptors. The SK&F team used a rational drug-design approach, starting from the structure of histamine, the only design lead, since nothing was known of the then-hypothetical H2 receptor. Hundreds of modified compounds were synthesized in an effort to develop a model of the receptor. The first breakthrough was Nα-guanylhistamine, a partial H2 receptor antagonist. From this lead, the receptor model was further refined and eventually led to the development of burimamide, the first H2 receptor antagonist. Burimamide, a specific competitive antagonist at the H2 receptor and 100 times more potent than Nα-guanylhistamine, proved the existence of the H2 receptor. Burimamide was still insufficiently potent for oral administration, and further modification of the structure, based on modifying the pKa of the compound, led to the development of metiamide. Metiamide was an effective agent; it was associated, however, with unacceptable nephrotoxicity and agranulocytosis. The toxicity was proposed to arise from the thiourea group, and similar guanidine analogues were investigated until the ultimate discovery of cimetidine. The compound was synthesized in 1972 and evaluated for toxicology by 1973. It passed all trials. Cimetidine was first marketed in the United Kingdom in 1976, and in the U.S. in August 1977; therefore, it took 12 years from initiation of the H2 receptor antagonist program to commercialization. By 1979, Tagamet was being sold in more than 100 countries and became the top-selling prescription product in the U.S., Canada, and several other countries. In November 1997, the American Chemical Society and the Royal Society of Chemistry in the U.K. jointly recognized the work as a milestone in drug discovery by designating it an International Historic Chemical Landmark during a ceremony at SmithKline Beecham's New Frontiers Science Park research facilities in Harlow, England. The commercial name "Tagamet" was decided upon by fusing the two words "antagonist" and "cimetidine". Subsequent to the introduction onto the U.S.
drug market, two other H2 receptor antagonists were approved: ranitidine (Zantac, Glaxo Labs) and famotidine (Pepcid, Yamanouchi, Ltd.). Cimetidine became the first drug ever to reach more than $1 billion a year in sales, thus making it the first blockbuster drug. Tagamet has been largely replaced by proton pump inhibitors for treating peptic ulcers, but is available as an over-the-counter medicine for heartburn in many countries. Research Some evidence suggests cimetidine could be effective in the treatment of common warts, but more rigorous double-blind clinical trials found it to be no more effective than a placebo. Tentative evidence supports a beneficial role as add-on therapy in colorectal cancer. Cimetidine inhibits ALA synthase activity and hence may have some therapeutic value in preventing and treating acute porphyria attacks. There is some evidence supporting the use of cimetidine in the treatment of PFAPA (periodic fever, aphthous stomatitis, pharyngitis, and adenitis) syndrome. Veterinary use In dogs, cimetidine is used as an antiemetic when treating chronic gastritis.
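Two of the figures quoted above lend themselves to quick arithmetic checks: the relative androgen receptor affinity (Ki = 140 μM for cimetidine versus 1.18 nM for metribolone) and a crude peak-concentration estimate from the pharmacokinetic values (oral bioavailability of 60 to 70%, volume of distribution of 0.8 L/kg in adults). The Python sketch below works through both; the 300 mg dose and 70 kg body weight are assumptions chosen only for illustration, and this is not dosing guidance.

# Check 1: relative AR affinity as a percentage of metribolone's affinity.
ki_metribolone_nm = 1.18        # nM (reference compound, defined as 100%)
ki_cimetidine_nm = 140_000.0    # 140 uM converted to nM
relative_affinity_pct = 100.0 * ki_metribolone_nm / ki_cimetidine_nm
print(f"relative AR affinity: {relative_affinity_pct:.5f} %")   # ~0.00084 %

# Check 2: crude one-compartment peak estimate, C = F * dose / Vd,
# which ignores the overlap of absorption and elimination.
dose_mg = 300.0          # hypothetical single oral dose
f_oral = 0.65            # midpoint of the 60-70% bioavailability range above
vd_l = 0.8 * 70.0        # 0.8 L/kg (from the text) * assumed 70 kg = 56 L
print(f"crude peak estimate: {dose_mg * f_oral / vd_l:.1f} mg/L")  # ~3.5 mg/L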
Biology and health sciences
Antihistamines
Health
1018257
https://en.wikipedia.org/wiki/3-manifold
3-manifold
In mathematics, a 3-manifold is a topological space that locally looks like a three-dimensional Euclidean space. A 3-manifold can be thought of as a possible shape of the universe. Just as a sphere looks like a plane (a tangent plane) to a small and close enough observer, all 3-manifolds look like our universe does to a small enough observer. This is made more precise in the definition below. Principles Definition A topological space X is a 3-manifold if it is a second-countable Hausdorff space and if every point in X has a neighbourhood that is homeomorphic to Euclidean 3-space. Mathematical theory of 3-manifolds The topological, piecewise-linear, and smooth categories are all equivalent in three dimensions, so little distinction is made in whether we are dealing with, say, topological 3-manifolds or smooth 3-manifolds. Phenomena in three dimensions can be strikingly different from phenomena in other dimensions, and so there is a prevalence of very specialized techniques that do not generalize to dimensions greater than three. This special role has led to the discovery of close connections to a diversity of other fields, such as knot theory, geometric group theory, hyperbolic geometry, number theory, Teichmüller theory, topological quantum field theory, gauge theory, Floer homology, and partial differential equations. 3-manifold theory is considered a part of low-dimensional topology or geometric topology. A key idea in the theory is to study a 3-manifold by considering special surfaces embedded in it. One can choose the surface to be nicely placed in the 3-manifold, which leads to the idea of an incompressible surface and the theory of Haken manifolds, or one can choose the complementary pieces to be as nice as possible, leading to structures such as Heegaard splittings, which are useful even in the non-Haken case. Thurston's contributions to the theory allow one to also consider, in many cases, the additional structure given by a particular Thurston model geometry (of which there are eight). The most prevalent geometry is hyperbolic geometry. Using a geometry in addition to special surfaces is often fruitful. The fundamental groups of 3-manifolds strongly reflect the geometric and topological information belonging to a 3-manifold. Thus, there is an interplay between group theory and topological methods. Invariants describing 3-manifolds 3-manifolds are an interesting special case of low-dimensional topology because their topological invariants give a lot of information about their structure in general. If we let M be a 3-manifold and π = π1(M) be its fundamental group, then a lot of information can be derived from them. For example, using Poincaré duality and the Hurewicz theorem, for M closed and orientable we have the following homology groups: H0(M) ≅ Z ≅ H3(M), H1(M) ≅ π/[π, π], and H2(M) ≅ H^1(M), where the last two groups are isomorphic to the group homology and cohomology of π, respectively; that is, H1(M) ≅ H1(π; Z) and H2(M) ≅ H^1(π; Z). From this information a basic homotopy theoretic classification of 3-manifolds can be found. Note that from the Postnikov tower there is a canonical map M → K(π, 1). If we take the pushforward of the fundamental class [M] into H3(K(π, 1)) we get an element ζM. It turns out the group π together with the group homology class ζM ∈ H3(π; Z) gives a complete algebraic description of the homotopy type of M. Connected sums One important topological operation is the connected sum M = M1 # M2 of two 3-manifolds M1 and M2. In fact, from general theorems in topology, we find that for a 3-manifold with a connected sum decomposition the invariants above for M can be computed from those of the Mi.
In particular, π1(M) ≅ π1(M1) * π1(M2) (the free product) and H1(M) ≅ H1(M1) ⊕ H1(M2); a worked example appears at the end of this article. Moreover, a 3-manifold which cannot be described as a connected sum of two 3-manifolds is called prime. Second homotopy groups For the case of a 3-manifold given by a connected sum of prime 3-manifolds, it turns out there is a nice description of the second homotopy group π2(M) as a Z[π1(M)]-module. For the special case of each π1(Mi) being infinite but not cyclic, if we take based embeddings of 2-spheres σi : S2 → M, where the σi represent the spheres along which the connected sum is formed, then the second homotopy group has the presentation π2(M) = Z[π1(M)]{σ1, ..., σn} / (σ1 + ... + σn), giving a straightforward computation of this group. Important examples of 3-manifolds Euclidean 3-space Euclidean 3-space is the most important example of a 3-manifold, as all others are defined in relation to it. This is just the standard 3-dimensional vector space over the real numbers. 3-sphere A 3-sphere is a higher-dimensional analogue of a sphere. It consists of the set of points equidistant from a fixed central point in 4-dimensional Euclidean space. Just as an ordinary sphere (or 2-sphere) is a two-dimensional surface that forms the boundary of a ball in three dimensions, a 3-sphere is an object with three dimensions that forms the boundary of a ball in four dimensions. Many examples of 3-manifolds can be constructed by taking quotients of the 3-sphere by a finite group π acting freely on S3 via a map π → SO(4), so M = S3/π. Real projective 3-space Real projective 3-space, or RP3, is the topological space of lines passing through the origin 0 in R4. It is a compact, smooth manifold of dimension 3, and is a special case Gr(1, R4) of a Grassmannian space. RP3 is (diffeomorphic to) SO(3), hence admits a group structure; the covering map S3 → RP3 is a map of groups Spin(3) → SO(3), where Spin(3) is a Lie group that is the universal cover of SO(3). 3-torus The 3-dimensional torus is the product of 3 circles. That is: T3 = S1 × S1 × S1. The 3-torus, T3, can be described as a quotient of R3 under integral shifts in any coordinate. That is, the 3-torus is R3 modulo the action of the integer lattice Z3 (with the action being taken as vector addition). Equivalently, the 3-torus is obtained from the 3-dimensional cube by gluing the opposite faces together. A 3-torus in this sense is an example of a 3-dimensional compact manifold. It is also an example of a compact abelian Lie group. This follows from the fact that the unit circle is a compact abelian Lie group (when identified with the unit complex numbers with multiplication). Group multiplication on the torus is then defined by coordinate-wise multiplication. Hyperbolic 3-space Hyperbolic space is a homogeneous space that can be characterized by a constant negative curvature. It is the model of hyperbolic geometry. It is distinguished from Euclidean spaces with zero curvature that define the Euclidean geometry, and models of elliptic geometry (like the 3-sphere) that have a constant positive curvature. When embedded to a Euclidean space (of a higher dimension), every point of a hyperbolic space is a saddle point. Another distinctive property is the amount of space covered by the 3-ball in hyperbolic 3-space: it increases exponentially with respect to the radius of the ball, rather than polynomially. Poincaré dodecahedral space The Poincaré homology sphere (also known as Poincaré dodecahedral space) is a particular example of a homology sphere. Being a spherical 3-manifold, it is the only homology 3-sphere (besides the 3-sphere itself) with a finite fundamental group. Its fundamental group is known as the binary icosahedral group and has order 120.
This shows the Poincaré conjecture cannot be stated in homology terms alone. In 2003, lack of structure on the largest scales (above 60 degrees) in the cosmic microwave background as observed for one year by the WMAP spacecraft led to the suggestion, by Jean-Pierre Luminet of the Observatoire de Paris and colleagues, that the shape of the universe is a Poincaré sphere. In 2008, astronomers found the best orientation on the sky for the model and confirmed some of the predictions of the model, using three years of observations by the WMAP spacecraft. However, there is no strong support for the correctness of the model, as yet. Seifert–Weber space In mathematics, Seifert–Weber space (introduced by Herbert Seifert and Constantin Weber) is a closed hyperbolic 3-manifold. It is also known as Seifert–Weber dodecahedral space and hyperbolic dodecahedral space. It is one of the first discovered examples of closed hyperbolic 3-manifolds. It is constructed by gluing each face of a dodecahedron to its opposite in a way that produces a closed 3-manifold. There are three ways to do this gluing consistently. Opposite faces are misaligned by 1/10 of a turn, so to match them they must be rotated by 1/10, 3/10 or 5/10 turn; a rotation of 3/10 gives the Seifert–Weber space. Rotation of 1/10 gives the Poincaré homology sphere, and rotation by 5/10 gives 3-dimensional real projective space. With the 3/10-turn gluing pattern, the edges of the original dodecahedron are glued to each other in groups of five. Thus, in the Seifert–Weber space, each edge is surrounded by five pentagonal faces, and the dihedral angle between these pentagons is 72°. This does not match the 117° dihedral angle of a regular dodecahedron in Euclidean space, but in hyperbolic space there exist regular dodecahedra with any dihedral angle between 60° and 117°, and the hyperbolic dodecahedron with dihedral angle 72° may be used to give the Seifert–Weber space a geometric structure as a hyperbolic manifold. It is a quotient space of the order-5 dodecahedral honeycomb, a regular tessellation of hyperbolic 3-space by dodecahedra with this dihedral angle. Gieseking manifold In mathematics, the Gieseking manifold is a cusped hyperbolic 3-manifold of finite volume. It is non-orientable and has the smallest volume among non-compact hyperbolic manifolds, having volume approximately 1.01494161. It was discovered by Hugo Gieseking in 1912. The Gieseking manifold can be constructed by removing the vertices from a tetrahedron, then gluing the faces together in pairs using affine-linear maps. Label the vertices 0, 1, 2, 3. Glue the face with vertices 0, 1, 2 to the face with vertices 3, 1, 0 in that order. Glue the face 0, 2, 3 to the face 3, 2, 1 in that order. In the hyperbolic structure of the Gieseking manifold, this ideal tetrahedron is the canonical polyhedral decomposition of David B. A. Epstein and Robert C. Penner. Moreover, the angle made by the faces is π/3. The triangulation has one tetrahedron, two faces, one edge and no vertices, so all the edges of the original tetrahedron are glued together. Some important classes of 3-manifolds Important classes include graph manifolds, Haken manifolds, homology spheres, hyperbolic 3-manifolds, I-bundles, knot and link complements, lens spaces, Seifert fiber spaces and circle bundles, spherical 3-manifolds, surface bundles over the circle, and torus bundles. Hyperbolic link complements A hyperbolic link is a link in the 3-sphere whose complement has a complete Riemannian metric of constant negative curvature, i.e. has a hyperbolic geometry.
A hyperbolic knot is a hyperbolic link with one component. The following examples are particularly well-known and studied: the figure-eight knot, the Whitehead link, and the Borromean rings. The classes are not necessarily mutually exclusive. Some important structures on 3-manifolds Contact geometry Contact geometry is the study of a geometric structure on smooth manifolds given by a hyperplane distribution in the tangent bundle and specified by a one-form, both of which satisfy a 'maximum non-degeneracy' condition called 'complete non-integrability' (written out in symbols below). From the Frobenius theorem, one recognizes the condition as the opposite of the condition that the distribution be determined by a codimension-one foliation on the manifold ('complete integrability'). Contact geometry is in many ways an odd-dimensional counterpart of symplectic geometry, which belongs to the even-dimensional world. Both contact and symplectic geometry are motivated by the mathematical formalism of classical mechanics, where one can consider either the even-dimensional phase space of a mechanical system or the odd-dimensional extended phase space that includes the time variable. Haken manifold A Haken manifold is a compact, P²-irreducible 3-manifold that is sufficiently large, meaning that it contains a properly embedded two-sided incompressible surface. Sometimes one considers only orientable Haken manifolds, in which case a Haken manifold is a compact, orientable, irreducible 3-manifold that contains an orientable, incompressible surface. A 3-manifold finitely covered by a Haken manifold is said to be virtually Haken. The Virtually Haken conjecture asserts that every compact, irreducible 3-manifold with infinite fundamental group is virtually Haken. Haken manifolds were introduced by Wolfgang Haken. Haken proved that Haken manifolds have a hierarchy, where they can be split up into 3-balls along incompressible surfaces. Haken also showed that there was a finite procedure to find an incompressible surface if the 3-manifold had one. Jaco and Oertel gave an algorithm to determine if a 3-manifold was Haken. Essential lamination An essential lamination is a lamination in which every leaf is incompressible and end-incompressible, the complementary regions of the lamination are irreducible, and there are no spherical leaves. Essential laminations generalize the incompressible surfaces found in Haken manifolds. Heegaard splitting A Heegaard splitting is a decomposition of a compact oriented 3-manifold that results from dividing it into two handlebodies. Every closed, orientable three-manifold may be so obtained; this follows from deep results on the triangulability of three-manifolds due to Moise. This contrasts strongly with higher-dimensional manifolds, which need not admit smooth or piecewise linear structures. Assuming smoothness, the existence of a Heegaard splitting also follows from the work of Smale about handle decompositions from Morse theory. Taut foliation A taut foliation is a codimension 1 foliation of a 3-manifold with the property that there is a single transverse circle intersecting every leaf. By a transverse circle is meant a closed loop that is always transverse to the tangent field of the foliation. Equivalently, by a result of Dennis Sullivan, a codimension 1 foliation is taut if there exists a Riemannian metric that makes each leaf a minimal surface. Taut foliations were brought to prominence by the work of William Thurston and David Gabai.
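In symbols, the complete non-integrability condition referred to in the contact geometry paragraph above is standardly written as follows (a standard formulation, not taken from this article). For a plane field ξ = ker α defined by a one-form α on a 3-manifold:

% contact condition: the plane field is completely non-integrable
\alpha \wedge d\alpha \neq 0 \ \text{everywhere} \quad (\xi \ \text{is a contact structure}),
% versus the Frobenius integrability condition defining a foliation
\alpha \wedge d\alpha \equiv 0 \quad (\xi \ \text{is tangent to a foliation}).

The standard example on R3 is α = dz − y dx, for which α ∧ dα = dz ∧ dx ∧ dy is nowhere zero.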
Foundational results Some results are named as conjectures as a result of historical artifacts. We begin with the purely topological: Moise's theorem In geometric topology, Moise's theorem, proved by Edwin E. Moise in 1952, states that any topological 3-manifold has an essentially unique piecewise-linear structure and smooth structure. As a corollary, every compact 3-manifold has a Heegaard splitting. Prime decomposition theorem The prime decomposition theorem for 3-manifolds states that every compact, orientable 3-manifold is the connected sum of a unique (up to homeomorphism) collection of prime 3-manifolds. A manifold is prime if it cannot be presented as a connected sum of more than one manifold, none of which is the sphere of the same dimension. Kneser–Haken finiteness Kneser–Haken finiteness says that for each compact 3-manifold, there is a constant C such that any collection of disjoint incompressible embedded surfaces of cardinality greater than C must contain parallel elements. Loop and Sphere theorems The loop theorem is a generalization of Dehn's lemma and should more properly be called the "disk theorem". It was first proven by Christos Papakyriakopoulos in 1956, along with Dehn's lemma and the Sphere theorem. A simple and useful version of the loop theorem states that if there is a map f : (D2, ∂D2) → (M, ∂M) with the restriction of f to ∂D2 not nullhomotopic in ∂M, then there is an embedding with the same property. The sphere theorem of Papakyriakopoulos gives conditions for elements of the second homotopy group of a 3-manifold to be represented by embedded spheres. One example is the following: Let M be an orientable 3-manifold such that π2(M) is not the trivial group. Then there exists a non-zero element of π2(M) having a representative that is an embedding S2 → M. Annulus and Torus theorems The annulus theorem states that if a pair of disjoint simple closed curves on the boundary of a three-manifold are freely homotopic then they cobound a properly embedded annulus. This should not be confused with the high-dimensional theorem of the same name. The torus theorem is as follows: Let M be a compact, irreducible 3-manifold with nonempty boundary. If M admits an essential map of a torus, then M admits an essential embedding of either a torus or an annulus. JSJ decomposition The JSJ decomposition, also known as the toral decomposition, is a topological construct given by the following theorem: Irreducible orientable closed (i.e., compact and without boundary) 3-manifolds have a unique (up to isotopy) minimal collection of disjointly embedded incompressible tori such that each component of the 3-manifold obtained by cutting along the tori is either atoroidal or Seifert-fibered. The acronym JSJ is for William Jaco, Peter Shalen, and Klaus Johannson. The first two worked together, and the third worked independently. Scott core theorem The Scott core theorem is a theorem about the finite presentability of fundamental groups of 3-manifolds due to G. Peter Scott. The precise statement is as follows: Given a 3-manifold (not necessarily compact) with finitely generated fundamental group, there is a compact three-dimensional submanifold, called the compact core or Scott core, such that its inclusion map induces an isomorphism on fundamental groups. In particular, this means a finitely generated 3-manifold group is finitely presentable. A simplified proof has since been given, and a stronger uniqueness statement has been proven.
Lickorish–Wallace theorem The Lickorish–Wallace theorem states that any closed, orientable, connected 3-manifold may be obtained by performing Dehn surgery on a framed link in the 3-sphere with ±1 surgery coefficients. Furthermore, each component of the link can be assumed to be unknotted. Waldhausen's theorems on topological rigidity Friedhelm Waldhausen's theorems on topological rigidity say that certain 3-manifolds (such as those with an incompressible surface) are homeomorphic if there is an isomorphism of fundamental groups which respects the boundary. Waldhausen conjecture on Heegaard splittings Waldhausen conjectured that every closed orientable 3-manifold has only finitely many Heegaard splittings (up to homeomorphism) of any given genus. Smith conjecture The Smith conjecture (now proven) states that if f is a diffeomorphism of the 3-sphere of finite order, then the fixed point set of f cannot be a nontrivial knot. Cyclic surgery theorem The cyclic surgery theorem states that, for a compact, connected, orientable, irreducible three-manifold M whose boundary is a torus T, if M is not a Seifert-fibered space and r,s are slopes on T such that their Dehn fillings have cyclic fundamental group, then the distance between r and s (the minimal number of times that two simple closed curves in T representing r and s must intersect) is at most 1. Consequently, there are at most three Dehn fillings of M with cyclic fundamental group. Thurston's hyperbolic Dehn surgery theorem and the Jørgensen–Thurston theorem Thurston's hyperbolic Dehn surgery theorem states: if M is a cusped hyperbolic 3-manifold and M(u1, ..., un) denotes the result of Dehn filling its cusps along slopes u1, ..., un (with ui = ∞ denoting an unfilled cusp), then M(u1, ..., un) is hyperbolic as long as a finite set of exceptional slopes Ei is avoided for the i-th cusp for each i. In addition, M(u1, ..., un) converges to M in H, the space of finite-volume hyperbolic 3-manifolds with the geometric topology, as ui → ∞ for all ui corresponding to non-empty Dehn fillings. This theorem is due to William Thurston and is fundamental to the theory of hyperbolic 3-manifolds. It shows that nontrivial limits exist in H. Troels Jørgensen's study of the geometric topology further shows that all nontrivial limits arise by Dehn filling as in the theorem. Another important result by Thurston is that volume decreases under hyperbolic Dehn filling. In fact, the theorem states that volume decreases under topological Dehn filling, assuming of course that the Dehn-filled manifold is hyperbolic. The proof relies on basic properties of the Gromov norm. Jørgensen also showed that the volume function on this space is a continuous, proper function. Thus by the previous results, nontrivial limits in H are taken to nontrivial limits in the set of volumes. In fact, one can further conclude, as did Thurston, that the set of volumes of finite-volume hyperbolic 3-manifolds has ordinal type ω^ω. This result is known as the Thurston–Jørgensen theorem. Further work characterizing this set was done by Gromov. Also, Gabai, Meyerhoff & Milley showed that the Weeks manifold has the smallest volume of any closed orientable hyperbolic 3-manifold. Thurston's hyperbolization theorem for Haken manifolds One form of Thurston's geometrization theorem states: If M is a compact irreducible atoroidal Haken manifold whose boundary has zero Euler characteristic, then the interior of M has a complete hyperbolic structure of finite volume. The Mostow rigidity theorem implies that if a manifold of dimension at least 3 has a hyperbolic structure of finite volume, then it is essentially unique. The conditions that the manifold M should be irreducible and atoroidal are necessary, as hyperbolic manifolds have these properties.
However, the condition that the manifold be Haken is unnecessarily strong. Thurston's hyperbolization conjecture states that a closed irreducible atoroidal 3-manifold with infinite fundamental group is hyperbolic, and this follows from Perelman's proof of the Thurston geometrization conjecture. Tameness conjecture, also called the Marden conjecture or tame ends conjecture The tameness theorem states that every complete hyperbolic 3-manifold with finitely generated fundamental group is topologically tame, in other words homeomorphic to the interior of a compact 3-manifold. The tameness theorem was conjectured by Marden. It was proved by Agol and, independently, by Danny Calegari and David Gabai. It is one of the fundamental properties of geometrically infinite hyperbolic 3-manifolds, together with the density theorem for Kleinian groups and the ending lamination theorem. It also implies the Ahlfors measure conjecture. Ending lamination conjecture The ending lamination theorem, originally conjectured by William Thurston and later proven by Jeffrey Brock, Richard Canary, and Yair Minsky, states that hyperbolic 3-manifolds with finitely generated fundamental groups are determined by their topology together with certain "end invariants", which are geodesic laminations on some surfaces in the boundary of the manifold. Poincaré conjecture The 3-sphere is an especially important 3-manifold because of the now-proven Poincaré conjecture. Originally conjectured by Henri Poincaré, the theorem concerns a space that locally looks like ordinary three-dimensional space but is connected, finite in size, and lacks any boundary (a closed 3-manifold). The Poincaré conjecture claims that if such a space has the additional property that each loop in the space can be continuously tightened to a point, then it is necessarily a three-dimensional sphere. An analogous result has been known in higher dimensions for some time. After nearly a century of effort by mathematicians, Grigori Perelman presented a proof of the conjecture in three papers made available in 2002 and 2003 on arXiv. The proof followed on from the program of Richard S. Hamilton to use the Ricci flow to attack the problem. Perelman introduced a modification of the standard Ricci flow, called Ricci flow with surgery, to systematically excise singular regions as they develop, in a controlled way. Several teams of mathematicians have verified that Perelman's proof is correct. Thurston's geometrization conjecture Thurston's geometrization conjecture states that certain three-dimensional topological spaces each have a unique geometric structure that can be associated with them. It is an analogue of the uniformization theorem for two-dimensional surfaces, which states that every simply connected Riemann surface can be given one of three geometries (Euclidean, spherical, or hyperbolic). In three dimensions, it is not always possible to assign a single geometry to a whole topological space. Instead, the geometrization conjecture states that every closed 3-manifold can be decomposed in a canonical way into pieces that each have one of eight types of geometric structure. The conjecture was proposed by William Thurston in 1982, and implies several other conjectures, such as the Poincaré conjecture and Thurston's elliptization conjecture. Thurston's hyperbolization theorem implies that Haken manifolds satisfy the geometrization conjecture. Thurston announced a proof in the 1980s and since then several complete proofs have appeared in print.
Grigori Perelman sketched a proof of the full geometrization conjecture in 2003 using Ricci flow with surgery. There are now several different manuscripts with details of the proof. The Poincaré conjecture and the spherical space form conjecture are corollaries of the geometrization conjecture, although there are shorter proofs of the former that do not lead to the geometrization conjecture. Virtually fibered conjecture and Virtually Haken conjecture The virtually fibered conjecture, formulated by American mathematician William Thurston, states that every closed, irreducible, atoroidal 3-manifold with infinite fundamental group has a finite cover which is a surface bundle over the circle. The virtually Haken conjecture states that every compact, orientable, irreducible three-dimensional manifold with infinite fundamental group is virtually Haken. That is, it has a finite cover (a covering space with a finite-to-one covering map) that is a Haken manifold. In a posting on the arXiv on 25 Aug 2009, Daniel Wise implied (by referring to a then-unpublished longer manuscript) that he had proven the virtually fibered conjecture for the case where the 3-manifold is closed, hyperbolic, and Haken. This was followed by a survey article in Electronic Research Announcements in Mathematical Sciences. Several more preprints have followed, including the aforementioned longer manuscript by Wise. In March 2012, during a conference at the Institut Henri Poincaré in Paris, Ian Agol announced he could prove the virtually Haken conjecture for closed hyperbolic 3-manifolds. The proof built on results of Kahn and Markovic in their proof of the Surface subgroup conjecture, on results of Wise in proving the Malnormal Special Quotient Theorem, and on results of Bergeron and Wise for the cubulation of groups. Taken together with Wise's results, this implies the virtually fibered conjecture for all closed hyperbolic 3-manifolds. Simple loop conjecture If f : S → T is a map of closed connected surfaces such that the induced map f* : π1(S) → π1(T) is not injective, then there exists a non-contractible simple closed curve α in S such that the restriction of f to α is homotopically trivial. This conjecture was proven by David Gabai. Surface subgroup conjecture The surface subgroup conjecture of Friedhelm Waldhausen states that the fundamental group of every closed, irreducible 3-manifold with infinite fundamental group has a surface subgroup. By "surface subgroup" we mean the fundamental group of a closed surface, not the 2-sphere. This problem is listed as Problem 3.75 in Robion Kirby's problem list. Assuming the geometrization conjecture, the only open case was that of closed hyperbolic 3-manifolds. A proof of this case was announced in the summer of 2009 by Jeremy Kahn and Vladimir Markovic and outlined in a talk on August 4, 2009, at the FRG (Focused Research Group) Conference hosted by the University of Utah. A preprint appeared on the arXiv in October 2009. Their paper was published in the Annals of Mathematics in 2012. In June 2012, Kahn and Markovic were given the Clay Research Awards by the Clay Mathematics Institute at a ceremony in Oxford. Important conjectures Cabling conjecture The cabling conjecture states that if Dehn surgery on a knot in the 3-sphere yields a reducible 3-manifold, then that knot is a (p, q)-cable on some other knot, and the surgery must have been performed using the slope pq. Lubotzky–Sarnak conjecture The fundamental group of any finite-volume hyperbolic n-manifold does not have Property τ.
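As the worked example promised in the connected-sums discussion above, here is a standard computation (not taken from this article) applying those formulas to the connected sum of two copies of real projective 3-space:

\pi_1(\mathbb{RP}^3 \,\#\, \mathbb{RP}^3) \cong \mathbb{Z}/2 * \mathbb{Z}/2, \qquad
H_1(\mathbb{RP}^3 \,\#\, \mathbb{RP}^3) \cong \mathbb{Z}/2 \oplus \mathbb{Z}/2 .

The free product Z/2 * Z/2 is the infinite dihedral group, so this connected sum has infinite fundamental group even though each summand has finite fundamental group; it is also a basic example of a non-prime 3-manifold.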
Mathematics
Topology
null
4182449
https://en.wikipedia.org/wiki/Tablet%20computer
Tablet computer
A tablet computer, commonly shortened to tablet, is a mobile device, typically with a mobile operating system and touchscreen display, processing circuitry, and a rechargeable battery in a single, thin and flat package. Tablets, being computers, have capabilities similar to those of other personal computers, but lack some of the input/output (I/O) abilities that others have. Modern tablets largely resemble modern smartphones, the main differences being that tablets are relatively larger than smartphones, with screens 7 inches (18 cm) or larger, measured diagonally, and may not support access to a cellular network. Unlike laptops (which have traditionally run operating systems usually designed for desktops), tablets usually run mobile operating systems, like smartphones. The touchscreen display is operated by gestures executed by finger or digital pen (stylus), instead of the mouse, touchpad, and keyboard of larger computers. Portable computers can be classified according to the presence and appearance of physical keyboards. Two species of tablet, the slate and booklet, do not have physical keyboards and usually accept text and other input by use of a virtual keyboard shown on their touchscreen displays. To compensate for their lack of a physical keyboard, most tablets can connect to independent physical keyboards by Bluetooth or USB; 2-in-1 PCs have keyboards, distinct from tablets. The form of the tablet was conceptualized in the middle of the 20th century (Stanley Kubrick depicted fictional tablets in the 1968 science fiction film 2001: A Space Odyssey) and prototyped and developed in the last two decades of that century. In 2010, Apple released the iPad, the first mass-market tablet to achieve widespread popularity. Thereafter, tablets rapidly rose in ubiquity and soon became a large product category used for personal, educational and workplace applications. Popular uses for a tablet PC include viewing presentations, video-conferencing, reading e-books, watching movies, sharing photos and more. As of 2021, there were 1.28 billion tablet users worldwide according to data provided by Statista, while Apple holds the largest manufacturer market share, followed by Samsung and Lenovo. History The tablet computer and its associated operating system began with the development of pen computing. Electrical devices with data input and output on a flat information display existed as early as 1888 with the telautograph, which used a sheet of paper as display and a pen attached to electromechanical actuators. Throughout the 20th century, devices with these characteristics were imagined and created, whether as blueprints, prototypes, or commercial products. In addition to many academic and research systems, several companies released commercial products in the 1980s, with various input/output types tried out. Fictional and prototype tablets Tablet computers appeared in a number of works of science fiction in the second half of the 20th century; all helped to promote and disseminate the concept to a wider audience. Examples include: Isaac Asimov described a Calculator Pad in his novel Foundation (1951) Stanisław Lem described the Opton in his novel Return from the Stars (1961) Numerous similar devices were depicted in Gene Roddenberry's Star Trek: The Original Series (1966) In the Doctor Who serial The Dominators, the educator Balan holds a tablet which he inputs data into using swipe gestures (1968) Arthur C. 
Clarke's newspad was depicted in Stanley Kubrick's film 2001: A Space Odyssey (1968) Douglas Adams described a tablet computer in The Hitchhiker's Guide to the Galaxy and the associated radio comedy of the same name (1978) The science fiction TV series Star Trek: The Next Generation featured tablet computers which were designated as PADDs, notable for (as with most computers in the show) using a touchscreen interface, both with and without a stylus (1987) A device more powerful than today's tablets appeared briefly in The Mote in God's Eye (1974) The Star Wars franchise features datapads, first described in print in the 1991 novel Heir to the Empire, and depicted on screen in the 1999 feature film Star Wars: Episode I – The Phantom Menace Further, real-life projects either proposed or created tablet computers, such as: In 1968, computer scientist Alan Kay envisioned a KiddiComp; he developed and described the concept as a Dynabook in his proposal, A personal computer for children of all ages (1972), which outlines functionality similar to that supplied via a laptop computer, or (in some of its other incarnations) a tablet or slate computer, with the exception of near-eternal battery life. The target audience was children. In 1979, the idea of a touchscreen tablet that could detect an external force applied to one point on the screen was patented in Japan by a team at Hitachi consisting of Masao Hotta, Yoshikazu Miyamoto, Norio Yokozawa and Yoshimitsu Oshima, who later received a US patent for their idea. In 1992, Atari showed developers the Stylus, later renamed ST-Pad. The ST-Pad was based on the TOS/GEM Atari ST platform and prototyped early handwriting recognition. Around the same time, Shiraz Shivji's company Momentus demonstrated a failed x86 MS-DOS-based pen computer with its own graphical user interface (GUI). In 1994, the European Union initiated the NewsPad project, inspired by Clarke and Kubrick's fictional work. Acorn Computers developed and delivered an ARM-based touchscreen tablet computer for this program, branding it the "NewsPad"; the project ended in 1997. During the November 2000 COMDEX, Microsoft used the term Tablet PC to describe a prototype handheld device it was demonstrating. In 2001, Ericsson Mobile Communications announced an experimental product named the DelphiPad, which was developed in cooperation with the Centre for Wireless Communications in Singapore, with a touch-sensitive screen, Netscape Navigator as a web browser, and Linux as its operating system. Early tablets Following earlier tablet computer products such as the Pencept PenPad, the Linus Write-Top, and the CIC Handwriter, in September 1989, Grid Systems released the first commercially successful tablet computer, the GridPad. All four products were based on extended versions of the MS-DOS operating system. In 1992, IBM announced (in April) and shipped to developers (in October) the ThinkPad 700T (2521), which ran the GO Corporation's PenPoint OS. Also based on PenPoint was AT&T's EO Personal Communicator from 1993, which ran on AT&T's own hardware, including their own AT&T Hobbit CPU. Apple Computer launched the Apple Newton personal digital assistant in 1993. It used Apple's own new Newton OS, initially running on hardware manufactured by Motorola and incorporating an ARM CPU that Apple had specifically co-developed with Acorn Computers. The operating system and platform design were later licensed to Sharp and Digital Ocean, who went on to manufacture their own variants. 
Pen computing was heavily hyped by the media during the early 1990s. Microsoft, the dominant PC software vendor, released Windows for Pen Computing in 1992 to compete against PenPoint OS. The company launched the WinPad project, working together with OEMs such as Compaq, to create a small device with a Windows-like operating system and handwriting recognition. However, the project was abandoned two years later; instead, Windows CE was released in the form of "Handheld PCs" in 1996. That year, Palm, Inc. released the first of its Palm OS-based PalmPilot touch- and stylus-based PDAs, the devices initially incorporating a Motorola DragonBall (68000-family) CPU. Also in 1996, Fujitsu released the Stylistic 1000 tablet-format PC, running Microsoft Windows 95 on a 100 MHz AMD 486 DX4 CPU with 8 MB of RAM, offering stylus input with the option of connecting a conventional keyboard and mouse. Intel announced a StrongARM processor-based touchscreen tablet computer in 1999, under the name WebPAD. It was later rebranded as the "Intel Web Tablet". In 2000, the Norwegian company Screen Media AS and the German company Dosch & Amand GmbH released the "FreePad". It was based on Linux and used the Opera browser. Internet access was provided by DECT DMAP, only available in Europe, which provided up to 10 Mbit/s. The device had 16 MB of storage, 32 MB of RAM and an x86-compatible 166 MHz "Geode" microcontroller by National Semiconductor. The screen was 10.4" or 12.1" and was touch sensitive. It had slots for SIM cards to enable television set-top box support. FreePads were sold in Norway and the Middle East, but the company was dissolved in 2003. Sony released its Airboard tablet in Japan in late 2000 with full wireless Internet capabilities. In the late 1990s, Microsoft launched the Handheld PC platform using its Windows CE operating system; while most devices were not tablets, a few touch-enabled tablets were released on the platform, such as the Fujitsu PenCentra 130 and Siemens's SIMpad. Microsoft took a more significant approach to tablets in 2002 as it attempted to define the Microsoft Tablet PC as a mobile computer for field work in business, though its devices failed, mainly due to pricing and usability decisions that limited them to their original purpose – such as the existing devices being too heavy to be held with one hand for extended periods, and having legacy applications created for desktop interfaces and not well adapted to the slate format. Nokia had had plans for an Internet tablet since before 2000. An early model was test manufactured in 2001, the Nokia M510, which ran EPOC and featured an Opera browser, speakers and a 10-inch 800×600 screen, but it was not released because of fears that the market was not ready for it. Nokia entered the tablet space in May 2005 with the Nokia 770 running Maemo, a Debian-based Linux distribution custom-made for its Internet tablet line. The user interface and application framework layer, named Hildon, was an early instance of a software platform for generic computing in a tablet device intended for internet consumption. But Nokia did not commit to it as the only platform for its future mobile devices; the project competed against other in-house platforms, and Nokia later replaced it with Series 60. Nokia used the term internet tablet to refer to a portable information appliance that focused on Internet use and media consumption, in the range between a personal digital assistant (PDA) and an Ultra-Mobile PC (UMPC). 
Nokia made two mobile phones based on these platforms: the N900, which runs Maemo, and the N9, which runs MeeGo. Before the release of the iPad, Axiotron introduced an aftermarket, heavily modified Apple MacBook called the Modbook, a Mac OS X-based tablet computer. The Modbook uses Apple's Inkwell for handwriting and gesture recognition, and uses digitization hardware from Wacom. To get Mac OS X to talk to the digitizer on the integrated tablet, the Modbook was supplied with a third-party driver. Following the launch of the Ultra-mobile PC, Intel began the Mobile Internet Device initiative, which took the same hardware and combined it with a tabletized Linux configuration. Intel co-developed the lightweight Moblin (mobile Linux) operating system following the successful launch of the Atom CPU series on netbooks. In 2010, Nokia and Intel combined the Maemo and Moblin projects to form MeeGo, a Linux-based operating system that supports netbooks and tablets. The first tablet using MeeGo was the Neofonie WeTab, launched in September 2010 in Germany. The WeTab used an extended version of the MeeGo operating system called WeTab OS. WeTab OS adds runtimes for Android and Adobe AIR and provides a proprietary user interface optimized for the WeTab device. On September 27, 2011, the Linux Foundation announced that MeeGo would be replaced in 2012 by Tizen. Modern tablets Android was the first of the 2000s-era dominating platforms for tablet computers to reach the market. In 2008, the first plans for Android-based tablets appeared. The first products were released in 2009. Among them was the Archos 5, a pocket-sized model with a 5-inch touchscreen, which was first released with a proprietary operating system and later (in 2009) released with an early version of Android. The Camangi WebStation was released in Q2 2009. The first LTE Android tablet appeared in late 2009 and was made by ICD for Verizon. This unit was called the Ultra, but a version called Vega was released around the same time. The Ultra had a 7-inch display, while the Vega's was 15 inches. Many more products followed in 2010. Several manufacturers waited for Android Honeycomb, specifically adapted for use with tablets, which debuted in February 2011. Apple is often credited with defining a new class of consumer device with the iPad, which shaped the commercial market for tablets in the following years, and was the most successful tablet at the time of its release. iPads and competing devices were tested by the U.S. military in 2011 and cleared for secure use in 2013. Its debut in 2010 pushed tablets into the mainstream. Samsung's Galaxy Tab and others followed, continuing the trends towards the features listed above. In March 2012, PC Magazine reported that 31% of U.S. Internet users owned a tablet, used mainly for viewing published content such as video and news. The top-selling line of devices was Apple's iPad, with 100 million sold between its release in April 2010 and mid-October 2012, but iPad market share (number of units) dropped to 36% in 2013, with Android tablets climbing to 62%. Android tablet sales volumes were 52 million devices in 2012 and 121 million in 2013. Individual brands of Android operating system devices or compatibles follow the iPad, with Amazon's Kindle Fire at 7 million and Barnes & Noble's Nook at 5 million. The BlackBerry PlayBook, which ran the BlackBerry Tablet OS, was announced in September 2010. The BlackBerry PlayBook was officially released to US and Canadian consumers on April 19, 2011. 
Hewlett-Packard announced that the TouchPad, running webOS 3.0 on a 1.2 GHz Qualcomm Snapdragon CPU, would be released in June 2011. On August 18, 2011, HP announced the discontinuation of the TouchPad, due to sluggish sales. In 2013, the Mozilla Foundation announced a prototype tablet model with Foxconn which ran on Firefox OS. Firefox OS was discontinued in 2016. Canonical hinted that Ubuntu would be available on tablets by 2014. In February 2016, there was a commercial release of the BQ Aquaris Ubuntu tablet using the Ubuntu Touch operating system. Canonical terminated support for the project due to lack of market interest on April 5, 2017, and it was then adopted by UBports as a community project. As of February 2014, 83% of mobile app developers were targeting tablets, while 93% of developers were targeting smartphones. By 2014, around 23% of B2B companies were said to have deployed tablets for sales-related activities, according to a survey report by Corporate Visions. The iPad held majority use in North America, Western Europe, Japan, Australia, and most of the Americas. Android tablets were more popular in most of Asia (China and Russia being exceptions), Africa and Eastern Europe. In 2015, tablet sales did not increase. Apple remained the largest seller, but its market share declined below 25%. Samsung vice president Gary Riding said early in 2016 that tablets were only doing well among those using them for work. Newer models were more expensive and designed for a keyboard and stylus, which reflected the changing uses. As of early 2016, Android reigned over the market with 65%. Apple took the number 2 spot with 26%, and Windows took a distant third with the remaining 9%. In 2018, out of 4.4 billion computing devices, Android accounted for 2 billion, iOS for 1 billion, and the remainder were PCs, in various forms (desktop, notebook, or tablet), running various operating systems (Windows, macOS, ChromeOS, Linux, etc.). Since the early 2020s, various companies such as Samsung have begun to introduce foldable technology into their tablets. Types Tablets can be loosely grouped into several categories by physical size, kind of operating system installed, input and output technology, and uses. Slate The size of a slate varies, but slates begin at 6 inches (approximately 15 cm). Some models in the larger than 10-inch (25 cm) category include the Samsung Galaxy Tab Pro 12.2 at 12.2 inches (31 cm), the Toshiba Excite at 13.3 inches (33 cm) and the Dell XPS 18 at 18.4 inches (47 cm). As of March 2013, the thinnest tablet on the market was the Sony Xperia Tablet Z at only 0.27 inches (6.9 mm) thick. On September 9, 2015, Apple released the iPad Pro with a 12.9-inch (33 cm) screen size, larger than the regular iPad. Mini tablet Mini tablets are smaller and weigh less than slates, with typical screen sizes between 7 and 8 inches (18–20 cm). The first commercially successful mini tablets were introduced by Amazon.com (Kindle Fire), Barnes & Noble (Nook Tablet), and Samsung (Galaxy Tab) in 2011; and by Google (Nexus 7) in 2012. They operate identically to ordinary tablets but have lower specifications. On September 14, 2012, Amazon released an upgraded version of the Kindle Fire, the Kindle Fire HD, with higher screen resolution and more features compared to its predecessor, yet remaining only 7 inches. In October 2012, Apple released the iPad Mini with a 7.9-inch screen size, about 2 inches smaller than the regular iPad, but less powerful than the then-current iPad 3. 
On July 24, 2013, Google released an upgraded version of the Nexus 7, with a full HD display, dual cameras, stereo speakers, better color accuracy, improved performance, built-in wireless charging, and a variant with 4G LTE support for AT&T, T-Mobile, and Verizon. In September 2013, Amazon further updated the Fire tablet line with the Kindle Fire HDX. In November 2013, Apple released the iPad Mini 2, which remained at 7.9 inches and nearly matched the hardware of the iPad Air. Phablet Smartphones and tablets are similar devices, differentiated by the former typically having smaller screens and most tablets lacking cellular network capability. Since 2010, crossover touchscreen smartphones with screens larger than 5 inches have been released. That size is generally considered larger than a traditional smartphone, creating a hybrid category that Forbes and other publications dubbed the phablet. "Phablet" is a portmanteau of "phone" and "tablet". At the time of the introduction of the first phablets, they had screens of 5.3 to 5.5 inches, but as of 2017, screen sizes up to 5.5 inches are considered typical. Examples of phablets from 2017 onward are the Samsung Galaxy Note series (newer models of 5.7 inches), the LG V10/V20 (5.7 inches), the Sony Xperia XA Ultra (6 inches), the Huawei Mate 9 (5.9 inches), and the Huawei Honor (MediaPad) X2 (7 inches). 2-in-1 A 2-in-1 PC is a hybrid or combination of a tablet and laptop computer that has features of both. Distinct from tablets, 2-in-1 PCs all have physical keyboards, but they are either concealable by folding them back and under the touchscreen ("2-in-1 convertible") or detachable ("2-in-1 detachable"). 2-in-1s typically also can display a virtual keyboard on their touchscreens when their physical keyboards are concealed or detached. Some 2-in-1s have processors and operating systems like those of laptops, such as Windows 10, while having the flexibility of operation as a tablet. Further, 2-in-1s may have typical laptop I/O ports, such as USB 3 and DisplayPort, and may connect to traditional PC peripheral devices and external displays. Simple tablets are mainly used as media consumption devices, while 2-in-1s have capacity for both media consumption and content creation, and thus 2-in-1s are often called laptop or desktop replacement computers. There are two species of 2-in-1s: Convertibles have a chassis design by which their physical keyboard may be concealed by flipping/folding the keyboard behind the chassis. Examples include 2-in-1 PCs of the Lenovo Yoga series. Detachables or hybrids have physical keyboards that may be detached from their chassis, even while the 2-in-1 is operating. Examples include 2-in-1 PCs of the Asus Transformer Pad and Book series, the iPad Pro, and the Microsoft Surface Book and Surface Pro. Gaming tablet Some tablets are modified by adding physical gamepad buttons, such as a D-pad and thumb sticks, for a better gaming experience, combined with the touchscreen and all other features of a typical tablet computer. Most of these tablets are targeted to run native OS games and emulator games. Nvidia's Shield Tablet, with an 8-inch display and running Android, is an example. It runs Android games purchased from the Google Play store. PC games can also be streamed to the tablet from computers with some higher-end models of Nvidia-powered video cards. 
The Nintendo Switch hybrid console is also a gaming tablet that runs on its own system software; it features detachable Joy-Con controllers with motion controls and three gaming modes: tabletop mode using its kickstand, traditional docked/TV mode, and handheld mode. While not entirely an actual tablet form factor due to their sizes, some other handheld consoles, including the smaller version of the Nintendo Switch, the Nintendo Switch Lite, and the PlayStation Vita, are treated as gaming tablets or tablet replacements by communities and reviewers because of their internet browsing and multimedia capabilities. Booklet Booklets are dual-touchscreen tablet computers with a clamshell design that can fold like a laptop. Examples include the Microsoft Courier, which was discontinued in 2010, the Sony Tablet P (considered a flop), and the Toshiba Libretto W100. Customized business tablet Customized business tablets are built specifically for a business customer's particular needs from a hardware and software perspective, and delivered in a business-to-business transaction. For example, in hardware, a transportation company may find that the consumer-grade GPS module in an off-the-shelf tablet provides insufficient accuracy, so a tablet can be customized and embedded with a professional-grade antenna to provide a better GPS signal. Such tablets may also be ruggedized for field use. For a software example, the same transportation company might remove certain software functions in the Android system, such as the web browser, to reduce costs from an employee's needless cellular network data consumption, and add custom package management software. Other applications may call for a resistive touchscreen and other special hardware and software. A table ordering tablet is a touchscreen tablet computer designed for use in casual restaurants. Such devices allow users to order food and drinks, play games and pay their bill. Since 2013, restaurant chains including Chili's, Olive Garden and Red Robin have adopted them. As of 2014, the two most popular brands were Ziosk and Presto. The devices have been criticized by servers, who claim that some restaurants determine their hours based on customer feedback in areas unrelated to service. E-reader Any device that can display text on a screen may act as an e-reader. While traditionally e-readers are designed primarily for reading digital e-books and periodicals, modern e-readers that use a mobile operating system such as Android have incorporated modern functionality, including internet browsing and multimedia capabilities; for example, the Huawei MatePad Paper is a tablet that uses e-ink instead of a typical LCD or LED panel, hence focusing on reading digital content while maintaining internet and multimedia capabilities. Some e-readers, such as the PocketBook InkPad Color and the Onyx Boox Nova 3 Color, even come with a color e-ink panel and speakers, which allow a higher degree of multimedia consumption and video playback. The Kindle line from Amazon was originally limited to e-reading capabilities; however, an update to the Kindle firmware added the ability to browse the Internet and play audio, allowing Kindles to serve as alternatives to a traditional tablet in some cases, with a more readable e-ink panel and greater battery life, and providing the user with access to wider multimedia capabilities compared to older models. 
Hardware System architecture Two major architectures dominate the tablet market: ARM Ltd.'s ARM architecture and Intel's and AMD's x86. Intel's x86, including x86-64, has powered the "IBM compatible" PC since 1981 and Apple's Macintosh computers since 2006. These CPUs have been incorporated into tablet PCs over the years and generally offer greater performance along with the ability to run full versions of Microsoft Windows, along with Windows desktop and enterprise applications. Non-Windows-based x86 tablets include the JooJoo. Intel announced plans to enter the tablet market with its Atom in 2010. In October 2013, Intel's foundry operation announced plans to build FPGA-based quad cores for ARM and x86 processors. ARM has been the CPU architecture of choice for manufacturers of smartphones (95% ARM), PDAs, digital cameras (80% ARM), set-top boxes, DSL routers, smart televisions (70% ARM), storage devices and tablet computers (95% ARM). This dominance began with the release of the mobile-focused and comparatively power-efficient 32-bit ARM610 processor, originally designed for the Apple Newton in 1993, and the ARM3-based Acorn A4 laptop in 1992. The chip was adopted by Psion, Palm and Nokia for PDAs and later smartphones, camera phones, cameras, etc. ARM's licensing model supported this success by allowing device manufacturers to license, alter and fabricate custom SoC derivatives tailored to their own products. This has helped manufacturers extend battery life and shrink component count along with the size of devices. The multiple licensees ensured that multiple fabricators could supply near-identical products, while encouraging price competition. This forced unit prices down to a fraction of their x86 equivalents. The architecture has historically had limited support from Microsoft, with only Windows CE available, but with the 2012 release of Windows 8, Microsoft announced added support for the architecture, shipping its own ARM-based tablet computer, branded the Microsoft Surface, as well as an x86-64 Intel Core i5 variant branded as the Microsoft Surface Pro. Intel tablet chip sales were 1 million units in 2012, and 12 million units in 2013. Intel chairman Andy Bryant stated that its 2014 goal was to quadruple its tablet chip sales to 40 million units by the end of that year, as an investment for 2015. Display A key component among tablet computers is touch input on a touchscreen display. This allows the user to navigate easily and type with a virtual keyboard on the screen or press other icons on the screen to open apps or files. The first tablet to do this was the Linus Write-Top by Linus Technologies; the tablet featured both a stylus, a pen-like tool to aid with precision in a touchscreen device, and handwriting recognition. The system must respond to on-screen touches rather than clicks of a keyboard or mouse. This operation makes precise use of the user's eye–hand coordination. Touchscreens usually come in one of two forms: Resistive touchscreens are passive and respond to pressure on the screen. They allow a high level of precision, useful in emulating a pointer (as is common in tablet computers), but may require calibration. Because of the high resolution, a stylus or fingernail is often used. Stylus-oriented systems are less suited to multi-touch. Capacitive touchscreens tend to be less accurate, but more responsive than resistive devices. 
Because they require a conductive material, such as a fingertip, for input, they are not common among stylus-oriented devices but are prominent on consumer devices. Most finger-driven capacitive screens do not currently support pressure input (except for the iPhone 6S and later models), but some tablets use a pressure-sensitive stylus or active pen. Some tablets can recognize individual palms, while some professional-grade tablets use pressure-sensitive films, such as those on graphics tablets. Some capacitive touchscreens can detect the size of the touched area and the pressure used. Since the mid-2010s, most tablets have used capacitive touchscreens with multi-touch, unlike earlier resistive touchscreen devices, with which users needed styluses to perform inputs. There are also electronic paper tablets, such as the Sony Digital Paper DPT-S1 and the reMarkable, that use E Ink for their display technology. Handwriting recognition Many tablets support a stylus and support handwriting recognition. Wacom and N-trig digital pens provide approximately 2500 DPI resolution for handwriting, exceeding the resolution of capacitive touchscreens by more than a factor of 10. These pens also support pressure sensitivity, allowing for "variable-width stroke-based" characters, such as Chinese/Japanese/Korean writing, due to their built-in capability of "pressure sensing". Pressure is also used in digital art applications such as Autodesk Sketchbook. Apps exist on both iOS and Android platforms for handwriting recognition, and in 2015 Google introduced its own handwriting input with support for 82 languages. Other features After 2007, with access to capacitive screens and the success of the iPhone, other features became common, such as multi-touch features (in which the user can touch the screen in multiple places to trigger actions and other natural user interface features), as well as flash memory solid-state storage and "instant on" warm-booting; external USB and Bluetooth keyboards also came to define tablets. Most tablets released since mid-2010 use a version of an ARM processor for longer battery life. The ARM Cortex family is powerful enough for tasks such as internet browsing, light creative and production work, and mobile games. Other features are: a high-definition, anti-glare display; a touchscreen; lower weight and longer battery life than a comparably-sized laptop; wireless local area and internet connectivity (usually with the Wi-Fi standard and optional mobile broadband); Bluetooth for connecting peripherals and communicating with local devices; ports for wired connections and charging, for example USB ports; docking stations, keyboards and added connectivity (early devices had IR support and could work as a TV remote controller); on-board flash memory; ports for removable storage; various cloud storage services for backup and syncing data across devices; and local storage on a local area network (LAN). Speech recognition Google introduced voice input in Android 2.1 in 2009 and voice actions in 2.2 in 2010, with up to five languages (now around 40). Siri was introduced as a system-wide personal assistant on the iPhone 4S in 2011 and now supports nearly 20 languages. In both cases, the voice input is sent to central servers to perform general speech recognition and thus requires a network connection for more than simple commands. Some tablets also support near-field communication with other compatible devices, including ISO/IEC 14443 RFID tags. 
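The multi-touch and pressure reporting described above is exposed to applications through the platform's touch-event APIs. The following sketch is our illustration, in Kotlin, of how an Android app can read per-finger coordinates and (where the hardware supports it) pressure using the standard MotionEvent class; it is a minimal example, not a description of any particular tablet's firmware.

    import android.view.MotionEvent
    import android.view.View

    // Minimal sketch: log position and pressure for every active finger.
    // getPressure() returns a normalized value (typically 0..1) on hardware
    // that reports pressure; purely capacitive screens without pressure
    // sensing may return an approximation based on contact size.
    val touchListener = View.OnTouchListener { _, event ->
        when (event.actionMasked) {
            MotionEvent.ACTION_DOWN,
            MotionEvent.ACTION_POINTER_DOWN,
            MotionEvent.ACTION_MOVE -> {
                // One MotionEvent carries a sample for each tracked pointer.
                for (i in 0 until event.pointerCount) {
                    println(
                        "pointer=$i x=${event.getX(i)} y=${event.getY(i)} " +
                            "pressure=${event.getPressure(i)}"
                    )
                }
                true // event consumed
            }
            else -> false
        }
    }

Attaching this listener to any View (for example, view.setOnTouchListener(touchListener)) would print one line per tracked finger on each touch event.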
Software Current tablet operating systems Tablets, like conventional PCs, use several different operating systems, though dual-booting is rare. Tablet operating systems come in two classes: Desktop computer operating systems Mobile operating systems Desktop OS-based tablets are currently thicker and heavier. They require more storage and more cooling and give less battery life. They can run processor-intensive graphical applications in addition to mobile apps, and have more ports. Mobile-based tablets are the reverse, and run only mobile apps. They can use battery life conservatively because the processor is significantly smaller. This allows the battery to last much longer than in the common laptop. In Q1 2018, Android tablets had 62% of the market, Apple's iOS had 23.4% of the market, and Windows 10 had 14.6% of the market. In late 2021, iOS had 55% use worldwide (varying by continent, e.g. below 50% in South America and Africa) and Android 45%. Still, Android tablets saw more use than iOS ones in virtually all countries, with exceptions such as the U.S. and China. Android Android is a Linux-based operating system that Google offers as open source under the Apache license. It is designed primarily for mobile devices such as smartphones and tablet computers. Android supports low-cost ARM systems and others. The first tablets running Android were released in 2009. Vendors such as Motorola and Lenovo delayed deployment of their tablets until after 2011, when Android was reworked to include more tablet features. Android 3.0 (Honeycomb), released in 2011, and later versions support larger screen sizes, mainly tablets, and have access to the Google Play service. Android includes an operating system, middleware and key applications. Other vendors sell customized Android tablets, such as the Kindle Fire and Nook, which are used to consume mobile content and provide their own app store, rather than using the larger Google Play system, thereby fragmenting the Android market. In 2022, Google began to re-emphasize in-house Android tablet development, describing it as a multi-year commitment. Android Go A few tablet computers are shipped with Android Go. Fire OS As mentioned above, Amazon Fire OS is an Android-based mobile operating system produced by Amazon for its Fire range of tablets. It is forked from Android. Fire OS primarily centers on content consumption, with a customized user interface and heavy ties to content available from Amazon's own storefronts and services. ChromeOS Several devices that run ChromeOS came on the market in 2017–2019, as tablets, or as 2-in-1s with a touchscreen and 360-degree hinge. HarmonyOS HarmonyOS (HMOS) is a distributed operating system developed by Huawei to collaborate and interconnect with multiple smart devices on the Internet of Things (IoT) ecosystem. In its current multi-kernel design, the operating system selects suitable kernels from the abstraction layer for devices with diverse resources. For IoT devices, the system is known to be based on the LiteOS kernel; for smartphones and tablets, it is based on a Linux kernel layer with AOSP libraries to support Android application package (APK) apps using Android Runtime (ART) through the Ark Compiler, in addition to native HarmonyOS apps built via an integrated development environment (IDE) known as DevEco Studio. iPadOS The iPad runs on iPadOS. Prior to the introduction of iPadOS in 2019, the iPad ran iOS, which was created for the iPhone and iPod Touch. The first iPad was released in 2010. 
Although built on the same underlying Unix implementation as macOS, its user interface is radically different. iPadOS is designed for touch input from the user's fingers and has none of the features that required a stylus on earlier tablets. Apple introduced multi-touch gestures, such as moving two fingers apart or together to zoom in or out, also termed pinch to zoom. iPadOS and iOS are built for the ARM architecture. Kindle firmware Kindle firmware is a mobile operating system specifically designed for Amazon Kindle e-readers. It is based on a custom Linux kernel; however, it is entirely closed-source and proprietary, and only runs on the Amazon Kindle lineup manufactured under the Amazon brand. Nintendo Switch system software The Nintendo Switch system software (also known by its codename Horizon) is an updatable firmware and operating system used by the Nintendo Switch hybrid video game console/tablet and the Nintendo Switch Lite handheld game console. It is based on a proprietary microkernel. The UI includes a HOME screen, consisting of the top bar, the screenshot viewer ("Album"), and shortcuts to the Nintendo eShop, News, and Settings. PlayStation Vita system software The PlayStation Vita system software is the official firmware and operating system for the PlayStation Vita and PlayStation TV video game consoles. It uses LiveArea as its graphical shell. The PlayStation Vita system software has one optional add-on component, the PlayStation Mobile Runtime Package. The system is built on a Unix base derived from FreeBSD and NetBSD. Due to its internet browsing and multimedia capabilities, the Vita is treated as a gaming tablet or tablet replacement by communities and reviewers. Ubuntu Touch Ubuntu Touch is an open-source (GPL) mobile version of the Ubuntu operating system, originally developed in 2013 by Canonical Ltd. and continued by the non-profit UBports Foundation from 2017. Ubuntu Touch can run on a pure GNU/Linux base on phones with the required drivers, such as the Librem 5 and the PinePhone. To enable hardware that was originally shipped with Android, Ubuntu Touch makes use of the Android Linux kernel, using Android drivers and services via an LXC container, but does not use any of the Java-like code of Android. As of February 2022, Ubuntu Touch is available on 78 phones and tablets. The UBports Installer serves as an easy-to-use tool to allow inexperienced users to install the operating system on third-party devices without damaging their hardware. Windows Following Windows for Pen Computing for Windows 3.1 in 1992, Microsoft supported tablets running Windows XP under the Microsoft Tablet PC name. Microsoft Tablet PCs were pen-based, fully functional x86 PCs with handwriting and voice recognition functionality. Windows XP Tablet PC Edition provided pen support. Tablet support was added to both Home and Business versions of Windows Vista and Windows 7. Tablets running Windows could use the touchscreen for mouse input, handwriting recognition and gesture support. Following Tablet PC, Microsoft announced the Ultra-mobile PC initiative in 2006, which brought Windows tablets to a smaller, touch-centric form factor. In 2008, Microsoft showed a prototype of a two-screen tablet called Microsoft Courier, but cancelled the project. In 2012, Microsoft released Windows 8, which features significant changes to various aspects of the operating system's user interface and platform designed for touch-based devices such as tablets. 
The operating system also introduced an application store and a new style of application optimized primarily for use on tablets. Microsoft also introduced Windows RT, an edition of Windows 8 for use on ARM-based devices. The launch of Windows 8 and RT was accompanied by the release of devices with the two operating systems by various manufacturers (including Microsoft itself, with the release of Surface), such as slate tablets, hybrids, and convertibles. Released in July 2015, Windows 10 introduced what Microsoft described as "universal apps"; expanding on Metro-style apps, these apps can be designed to run across multiple Microsoft product families with nearly identical code, including PCs, tablets, smartphones, embedded systems, Xbox One, Surface Hub and Windows Holographic. The Windows user interface was revised to handle transitions between a mouse-oriented interface and a touchscreen-optimized interface based on available input devices, particularly on 2-in-1 PCs; both interfaces include an updated Start menu. Windows 10 replaced all earlier editions of Windows. Hybrid OS operation Several hardware companies have built hybrid devices with the possibility to work with both the Android and Windows Phone operating systems (or in rare cases Windows 8.1, as with the since-cancelled Asus Transformer Book Duet), though Ars Technica stated: "dual-OS devices are always terrible products. Windows and Android almost never cross-communicate, so any dual-OS device means dealing with separate apps, data, and storage pools and completely different UI paradigms. So from a consumer perspective, Microsoft and Google are really just saving OEMs from producing tons of clunky devices that no one will want." Discontinued tablet operating systems BlackBerry 10 BlackBerry 10 (based on the QNX OS) is from BlackBerry. As a smartphone OS, it is closed-source and proprietary, and only runs on phones and tablets manufactured by BlackBerry. One of the dominant platforms in the world in the late 2000s, its global market share was reduced significantly by the mid-2010s. In late 2016, BlackBerry announced that it would continue to support the OS, with a promise to release 10.3.3. Thereafter, BlackBerry 10 would not receive any major updates, as BlackBerry and its partners would focus more on their Android-based development. BlackBerry Tablet OS BlackBerry Tablet OS is an operating system from BlackBerry Ltd based on the QNX Neutrino real-time operating system, designed to run Adobe AIR and BlackBerry WebWorks applications, available for the BlackBerry PlayBook tablet computer. The BlackBerry Tablet OS was the first tablet operating system from QNX (now a subsidiary of RIM). BlackBerry Tablet OS supports standard BlackBerry Java applications. Support for Android apps was also announced, through sandboxed "app players" which can be ported by developers or installed through sideloading by users. A BlackBerry Tablet OS Native Development Kit, for developing native applications with the GNU toolchain, was in closed beta testing. The first device to run BlackBerry Tablet OS was the BlackBerry PlayBook tablet computer. Application store Apps that do not come pre-installed with the system are supplied through online distribution. These sources, termed app stores, provide centralized catalogs of software and allow "one click" on-device software purchasing, installation and updates. 
Mobile device suppliers may adopt a "walled garden" approach, wherein the supplier controls what software applications ("apps") are available. Software development kits are restricted to approved software developers. This can be used to reduce the impact of malware, provide software with an approved content rating, control application quality and exclude competing vendors. Apple, Google, Amazon, Microsoft and Barnes & Noble all adopted the strategy. B&N originally allowed arbitrary apps to be installed, but, in December 2011, excluded third parties. Apple and IBM have agreed to cooperate in cross-selling IBM-developed applications for iPads and iPhones in enterprise-level accounts. Proponents of open source software say that the iPad (or any such "walled garden" app store approach) violates the spirit of personal control that traditional personal computers have always provided. Sales Around 2010, tablet use by businesses jumped, as businesses began to use them for conferences, events, and trade shows. In 2012, Intel reported that their tablet program improved productivity for about 19,000 of their employees by an average of 57 minutes a day. In October 2012, display screen shipments for tablets began surpassing shipments for laptop display screens. Tablets became increasingly used in the construction industry to look at blueprints, field documentation and other relevant information on the device instead of carrying around large amounts of paper. Time described the product category's popularity as a "global tablet craze" in a November 2012 article. As of the start of 2014, 44% of US online consumers owned tablets, a significant jump from 5% in 2011. Tablet use also became increasingly common among children. A 2014 survey found that mobile devices were the most frequently used object for play among American children under the age of 12. Mobile devices were used more often in play than video game consoles, board games, puzzles, play vehicles, blocks and dolls/action figures. Despite this, the majority of parents said that a mobile device was "never" or only "sometimes" a toy. As of 2014, nearly two-thirds of American 2- to 10-year-olds had access to a tablet or e-reader. A large use of tablets by adults is as a personal internet-connected TV. A 2015 study found that a third of children under five have their own tablet device. After a fast rise in sales during the early 2010s, the tablet market had plateaued by 2015, and by Q3 2018 sales had declined by 35% from the Q3 2014 peak. In spite of this, tablet sales worldwide had surpassed sales of desktop computers in 2017, and worldwide PC sales were flat for the first quarter of 2018. In 2020, the tablet market saw a large surge in sales, with 164 million tablet units shipped worldwide, due to a large demand for working from home and online learning. 2010 to 2014 figures are estimated by Gartner; 2014 to 2021 figures are estimated by IDC. By manufacturer By operating system According to a March 2012 survey by the Online Publishers Association (OPA), now called Digital Content Next (DCN), 72% of tablet owners had an iPad, while 32% had an Android tablet. By 2012, Android tablet adoption had increased: 52% of tablet owners owned an iPad, while 51% owned an Android-powered tablet (percentages do not add up to 100% because some tablet owners own more than one type). By the end of 2013, Android's market share rose to 61.9%, followed by iOS at 36%. By late 2014, Android's market share rose to 72%, followed by iOS at 22.3% and Windows at 5.7%. 
As of early 2016, Android had 65% market share, Apple 26% and Windows 9%. In Q1 2018, Android tablets had 62% of the market, Apple's iOS had 23.4% and Windows 10 had 14.6%. Source: Strategy Analytics. Use Sleep The blue wavelength of light from back-lit tablets may impact one's ability to fall asleep when reading at night, through the suppression of melatonin. Experts at Harvard Medical School suggest limiting tablet use for reading in the evening. Those who have a delayed body clock, such as teenagers, which makes them prone to stay up late in the evening and sleep later in the morning, may be at particular risk for increased sleep deficiencies. A PC app such as f.lux and Android apps such as CF.lumen and Twilight attempt to decrease the impact on sleep by filtering blue wavelengths from the display. iOS 9.3 includes Night Shift, which shifts the colors of the device's display to be warmer during the later hours. By plane Because of, among other things, the electromagnetic waves emitted by this type of device, the use of any type of electronic device during the take-off and landing phases was once totally prohibited on board commercial flights. On November 13, 2013, the European Aviation Safety Agency (EASA) announced that the use of mobile terminals could be authorized on the flights of European airlines during these phases from 2014 onwards, on the condition that cellular functions are deactivated ("airplane" mode activated). In September 2014, EASA issued guidance that allows EU airlines to permit the use of tablets, e-readers, smartphones, and other portable electronic devices to stay on, without the need to be in airplane mode, during all parts of EU flights; however, each airline has to decide whether to allow this. In the U.S., the Federal Aviation Administration allowed the use of portable electronic devices during all parts of flights while in airplane mode in late 2013. Tourism Some French historical monuments are equipped with digital tactile tablets called the "HistoPad". It is an application integrated with an iPad Mini offering interaction in augmented and virtual reality with several parts of the visit, the visitor being able to take control of their visit in an interactive and personalized way. Professional use for specific sectors Some professionals – for example, in the construction industry, insurance experts, lifeguards or surveyors – use so-called rugged models in the field that can withstand extreme hot or cold shocks or climatic environments. Some units are hardened against drops and screen breakage. Satellite-connectivity-equipped tablets, such as the Thorium X, can be used in areas where there is no other connectivity; this is a valuable feature in the aeronautical and military realms. For example, United States Army helicopter pilots are moving to tablets as electronic flight bags, which confer the advantages of rapid, convenient synchronization of large groups of users and the seamless updating of information. US Army chaplains who are deployed in the field with the troops cite the accessibility of Army regulations, field manuals, and other critical information to help with their services; however, power generation, speakers, and a tablet rucksack are also necessary for the chaplains.
Technology
Computer hardware
null
4185286
https://en.wikipedia.org/wiki/Mammaliaformes
Mammaliaformes
Mammaliaformes ("mammalian forms") is a clade of synapsid tetrapods that includes the crown group mammals and their closest extinct relatives; the group radiated from earlier probainognathian cynodonts during the Late Triassic. It is defined as the clade originating from the most recent common ancestor of Morganucodonta and the crown group mammals; the latter is the clade originating with the most recent common ancestor of extant Monotremata, Marsupialia and Placentalia. Besides Morganucodonta and the crown group mammals, Mammaliaformes also includes Docodonta and Hadrocodium. Mammaliaformes is a term of phylogenetic nomenclature. In contrast, the assignment of organisms to class Mammalia has traditionally been founded on traits and, on this basis, Mammalia is slightly more inclusive than Mammaliaformes. In particular, trait-based taxonomy generally includes Adelobasileus and Sinoconodon in Mammalia, though they fall outside the Mammaliaformes definition. These genera are included in the broader clade Mammaliamorpha, defined phylogenetically as the clade originating with the last common ancestor of Tritylodontidae and the crown group mammals. This wider group includes some families that trait-based taxonomy does not include in Mammalia, in particular Tritylodontidae and Brasilodontidae. Animals in the clade Mammaliaformes are often called mammaliaforms, without the e. Sometimes, the spelling mammaliforms is used. The origin of crown-group mammals extends back to the Jurassic, with extensive findings in the Late Jurassic outcrops of Portugal and China. The earliest confirmed specimens of fur are found in them, demonstrating that the ancestors of mammals had already developed fur. Mammaliaforms in life Early mammaliaforms were generally shrew-like in appearance and size, and most of their distinguishing characteristics were internal. In particular, the structure of the mammaliaform (and mammal) jaw and the arrangement of teeth are nearly unique. Instead of having many teeth that are frequently replaced, mammals have one set of baby teeth and later one set of adult teeth that fit together precisely. This is thought to aid in the grinding of food to make it quicker to digest. Endothermic animals require more calories than those that are ectothermic, so speeding up the pace of digestion is a necessity. The drawback to the fixed dentition is that worn teeth cannot be replaced, as was possible for the reptiliomorph ancestors of mammaliaforms. To compensate, mammals developed prismatic enamel, characterized by crystallite discontinuities that helped spread out the force of the bite. Lactation, along with other characteristically mammalian features, is also thought to characterize the Mammaliaformes, but these traits are difficult to study in the fossil record. Evidence of lactation is present in morganucodontans, via tooth replacement patterns. Combined with the more basal tritylodontids that also display evidence of lactation, this seems to imply that milk is an ancestral characteristic in this group. However, the fairly derived Sinoconodon appears to have uniquely discarded milk altogether. Prior to hatching, the milk glands would provide moisture to the leathery eggs, a situation still found in monotremes. The early mammaliaforms did have a harderian gland. In modern mammals, this is used for cleaning the fur, indicating that they, contrary to their cynodont ancestors, had a furry covering. 
An insulative covering is necessary to keep a homeothermic animal warm if it is very small, less than 5 cm (1.97 in) long, because small animals have a high surface-area-to-volume ratio and so lose heat rapidly; the 3.2 cm (1.26 in) Hadrocodium must therefore have had fur, but the 10 cm (3.94 in) Morganucodon might not have needed it. The docodont Castorocauda, further removed from crown group mammals than Hadrocodium, had two layers of fur, guard hairs and underfur, as do mammals today. It is possible that early mammaliaforms had vibrissae; Tritheledontidae, a group of cynodonts, probably had whiskers, and a common ancestor of all therian mammals certainly did. Indeed, some humans even still develop vestigial vibrissal muscles in the upper lip. Thus, it is possible that the development of the whisker sensory system played an important role in mammalian development more generally. Like monotremes today, the legs of early mammaliaforms were somewhat sprawling, giving a rather "reptilian" type of gait. However, there was a general tendency towards more erect forelimbs, forms like eutriconodonts even having a fundamentally modern forelimb anatomy while the hindlimbs remained "primitive"; this tendency is to some extent still seen in modern therian mammals, which often have more sprawling hindlimbs. In some forms, the hind feet likely bore a spur similar to those found in the platypus and echidnas. Such a spur would have been connected to a venom gland for protection or mating competition. Hadrocodium lacks the multiple bones in its lower jaw seen in reptiles. These are still retained, however, in earlier mammaliaforms. With the possible exception of Megazostrodon and Erythrotherium (as well as placental mammals), all mammaliaforms possess epipubic bones, a possible synapomorphy with tritylodontids, which also have them. These pelvic bones strengthen the torso and support abdominal and hindlimb musculature. They do, however, prevent the expansion of the abdomen, and so force species that possess them to either give birth to larval young (as in modern marsupials) or produce minuscule eggs that hatch into larval young (as in modern monotremes). Most mammaliaforms, therefore, probably had the same constraints, and some species could have borne pouches. Phylogeny The phylogeny follows the analysis of Luo and colleagues in 2015; an expanded version is based on Rougier et al. (1996), with Tikitherium included following Luo and Martin (2007). However, Tikitherium was later considered a misidentification of a Neogene shrew. (The cladograms themselves are not reproduced here.)
Biology and health sciences
Stem-mammals
Animals
4186250
https://en.wikipedia.org/wiki/Veneridae
Veneridae
The Veneridae or venerids, common name the Venus clams, are a very large family of minute to large saltwater clams, marine bivalve molluscs. Over 500 living species of venerid bivalves are known, most of which are edible, and many of which are exploited as food sources. Many of the most important edible species are commonly known (in the USA) simply as "clams". Venerids make up a significant proportion of the world fishery of edible bivalves. The family includes some species that are important commercially, such as (in the USA) the hard clam or quahog, Mercenaria mercenaria. Taxonomy The classification within the family Veneridae has been controversial at least since the 1930s. Molecular approaches show that much of the traditional classification is unnatural. Some common species have been moved between genera (including genera in different subfamilies) because of repeated attempts to bring a more valid organization to the classification or taxonomy of the family, so changes in the generic names of species are frequently encountered. The characters used for classifying this group still tend to be superficial, focusing on external features, especially those of the shell. Venerid clams are characterized as bivalves with an external posterior ligament, usually a well-demarcated anterior area known as the lunule, and three interlocking structures (called cardinal teeth) in the top of each valve; several of the subfamilies also have anterior lateral teeth, anterior to the cardinal teeth: one in the left valve, and two (sometimes obscure) in the right valve. The inner lower peripheries of the valves can be finely toothed or smooth. Classification A number of genera are recognised in the family Veneridae (the full list is not reproduced here). Description Shell sculpture tends to be primarily concentric, but radial and divaricating ornamentation (see Gafrarium), and rarely spines (Pitar lupanaria, for example), occur on some. One small subfamily, the Samarangiinae, was created for a unique and rare clam found in coral reefs with an outer covering of cemented sand or mud that texturally camouflages it while enhancing the thickness of the shell. Several venerid clams have overall shell shapes adapted to their environments. Tivela species, for example, have the triangular outline of the surf clams in other bivalve families, and often occur in surf zones. Some Dosinia species are almost disc-like in shape and reminiscent of lucinid bivalves; both types of circular bivalves tend to burrow relatively deeply into the sediment. Further reclassification is to be expected as the results of current research in molecular systematics on the group appear in the literature. Venerids have rounded or oval solid shells with the umbones (projections) inturned towards the anterior end. Three or four cardinal teeth are on each valve. The siphons are short and united except at the tips. The foot is large.
Biology and health sciences
Bivalvia
Animals
4189127
https://en.wikipedia.org/wiki/River%20ecosystem
River ecosystem
River ecosystems are flowing waters that drain the landscape, and include the biotic (living) interactions amongst plants, animals and micro-organisms, as well as the abiotic (nonliving) physical and chemical interactions of their many parts. River ecosystems are part of larger watershed networks or catchments, where smaller headwater streams drain into mid-sized streams, which progressively drain into larger river networks. The major zones in river ecosystems are determined by the river bed's gradient or by the velocity of the current. Faster-moving turbulent water typically contains greater concentrations of dissolved oxygen, which supports greater biodiversity than the slow-moving water of pools. These distinctions form the basis for the division of rivers into upland and lowland rivers. The food base of streams within riparian forests is mostly derived from the trees, but wider streams and those that lack a canopy derive the majority of their food base from algae. Anadromous fish are also an important source of nutrients. Environmental threats to rivers include loss of water, dams, chemical pollution and introduced species. A dam produces negative effects that continue down the watershed. The most important negative effects are the reduction of spring flooding, which damages wetlands, and the retention of sediment, which leads to the loss of deltaic wetlands.
River ecosystems are prime examples of lotic ecosystems. Lotic refers to flowing water, from the Latin word for washed. Lotic waters range from springs only a few centimeters wide to major rivers kilometers in width. Much of this article applies to lotic ecosystems in general, including related lotic systems such as streams and springs. Lotic ecosystems can be contrasted with lentic ecosystems, which involve relatively still terrestrial waters such as lakes, ponds, and wetlands. Together, these two ecosystems form the more general study area of freshwater or aquatic ecology.
The following unifying characteristics make the ecology of running waters unique among aquatic habitats: the flow is unidirectional; there is a state of continuous physical change; there is a high degree of spatial and temporal heterogeneity at all scales (microhabitats); the variability between lotic systems is quite high; and the biota is specialized to live with flow conditions.
Abiotic components (non-living)
The non-living components of an ecosystem are called abiotic components; examples include stone, air and soil.
Water flow
Unidirectional water flow is the key factor in lotic systems, influencing their ecology. Streamflow can, however, be continuous or intermittent. Streamflow is the result of the summative inputs from groundwater, precipitation, and overland flow. Water flow can vary between systems, ranging from torrential rapids to slow backwaters that almost seem like lentic systems. The velocity of the water column can also vary within a system and is subject to chaotic turbulence, though water velocity tends to be highest in the middle part of the stream channel (known as the thalweg). This turbulence results in divergences of flow from the mean downslope flow vector, as typified by eddy currents. The mean flow rate vector is based on the variability of friction with the bottom or sides of the channel, sinuosity, obstructions, and the incline gradient. In addition, the amount of water input into the system from direct precipitation, snowmelt, and/or groundwater can affect the flow rate.
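Where the text notes that mean flow depends on channel friction, geometry and incline, one standard way hydrologists quantify that relationship is Manning's equation, which the article does not name. The sketch below is our illustration only, using a hypothetical rectangular channel and an assumed roughness coefficient; it also anticipates the next paragraph by converting velocity to discharge as velocity times cross-sectional area.

```python
# A minimal sketch (ours, not from the article) of how channel friction and
# slope set mean velocity, using Manning's equation, a standard open-channel
# hydraulics formula that the text alludes to but does not name.
def manning_velocity(n: float, hydraulic_radius_m: float, slope: float) -> float:
    """Mean velocity (m/s): v = (1/n) * R^(2/3) * S^(1/2), SI units."""
    return (1.0 / n) * hydraulic_radius_m ** (2.0 / 3.0) * slope ** 0.5

# Hypothetical rectangular channel: 4 m wide, 0.5 m deep, gravel bed
# (roughness n ~ 0.035), dropping 2 m per km (slope 0.002).
width, depth = 4.0, 0.5
area = width * depth                        # cross-sectional area (m^2)
wetted_perimeter = width + 2 * depth        # bed plus both banks (m)
v = manning_velocity(0.035, area / wetted_perimeter, 0.002)
print(f"mean velocity ~ {v:.2f} m/s, discharge ~ {v * area:.2f} m^3/s")
# -> roughly 0.69 m/s and 1.4 m^3/s for this assumed channel
```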
The amount of water in a stream is measured as discharge (volume per unit time). As water flows downstream, streams and rivers most often gain water volume, so at base flow (i.e., no storm input), smaller headwater streams have very low discharge, while larger rivers have much higher discharge. The "flow regime" of a river or stream includes the general patterns of discharge over annual or decadal time scales, and may capture seasonal changes in flow. While water flow is strongly determined by slope, flowing waters can alter the general shape or direction of the stream bed, a characteristic also known as geomorphology. The shaping of the river bed involves three primary actions: erosion, transport, and deposition. Rivers have been described as "the gutters down which run the ruins of continents". Rivers are continuously eroding, transporting, and depositing substrate, sediment, and organic material. The continuous movement of water and entrained material creates a variety of habitats, including riffles, glides, and pools.
Light
Light is important to lotic systems because it provides the energy necessary to drive primary production via photosynthesis, and can also provide refuge for prey species in the shadows it casts. The amount of light that a system receives can be related to a combination of internal and external stream variables. The area surrounding a small stream, for example, might be shaded by surrounding forests or by valley walls. Larger river systems tend to be wide, so the influence of external variables is minimized and the sun reaches the surface. These rivers also tend to be more turbid, however, and particles in the water increasingly attenuate light as depth increases. Seasonal and diurnal factors might also play a role in light availability, because the angle of incidence (the angle at which light strikes the water) determines how much light is lost to reflection: the shallower the angle, the more light is reflected. Within the water column, the amount of solar radiation received declines exponentially with depth, a relationship known as Beer's law. Additional influences on light availability include cloud cover, altitude, and geographic position.
Temperature
Most lotic species are poikilotherms whose internal temperature varies with their environment; temperature is therefore a key abiotic factor for them. Water can be heated or cooled through radiation at the surface and conduction to or from the air and surrounding substrate. Shallow streams are typically well mixed and maintain a relatively uniform temperature within an area. In deeper, slower-moving water systems, however, a strong difference between the bottom and surface temperatures may develop. Spring-fed systems have little variation, as springs are typically fed from groundwater sources, which are often very close to ambient temperature. Many systems show strong diurnal fluctuations, and seasonal variations are most extreme in arctic, desert and temperate systems. The amount of shading, climate and elevation can also influence the temperature of lotic systems.
Chemistry
Water chemistry in river ecosystems varies depending on which dissolved solutes and gases are present in the water column of the stream.
Specifically, river water can include, apart from the water itself: dissolved inorganic matter and major ions (calcium, sodium, magnesium, potassium, bicarbonate, sulphate, chloride); dissolved inorganic nutrients (nitrogen, phosphorus, silica); suspended and dissolved organic matter; gases (nitrogen, nitrous oxide, carbon dioxide, oxygen); and trace metals and pollutants.
Dissolved ions and nutrients
Dissolved stream solutes can be considered either reactive or conservative. Reactive solutes are readily assimilated biologically by the autotrophic and heterotrophic biota of the stream; examples include inorganic nitrogen species such as nitrate or ammonium, some forms of phosphorus (e.g., soluble reactive phosphorus), and silica. Other solutes are considered conservative, which indicates that the solute is not taken up and used biologically; chloride is often considered a conservative solute. Conservative solutes are often used as hydrologic tracers for water movement and transport. Both reactive and conservative stream water chemistry is determined foremost by inputs from the geology of the watershed, or catchment area. Stream water chemistry can also be influenced by precipitation and by the addition of pollutants from human sources. Large differences in chemistry do not usually exist within small lotic systems due to a high rate of mixing. In larger river systems, however, the concentrations of most nutrients and dissolved salts, as well as pH, decrease as distance increases from the river's source.
Dissolved gases
In terms of dissolved gases, oxygen is likely the most important chemical constituent of lotic systems, as all aerobic organisms require it for survival. It enters the water mostly via diffusion at the water-air interface. Oxygen's solubility in water decreases as water pH and temperature increase. Fast, turbulent streams expose more of the water's surface area to the air and tend to have low temperatures, and thus more oxygen, than slow backwaters. Oxygen is a byproduct of photosynthesis, so systems with a high abundance of aquatic algae and plants may also have high concentrations of oxygen during the day. These levels can decrease significantly during the night, when primary producers switch to respiration. Oxygen can be limiting if circulation between the surface and deeper layers is poor, if the activity of lotic animals is very high, or if there is a large amount of organic decay occurring.
Suspended matter
Rivers can also transport suspended inorganic and organic matter. These materials can include sediment or terrestrially derived organic matter that falls into the stream channel. Often, organic matter is processed within the stream via mechanical fragmentation, consumption and grazing by invertebrates, and microbial decomposition. Leaves and woody debris break down from recognizable coarse particulate organic matter (CPOM) into smaller particulate organic matter (POM), down to fine particulate organic matter (FPOM). Woody and non-woody plants have different in-stream breakdown rates, with leafy plants or plant parts (e.g., flower petals) breaking down faster than woody logs or branches.
Substrate
The inorganic substrate of lotic systems is composed of the geologic material present in the catchment that is eroded, transported, sorted, and deposited by the current. Inorganic substrates are classified by size on the Wentworth scale, which ranges from boulders, to pebbles, to gravel, to sand, and to silt.
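As a minimal sketch of this size classification (our illustration; the class boundaries are the standard Wentworth values in millimetres, slightly simplified by folding clay into the finest class):

```python
# A minimal sketch (ours, not from the article) of classifying inorganic
# substrate by particle size on the Wentworth scale; boundaries below are
# the standard Wentworth values in millimetres, slightly simplified.
WENTWORTH_CLASSES = [          # (upper size limit in mm, class name)
    (0.0625, "silt/clay"),
    (2.0,    "sand"),
    (4.0,    "granule (fine gravel)"),
    (64.0,   "pebble"),
    (256.0,  "cobble"),
]

def wentworth_class(diameter_mm: float) -> str:
    """Return the Wentworth size class for a particle diameter."""
    for upper_limit, name in WENTWORTH_CLASSES:
        if diameter_mm < upper_limit:
            return name
    return "boulder"           # anything 256 mm and above

print(wentworth_class(0.5))    # sand
print(wentworth_class(30.0))   # pebble
print(wentworth_class(500.0))  # boulder
```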
Typically, substrate particle size decreases downstream, with larger boulders and stones in more mountainous areas and sandy bottoms in lowland rivers. This is because the higher gradients of mountain streams facilitate a faster flow, moving smaller substrate materials further downstream for deposition. Substrate can also be organic and may include fine particles, autumn-shed leaves, large woody debris such as submerged tree logs, moss, and semi-aquatic plants. Substrate deposition is not necessarily a permanent event, as it can be subject to large modifications during flooding events.
Biotic components (living)
The living components of an ecosystem are called the biotic components. Streams have numerous types of biotic organisms that live in them, including bacteria, primary producers, insects and other invertebrates, as well as fish and other vertebrates.
Microorganisms
Bacteria are present in large numbers in lotic waters. Free-living forms are associated with decomposing organic material, with biofilm on the surfaces of rocks and vegetation, with the spaces between particles that compose the substrate, and with the water column, in which they are suspended. Other forms are also associated with the guts of lotic organisms as parasites or in commensal relationships. Bacteria play a large role in energy recycling (see below). Diatoms are one of the main dominant groups of periphytic algae in lotic systems and have been widely used as efficient indicators of water quality, because they respond quickly to environmental changes, especially organic pollution and eutrophication, and have a broad spectrum of tolerances to conditions ranging from oligotrophic to eutrophic.
Biofilm
A biofilm is a combination of algae (diatoms etc.), fungi, bacteria, and other small microorganisms that exist in a film along the streambed or the benthos. Biofilm assemblages themselves are complex, and add to the complexity of a streambed. The different biofilm components (algae and bacteria are the principal components) are embedded in an exopolysaccharide matrix (EPS); they are net receptors of inorganic and organic elements and remain subject to the influence of different environmental factors. Biofilms are one of the main biological interfaces in river ecosystems, and probably the most important in intermittent rivers, where the importance of the water column is reduced during extended low-activity periods of the hydrological cycle. Biofilms can be understood as microbial consortia of autotrophs and heterotrophs coexisting in a matrix of hydrated extracellular polymeric substances (EPS). These two main biological components are, respectively, mainly algae and cyanobacteria on the one hand, and bacteria and fungi on the other. Micro- and meiofauna also inhabit the biofilm, preying on its organisms and organic particles and contributing to its evolution and dispersal. Biofilms therefore form a highly active biological consortium, ready to use organic and inorganic materials from the water phase, and also ready to use light or chemical energy sources. The EPS immobilizes the cells and keeps them in close proximity, allowing for intense interactions including cell-cell communication and the formation of synergistic consortia. The EPS is able to retain extracellular enzymes and therefore allows the utilization of materials from the environment and the transformation of these materials into dissolved nutrients for use by algae and bacteria.
At the same time, the EPS helps protect the cells from desiccation, as well as from other external hazards (e.g., biocides and UV radiation). On the other hand, the packing and the EPS protection layer limit the diffusion of gases and nutrients, especially for the cells far from the biofilm surface; this limits their survival and creates strong gradients within the biofilm. Both the physical structure of the biofilm and the plasticity of the organisms that live within it ensure and support their survival in harsh environments or under changing environmental conditions.
Primary producers
Algae, consisting of phytoplankton and periphyton, are the most significant sources of primary production in most streams and rivers. Phytoplankton float freely in the water column and thus are unable to maintain populations in fast-flowing streams. They can, however, develop sizeable populations in slow-moving rivers and backwaters. Periphyton are typically filamentous and tufted algae that can attach themselves to objects to avoid being washed away by fast currents. In places where flow rates are negligible or absent, periphyton may form a gelatinous, unanchored floating mat.
Plants exhibit limited adaptations to fast flow and are most successful in reduced currents. More primitive plants, such as mosses and liverworts, attach themselves to solid objects. This typically occurs in colder headwaters, where the mostly rocky substrate offers attachment sites. Some plants, such as duckweed or water hyacinth, float free at the water's surface in dense mats. Others are rooted and may be classified as submerged or emergent. Rooted plants usually occur in areas of slackened current where fine-grained soils are found. These rooted plants are flexible, with elongated leaves that offer minimal resistance to current. Living in flowing water can be beneficial to plants and algae because the moving water is usually well aerated and provides a continuous supply of nutrients. These organisms are limited by flow, light, water chemistry, substrate, and grazing pressure. Algae and plants are important to lotic systems as sources of energy, for forming microhabitats that shelter other fauna from predators and the current, and as a food resource.
Insects and other invertebrates
Up to 90% of invertebrates in some lotic systems are insects. These species exhibit tremendous diversity and can be found occupying almost every available habitat, including the surfaces of stones, deep below the substratum in the hyporheic zone, adrift in the current, and in the surface film. Insects have developed several strategies for living in the diverse flows of lotic systems. Some avoid high-current areas, inhabiting the substratum or the sheltered side of rocks. Others have flat bodies to reduce the drag forces they experience from living in running water. Some insects, like the giant water bug (Belostomatidae), avoid flood events by leaving the stream when they sense rainfall. In addition to these behaviors and body shapes, insects have different life history adaptations to cope with the naturally occurring physical harshness of stream environments. Some insects time their life events around when floods and droughts occur. For example, some mayflies synchronize their emergence as flying adults with the time when snowmelt flooding usually occurs in Colorado streams. Other insects do not have a flying stage and spend their entire life cycle in the river.
Like most of the primary consumers, lotic invertebrates often rely heavily on the current to bring them food and oxygen. Invertebrates are important as both consumers and prey items in lotic systems. The common orders of insects found in river ecosystems include Ephemeroptera (mayflies), Trichoptera (caddisflies), Plecoptera (stoneflies), Diptera (true flies), some Coleoptera (beetles), Odonata (the group that includes dragonflies and damselflies), and some Hemiptera (true bugs). Additional invertebrate taxa common to flowing waters include mollusks such as snails, limpets, clams, and mussels, as well as crustaceans such as crayfish, amphipods and crabs.
Fish and other vertebrates
Fish are probably the best-known inhabitants of lotic systems. The ability of a fish species to live in flowing waters depends upon the speed at which it can swim and the duration for which that speed can be maintained. This ability can vary greatly between species and is tied to the habitat in which it can survive. Continuous swimming expends a tremendous amount of energy, and fishes therefore spend only short periods in full current. Instead, individuals remain close to the bottom or the banks, behind obstacles, and sheltered from the current, swimming in the current only to feed or change locations. Some species have adapted to living only on the system bottom, never venturing into the open water flow. These fishes are dorso-ventrally flattened to reduce flow resistance and often have eyes on top of their heads to observe what is happening above them. Some also have sensory barbels positioned under the head to assist in testing the substratum. Lotic systems typically connect to each other, forming a path to the ocean (spring → stream → river → ocean), and many fishes have life cycles that require stages in both fresh and salt water. Salmon, for example, are anadromous species that are born in freshwater but spend most of their adult life in the ocean, returning to fresh water only to spawn. Eels are catadromous species that do the opposite, living in freshwater as adults but migrating to the ocean to spawn. Other vertebrate taxa that inhabit lotic systems include amphibians such as salamanders, reptiles (e.g. snakes, turtles, crocodiles and alligators), various bird species, and mammals (e.g., otters, beavers, hippos, and river dolphins). With the exception of a few species, these vertebrates are not tied to water as fishes are, and spend part of their time in terrestrial habitats. Many fish species are important as consumers and as prey species to the larger vertebrates mentioned above.
Trophic level dynamics
The concept of trophic levels is used in food webs to visualise the manner in which energy is transferred from one part of an ecosystem to another. Trophic levels can be assigned numbers determining how far an organism is along the food chain.
Level one: producers, plant-like organisms that generate their own food using solar radiation, including algae, phytoplankton, mosses and lichens.
Level two: consumers, animal-like organisms that get their energy from eating producers, such as zooplankton, small fish, and crustaceans.
Level three: decomposers, organisms that break down the dead matter of consumers and producers and return the nutrients to the system; examples are bacteria and fungi.
All energy transactions within an ecosystem derive from a single external source of energy, the sun.
Some of this solar radiation is used by producers (plants) to turn inorganic substances into organic substances which can be used as food by consumers (animals). Plants release portions of this energy back into the ecosystem through catabolic processes. Animals then consume the potential energy stored by the producers. This system is followed by the death of the consumer organisms, which returns nutrients to the ecosystem, allowing further growth for the plants, and the cycle continues. Breaking cycles down into levels makes it easier for ecologists to understand ecological succession when observing the transfer of energy within a system.
Top-down and bottom-up effects
A common issue with trophic level dynamics is how resources and production are regulated. The use of and interaction between resources have a large impact on the structure of food webs as a whole. Temperature plays a role in food web interactions, including top-down and bottom-up forces within ecological communities. Bottom-up regulation within a food web occurs when a resource available at the base or bottom of the food web increases productivity, which then climbs the chain and influences the biomass available to higher trophic organisms. Top-down regulation occurs when a predator population increases; this limits the available prey population, which in turn limits the availability of energy for lower trophic levels within the food chain. Many biotic and abiotic factors can influence top-down and bottom-up interactions.
Trophic cascade
Another example of food web interactions are trophic cascades. Understanding trophic cascades has allowed ecologists to better understand the structure and dynamics of food webs within an ecosystem. The phenomenon of trophic cascades allows keystone predators to structure entire food webs in terms of how they interact with their prey. Trophic cascades can cause drastic changes in the energy flow within a food web. For example, when a top or keystone predator consumes organisms below it in the food web, the density and behavior of the prey will change. This, in turn, affects the abundance of organisms consumed further down the chain, resulting in a cascade down the trophic levels. Empirical evidence suggests, however, that trophic cascades are much more prevalent in aquatic food webs than in terrestrial ones.
Food chain
A food chain is a linear system of links that is part of a food web, and represents the order in which organisms are consumed from one trophic level to the next. Each link in a food chain is associated with a trophic level in the ecosystem. The number of steps it takes for the initial source of energy, starting from the bottom, to reach the top of the food web is called the food chain length (see the sketch below). While food chain lengths can fluctuate, aquatic ecosystems start with primary producers that are consumed by primary consumers, which are consumed by secondary consumers, and those in turn can be consumed by tertiary consumers, and so forth, until the top of the food chain has been reached.
Primary producers
Primary producers start every food chain. Their production of energy and nutrients comes from the sun through photosynthesis. Algae contribute much of the energy and nutrients at the base of the food chain, along with terrestrial litter-fall that enters the stream or river. The production of organic carbon compounds is what gets transferred up the food chain. Primary producers are consumed by herbivorous invertebrates that act as the primary consumers. Productivity of these producers and the function of the ecosystem as a whole are influenced by the organisms above them in the food chain.
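To make the notion of food chain length concrete, here is a toy sketch (our illustration, not from the article; the web below is a hypothetical river food web) that counts the longest number of feeding links from a basal producer up to a given consumer.

```python
# A toy illustration (ours, not from the article) of food chain length:
# the number of consumption steps from a basal producer to a top consumer.
# The web below is a hypothetical river food web, not data from the text.
FOOD_WEB = {                     # consumer -> list of things it eats
    "algae": [],
    "mayfly nymph": ["algae"],
    "caddisfly larva": ["algae"],
    "sculpin": ["mayfly nymph", "caddisfly larva"],
    "trout": ["sculpin", "mayfly nymph"],
}

def chain_length(species: str) -> int:
    """Longest number of feeding links from a producer up to `species`."""
    prey = FOOD_WEB[species]
    if not prey:                 # producers sit at the base (length 0)
        return 0
    return 1 + max(chain_length(p) for p in prey)

print(chain_length("trout"))     # 3: algae -> mayfly -> sculpin -> trout
```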
Primary consumers
Primary consumers are the invertebrates and macro-invertebrates that feed upon the primary producers. They play an important role in initiating the transfer of energy from the base trophic level to the next. They are regulatory organisms which facilitate and control rates of nutrient cycling and the mixing of aquatic and terrestrial plant materials. They also transport and retain some of those nutrients and materials. There are many different functional groups of these invertebrates: grazers, which feed on the algal biofilm that collects on submerged objects; shredders, which feed on large leaves and detritus and help break down large material; filter feeders, macro-invertebrates that rely on stream flow to deliver them fine particulate organic matter (FPOM) suspended in the water column; and gatherers, which feed on FPOM found on the substrate of the river or stream.
Secondary consumers
The secondary consumers in a river ecosystem are the predators of the primary consumers, mainly insectivorous fish. Consumption of invertebrate insects and macro-invertebrates is another step of energy flow up the food chain. Depending on their abundance, these predatory consumers can shape an ecosystem by the manner in which they affect the trophic levels below them. When fish are at high abundance and eat many invertebrates, algal biomass and primary production in the stream are greater; when secondary consumers are not present, algal biomass may decrease due to the high abundance of primary consumers. The energy and nutrients that start with primary producers continue to make their way up the food chain and, depending on the ecosystem, may end with these predatory fish.
Food web complexity
Diversity, productivity, species richness, composition and stability are all interconnected by a series of feedback loops. Communities can have a series of complex, direct and/or indirect responses to major changes in biodiversity. Food webs can include a wide array of variables; the three main variables ecologists look at regarding ecosystems are species richness, biomass or productivity, and stability or resistance to change. When a species is added to or removed from an ecosystem, it will have an effect on the remaining food web; the intensity of this effect is related to species connectedness and food web robustness. When a new species is added to a river ecosystem, the intensity of the effect is related to the robustness, or resistance to change, of the current food web. When a species is removed from a river ecosystem, the intensity of the effect is related to the connectedness of the species to the food web. An invasive species could be removed with little to no effect, but if important native primary producers, prey or predatory fish are removed, a negative trophic cascade may result. One highly variable component of river ecosystems is food supply (the biomass of primary producers). The food supply or type of producers is ever changing with the seasons and the differing habitats within the river ecosystem. Another highly variable component of river ecosystems is nutrient input from wetland and terrestrial detritus. Variability in food and nutrient supply is important for the succession, robustness and connectedness of river ecosystem organisms.
Trophic relationships
Energy inputs
Energy sources can be autochthonous or allochthonous.
Autochthonous (from the Greek "auto" = "self") energy sources are those derived from within the lotic system. During photosynthesis, for example, primary producers form organic carbon compounds out of carbon dioxide and inorganic matter. The energy they produce is important for the community because it may be transferred to higher trophic levels via consumption. Additionally, high rates of primary production can introduce dissolved organic matter (DOM) to the waters. Another form of autochthonous energy comes from the decomposition of dead organisms and feces that originate within the lotic system. In this case, bacteria decompose the detritus or coarse particulate organic material (CPOM; >1 mm pieces) into fine particulate organic matter (FPOM; <1 mm pieces) and then further into inorganic compounds that are required for photosynthesis. This process is discussed in more detail below.
Allochthonous energy sources are those derived from outside the lotic system, that is, from the terrestrial environment. Leaves, twigs, fruits, etc. are typical forms of terrestrial CPOM that have entered the water by direct litter fall or lateral leaf blow. In addition, terrestrial animal-derived materials, such as feces or carcasses that have been added to the system, are examples of allochthonous CPOM. The CPOM undergoes a specific process of degradation. Allan gives the example of a leaf fallen into a stream. First, the soluble chemicals are dissolved and leached from the leaf upon its saturation with water. This adds to the DOM load in the system. Next, microbes such as bacteria and fungi colonize the leaf, softening it as the mycelium of the fungus grows into it. The composition of the microbial community is influenced by the species of tree from which the leaves are shed (Rubbo and Kiesecker 2004). This combination of bacteria, fungi, and leaf is a food source for shredding invertebrates, which leave only FPOM after consumption. These fine particles may be colonized by microbes again or serve as a food source for animals that consume FPOM. Organic matter can also enter the lotic system already in the FPOM stage by wind, surface runoff, bank erosion, or groundwater. Similarly, DOM can be introduced through canopy drip from rain or from surface flows.
Invertebrates
Invertebrates can be organized into many feeding guilds in lotic systems. Some species are shredders, which use large and powerful mouthparts to feed on non-woody CPOM and its associated microorganisms. Others are suspension feeders, which use their setae, filtering apparatuses, nets, or even secretions to collect FPOM and microbes from the water. These species may be passive collectors, utilizing the natural flow of the system, or they may generate their own current to draw in water and FPOM (Allan). Members of the gatherer-collector guild actively search for FPOM under rocks and in other places where the stream flow has slackened enough to allow deposition. Grazing invertebrates utilize scraping, rasping, and browsing adaptations to feed on periphyton and detritus. Finally, several families are predatory, capturing and consuming animal prey. Both the number of species and the abundance of individuals within each guild are largely dependent upon food availability. Thus, these values may vary across both seasons and systems.
Fish
Fish can also be placed into feeding guilds. Planktivores pick plankton out of the water column. Herbivore-detritivores are bottom-feeding species that ingest both periphyton and detritus indiscriminately.
Surface and water column feeders capture surface prey (mainly terrestrial and emerging insects) and drift (benthic invertebrates floating downstream). Benthic invertebrate feeders prey primarily on immature insects, but will also consume other benthic invertebrates. Top predators consume fishes and/or large invertebrates. Omnivores ingest a wide range of prey, which can be floral, faunal, and/or detrital in nature. Finally, parasites live off of host species, typically other fishes. Fish are flexible in their feeding roles, capturing different prey according to seasonal availability and their own developmental stage. Thus, they may occupy multiple feeding guilds in their lifetime. The number of species in each guild can vary greatly between systems, with temperate warm-water streams having the most benthic invertebrate feeders, and tropical systems having large numbers of detritus feeders due to high rates of allochthonous input.
Community patterns and diversity
Local species richness
Large rivers have comparatively more species than small streams. Many relate this pattern to the greater area and volume of larger systems, as well as to an increase in habitat diversity. Some systems, however, show a poor fit between system size and species richness. In these cases, a combination of factors such as historical rates of speciation and extinction, type of substrate, microhabitat availability, water chemistry, temperature, and disturbance such as flooding seem to be important.
Resource partitioning
Although many alternative theories have been postulated for the ability of guild-mates to coexist (see Morin 1999), resource partitioning has been well documented in lotic systems as a means of reducing competition. The three main types of resource partitioning are habitat, dietary, and temporal segregation.
Habitat segregation was found to be the most common type of resource partitioning in natural systems (Schoener, 1974). In lotic systems, microhabitats provide a level of physical complexity that can support a diverse array of organisms (Vinson and Hawkins, 1998). The separation of species by substrate preferences has been well documented for invertebrates. Ward (1992) was able to divide substrate dwellers into six broad assemblages, including those that live in coarse substrate, gravel, sand, mud, and woody debris, and those associated with plants, showing one layer of segregation. On a smaller scale, further habitat partitioning can occur on or around a single substrate, such as a piece of gravel. Some invertebrates prefer the high-flow areas on the exposed top of the gravel, while others reside in the crevices between one piece of gravel and the next, while still others live on the bottom of the gravel piece.
Dietary segregation is the second-most common type of resource partitioning. High degrees of morphological specialization or behavioral differences allow organisms to use specific resources. The size of nets built by some species of invertebrate suspension feeders, for example, can filter varying particle sizes of FPOM from the water (Edington et al. 1984). Similarly, members of the grazing guild can specialize in the harvesting of algae or detritus, depending upon the morphology of their scraping apparatus. In addition, certain species seem to show a preference for specific algal species.
Temporal segregation is a less common form of resource partitioning, but it is nonetheless an observed phenomenon.
Typically, it accounts for coexistence by relating it to differences in life history patterns and the timing of maximum growth among guild mates. Tropical fishes in Borneo, for example, have shifted to shorter life spans in response to the reduction in ecological niches felt with increasing levels of species richness in their ecosystem (Watson and Balon 1984).
Persistence and succession
Over long time scales, there is a tendency for species composition in pristine systems to remain in a stable state. This has been found for both invertebrate and fish species. On shorter time scales, however, flow variability and unusual precipitation patterns decrease habitat stability and can lead to declines in persistence levels. The ability to maintain this persistence over long time scales is related to the ability of lotic systems to return to the original community configuration relatively quickly after a disturbance (Townsend et al. 1987). This is one example of temporal succession, a site-specific change in a community involving changes in species composition over time. Another form of temporal succession might occur when a new habitat is opened up for colonization. In these cases, an entirely new community that is well adapted to the conditions found in this new area can establish itself.
River continuum concept
The river continuum concept (RCC) was an attempt to construct a single framework to describe the function of temperate lotic ecosystems from the headwaters to larger rivers and to relate key characteristics to changes in the biotic community (Vannote et al. 1980). The physical basis for the RCC is size and location along the gradient from a small stream eventually linked to a large river. Stream order (see characteristics of streams) is used as the physical measure of position along the RCC.
According to the RCC, low-ordered sites are small, shaded streams where allochthonous inputs of CPOM are a necessary resource for consumers. As the river widens at mid-ordered sites, energy inputs should change. Ample sunlight should reach the bottom in these systems to support significant periphyton production. Additionally, the biological processing of CPOM (coarse particulate organic matter larger than 1 mm) inputs at upstream sites is expected to result in the transport of large amounts of FPOM (fine particulate organic matter smaller than 1 mm) to these downstream ecosystems. Plants should become more abundant at the edges of the river with increasing river size, especially in lowland rivers where finer sediments have been deposited and facilitate rooting. The main channels likely have too much current and turbidity, and a lack of substrate, to support plants or periphyton. Phytoplankton should produce the only autochthonous inputs here, but photosynthetic rates will be limited due to turbidity and mixing. Thus, allochthonous inputs are expected to be the primary energy source for large rivers. This FPOM will come both from upstream sites via the decomposition process and through lateral inputs from floodplains.
Biota should change with this change in energy from the headwaters to the mouth of these systems. Namely, shredders should prosper in low-ordered systems and grazers in mid-ordered sites. Microbial decomposition should play the largest role in energy production for low-ordered sites and large rivers, while photosynthesis, in addition to degraded allochthonous inputs from upstream, will be essential in mid-ordered systems.
As mid-ordered sites will theoretically receive the largest variety of energy inputs, they might be expected to host the most biological diversity (Vannote et al. 1980).
Just how well the RCC actually reflects patterns in natural systems is uncertain, and its generality can be a handicap when applied to diverse and specific situations. The most noted criticisms of the RCC are: 1. It focuses mostly on macroinvertebrates, disregarding that plankton and fish diversity is highest in high orders; 2. It relies heavily on the fact that low-ordered sites have high CPOM inputs, even though many streams lack riparian habitats; 3. It is based on pristine systems, which rarely exist today; and 4. It is centered around the functioning of temperate streams. Despite its shortcomings, the RCC remains a useful idea for describing how the patterns of ecological functions in a lotic system can vary from the source to the mouth. Disturbances such as impoundment by dams or natural events such as shore flooding are not included in the RCC model. Various researchers have since expanded the model to account for such irregularities. For example, J.V. Ward and J.A. Stanford came up with the Serial Discontinuity Concept in 1983, which addresses the impact of geomorphological disturbances such as impoundment and integrated inflows. The same authors presented the Hyporheic Corridor concept in 1993, in which the vertical (in depth) and lateral (from shore to shore) structural complexity of the river were connected. The flood pulse concept, developed by W. J. Junk in 1989 and further modified by P. B. Bayley in 1990 and K. Tockner in 2000, takes into account the large amount of nutrients and organic material that makes its way into a river from the sediment of surrounding flooded land.
Human impacts
Humans exert a geomorphic force that now rivals that of the natural Earth. The period of human dominance has been termed the Anthropocene, and several dates have been proposed for its onset. Many researchers have emphasised the dramatic changes associated with the Industrial Revolution in Europe after about 1750 CE (Common Era) and the Great Acceleration in technology at about 1950 CE. However, a detectable human imprint on the environment extends back for thousands of years, and an emphasis on recent changes minimises the enormous landscape transformation caused by humans in antiquity. Important earlier human effects with significant environmental consequences include megafaunal extinctions between 14,000 and 10,500 cal yr BP; domestication of plants and animals close to the start of the Holocene at 11,700 cal yr BP; agricultural practices and deforestation at 10,000 to 5000 cal yr BP; and widespread generation of anthropogenic soils at about 2000 cal yr BP. Key evidence of early anthropogenic activity is encoded in early fluvial successions, long predating the anthropogenic effects that have intensified over the past centuries and led to the modern worldwide river crisis.
Pollution
River pollution can include, but is not limited to: increased sediment export, excess nutrients from fertilizer or urban runoff, sewage and septic inputs, plastic pollution, nano-particles, pharmaceuticals and personal care products, synthetic chemicals, road salt, inorganic contaminants (e.g., heavy metals), and even heat via thermal pollution. The effects of pollution often depend on the context and material, but can reduce ecosystem functioning, limit ecosystem services, reduce stream biodiversity, and impact human health.
Pollutant sources of lotic systems are hard to control because they can originate, often in small amounts, over a very wide area and enter the system at many locations along its length. While direct pollution of lotic systems has been greatly reduced in the United States under the government's Clean Water Act, contaminants from diffuse non-point sources remain a large problem. Agricultural fields often deliver large quantities of sediments, nutrients, and chemicals to nearby streams and rivers. Urban and residential areas can also add to this pollution when contaminants accumulate on impervious surfaces such as roads and parking lots and then drain into the system. Elevated nutrient concentrations, especially of nitrogen and phosphorus, which are key components of fertilizers, can increase periphyton growth, which can be particularly dangerous in slow-moving streams. Another pollutant, acid rain, forms from sulfur dioxide and nitrogen oxides emitted from factories and power stations. These substances readily dissolve in atmospheric moisture and enter lotic systems through precipitation. This can lower the pH of these sites, affecting all trophic levels from algae to vertebrates. Mean species richness and total species numbers within a system decrease with decreasing pH.
Flow modification
Flow modification can occur as a result of dams, water regulation and extraction, channel modification, and the destruction of the river floodplain and adjacent riparian zones. Dams alter the flow, temperature, and sediment regime of lotic systems. Additionally, many rivers are dammed at multiple locations, amplifying the impact. Dams can cause enhanced clarity and reduced variability in stream flow, which in turn cause an increase in periphyton abundance. Invertebrates immediately below a dam can show reductions in species richness due to an overall reduction in habitat heterogeneity. Also, thermal changes can affect insect development, with abnormally warm winter temperatures obscuring cues to break egg diapause and overly cool summer temperatures leaving too few acceptable days to complete growth. Finally, dams fragment river systems, isolating previously continuous populations and preventing the migrations of anadromous and catadromous species.
Invasive species
Invasive species have been introduced to lotic systems through both purposeful events (e.g. stocking game and food species) and unintentional events (e.g. hitchhikers on boats or fishing waders). These organisms can affect natives via competition for prey or habitat, predation, habitat alteration, hybridization, or the introduction of harmful diseases and parasites. Once established, these species can be difficult to control or eradicate, particularly because of the connectivity of lotic systems. Invasive species can be especially harmful in areas that have endangered biota, such as mussels in the southeastern United States, or those that have localized endemic species, like lotic systems west of the Rocky Mountains, where many species evolved in isolation.
Physical sciences
Hydrology
Earth science
7282792
https://en.wikipedia.org/wiki/Shapley%20Supercluster
Shapley Supercluster
The Shapley Supercluster or Shapley Concentration (SCl 124) is the largest concentration of galaxies in our nearby universe that forms a gravitationally interacting unit, thereby pulling itself together instead of expanding with the universe. It appears as a striking overdensity in the distribution of galaxies in the constellation of Centaurus. It is 650 million light-years away (z = 0.046).
History
In 1930, Harlow Shapley and his colleagues at the Harvard College Observatory started a survey of galaxies in the southern sky, using photographic plates obtained at the 24-inch Bruce telescope at Bloemfontein, South Africa. By 1932, Shapley reported the discovery of 76,000 galaxies brighter than 18th apparent magnitude in a third of the southern sky, based on galaxy counts from his plates. Some of this data was later published as part of the Harvard galaxy counts, intended to map galactic obscuration and to find the space density of galaxies. In this catalog, Shapley could see most of the 'Coma-Virgo cloud' (now known to be a superposition of the Coma Supercluster and the Virgo Supercluster), but found a 'cloud' in the constellation of Centaurus to be the most striking concentration of galaxies. He found it particularly interesting because of its "great linear dimension, the numerous population and distinctly elongated form". This can be identified with what we now know as the core of the Shapley Supercluster. Shapley estimated the distance to this cloud to be 14 times that to the Virgo Cluster, from the average diameters of the galaxies. This would place the Shapley Supercluster at a distance of 231 Mpc, based on the current estimate of the distance to Virgo. The Shapley Supercluster was later so named by Somak Raychaudhury, in a survey of galaxies from UK Schmidt Telescope sky survey plates, using the Automated Plate Measuring Facility (APM) at the University of Cambridge in England. In that paper, the supercluster was named after Harlow Shapley, in recognition of his pioneering survey of galaxies in which this concentration of galaxies was first seen. Around the same time, Roberto Scaramella and co-workers had also noticed the Shapley Supercluster in the Abell catalogue of clusters of galaxies; they had named it the Alpha concentration.
Current interest
The Shapley Supercluster lies very close to the direction in which the Local Group of galaxies (including our galaxy) is moving with respect to the cosmic microwave background (CMB) frame of reference. This has led many to speculate that the Shapley Supercluster may be one of the major causes of our galaxy's peculiar motion—the Great Attractor may be another—and has led to a surge of interest in this supercluster. It has been found that the Great Attractor and all the galaxies in our region of the universe (including our galaxy, the Milky Way) are moving toward the Shapley Supercluster. In 2017 it was proposed that the movement towards attractors like the Shapley Attractor in the supercluster creates a relative movement away from underdense areas, which may be visualized as a virtual repeller. This approach enables new ways of understanding and modelling variations in galactic movements. The nearest large underdense area has been labelled the dipole repeller.
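As a check on the distance figures quoted above, the following worked sketch (our illustration, not from the source) connects the redshift to the quoted 650 million light-years via Hubble's law, and recomputes the 231 Mpc figure from Shapley's "14 times Virgo" estimate; the Hubble constant and the roughly 16.5 Mpc Virgo Cluster distance are assumed values.

```python
# A worked sketch (ours, not from the article) connecting the quoted figures:
# Hubble's law d = cz / H0 turns the supercluster's redshift into a distance,
# and Shapley's "14 x Virgo" estimate is checked against an assumed modern
# Virgo Cluster distance of roughly 16.5 Mpc.
C_KM_S = 299_792.458       # speed of light (km/s)
H0 = 70.0                  # assumed Hubble constant (km/s/Mpc)
MPC_TO_MLY = 3.262         # million light-years per megaparsec

z = 0.046
d_mpc = C_KM_S * z / H0
print(f"Hubble-law distance: {d_mpc:.0f} Mpc ~ {d_mpc * MPC_TO_MLY:.0f} Mly")
# -> ~197 Mpc, i.e. ~640 million light-years, consistent with the quoted 650.

virgo_mpc = 16.5           # assumed distance to the Virgo Cluster
print(f"Shapley's estimate: {14 * virgo_mpc:.0f} Mpc")   # -> 231 Mpc
```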
Physical sciences
Notable galaxy clusters
Astronomy
7286999
https://en.wikipedia.org/wiki/Common%20house%20gecko
Common house gecko
The common house gecko (Hemidactylus frenatus) is a gecko native to South and Southeast Asia as well as Near Oceania. It is also known as the Asian house gecko, Pacific house gecko, wall gecko, house lizard, tiktiki, chipkali or moon lizard. These geckos are nocturnal, hiding during the day and foraging for insects at night. They can be seen climbing the walls of houses and other buildings in search of insects attracted to porch lights, and are immediately recognisable by their characteristic chirping. They grow to a length of between , and live for about 7 years. These small geckos are non-venomous and not harmful to humans. Most medium-sized to large geckos are docile, but may bite if distressed, and their bite can pierce skin. The common house gecko is a tropical species, and thrives in warm, humid areas where it can crawl around on rotting wood in search of the insects it eats, as well as within urban landscapes in warm climates. The animal is very adaptable and may prey on insects and spiders, displacing other gecko species which are less robust or behaviourally aggressive. In parts of Australia and Papua New Guinea, it is often confused with a similar native lizard, the dubious dtella.
Etymology
Like many geckos, this species can lose its tail when alarmed. Its call or chirp rather resembles the sound "gecko, gecko", also interpreted as "tchak tchak tchak" (often sounded six to nine times in sequence). In Asia, notably Indonesia, Thailand, Singapore, and Malaysia in the south east, geckos have local names onomatopoetically derived from the sounds they make: Hemidactylus frenatus is called "chee chak" or "chi chak" (pronounced chee-chuck), said quickly, also commonly spelled as "cicak" in Malay dictionaries. In the Philippines, they are called "butiki" in Tagalog, "tiki" in Visayan, "alutiit" in Ilocano, and in Thailand, "jing-jok". In Myanmar, they are called "အိမ်မြှောင် - ain-mjong" ("အိမ် - ain" means "house" and "မြှောင် - mjong" means "stick to"). In some parts of India and in Pakistan, they are called "chhipkali" (Urdu: چھپکلی, Hindi: छिपकली), from chhipkana, to stick. In Nepal, they are called "vhitti" (Nepali: भित्ती) or "mausuli" (Nepali: माउसुली). In other parts of India, they are called "kirli" (Punjabi: ਕਿੜਲੀ), "bismatiya" or "bistukiya" or "bistuiya" (Bhojpuri: बिसमतिया or बिसटुकिया or बिसटुईया), "jhiti piti" (Oriya: ଝିଟିପିଟି), "zethi" (Assamese: জেঠী), "thikthikiaa" (Maithili: ठिकठिकिया), "paal" (Marathi: पाल), "gawli" or "palli" (Malayalam: ഗവ്ളി (gawli), പല്ലി (palli), Tamil: பல்லி (palli)), Telugu: బల్లి (balli), Kannada: ಹಲ್ಲಿ (halli), "ali" (Sylheti: ꠀꠟꠤ), "garoli" (Gujarati: ગરોળી). In Bangladesh and West Bengal, they are called "tiktiki" (Bengali: টিকটিকি), as the sound is perceived as "tik tik tik". In Sri Lanka, they are called "huna" in singular form (Sinhalese: හුනා). In the Maldives, they are called "hoanu" (Dhivehi: ހޯނު). In China and some countries with Teochew speakers, they are called 檐龙 ("ji-leng", literally "roof dragon"). In Central America, they are sometimes called "limpia casas" (Spanish: "house cleaners") because they reduce the number of insects and other arthropods in homes, and are also called "qui-qui" because of the sound they make.
Habitat and diet
The name common house gecko is by no means a misnomer, as the species displays a clear preference for urban environments. This synanthropic gecko displays a tendency to hunt for insects in close proximity to urban lights.
They have been found in bushland, but the current evidence suggests they have a preference for urban environments, with their distribution mostly defined by areas within or in close proximity to city bounds. The common house gecko appears to prefer lit areas that are proximal to cracks or other places of escape. Geckos without an immediate opportunity to escape potential danger display behavioural modifications to compensate, emerging later in the night and retreating earlier in the morning. Without access to the urban landscape, they appear to prefer habitat composed of comparatively dense forest or eucalypt woodland proximal to closed forest. The selection of primarily urban habitats makes available the preferred foods of the common house gecko. The bulk of the diet of the gecko is made up of invertebrates, primarily hunted around urban structures. Primary invertebrate food sources include cockroaches, termites, some bees and wasps, butterflies, moths, flies, spiders, and several beetle groupings. It also feeds on molluscs and smaller geckos. There is limited evidence that cannibalism can occur in laboratory conditions, but this is yet to be observed in the wild.
Distribution
The common house gecko is prolific through the tropics and subtropics. It is able to exist in ecologically analogous places with other Hemidactylus species. Although it is native throughout Southeast Asia, recent introductions, both deliberate and accidental, have seen it recorded in the Deep South of the United States, large parts of tropical and sub-tropical Australia, and many other countries in South and Central America, the Caribbean (including the Dominican Republic), Africa, South Asia and the Middle East (Bahrain, Jordan, Qatar, Kuwait, Saudi Arabia, Oman and the United Arab Emirates). Most recently, this species has also invaded the Lesser Antilles in the Caribbean, and is now present on Saint Martin (island), Saint Barthélemy, Sint Eustatius, Dominica and Saint Lucia. Their capacity to withstand a wide range of latitudes is also partially facilitated by their ability to enter a state of brumation during colder months. The prospect of increased climate change interacts synergistically with increased urbanisation, greatly increasing the prospective distribution of the common house gecko. Due to concerns over its potential as an invasive species, there are efforts to limit its introduction and presence in locations where it could be a risk to native gecko species.
In Mexico, H. frenatus was first collected in Acapulco, Guerrero, in March 1895 and found to be well established there and in the surrounding regions by the early 1940s. It was likely introduced through shipping and cargo. H. frenatus now occurs throughout the lowlands of Mexico on both the Atlantic and Pacific versants, including the Yucatan Peninsula and Baja California, with records from 21 of the 32 Mexican states. Most records of H. frenatus in Mexico are from buildings such as homes, hotels, and other structures in cities and towns, with only a few reports of the species in natural habitat, and its impact, if any, on native fauna there is unknown.
As an invasive species
There is evidence to suggest that the presence of Hemidactylus frenatus has negatively impacted native gecko populations throughout tropical Asia, Central America and the Pacific.
Some species which have been displaced include Lepidodactylus lugubris, Hemidactylus garnotii, and the genus Nactus on the Mascarene Islands (three species in this genus are now considered to be extinct). As an introduced species, they pose a threat through the potential introduction of new parasites and diseases, but have potential negative impacts which extend beyond this. The primary cause for concern appears to be their exclusionary behaviour and out-competition of other gecko species. Mechanistically, three explanations have been derived for the capacity of H. frenatus to outcompete other gecko species: possessing a smaller body size (they fail to displace native species larger than themselves, such as the robust velvet gecko); male H. frenatus displaying higher levels of aggression than females of other gecko species (particularly parthenogenetic species with asexual females); and sexual females displaying an increased capacity to compete in comparison with asexual females. These differences provide H. frenatus a competitive edge in the limited urban areas they preferentially inhabit, particularly those with high degrees of habitat fragmentation. To compound this, they are also capable of operating at higher densities, which leads to an increase in gecko sightings and biomass in an area, even after reducing native species' density. The common house gecko also displays a higher tolerance of high light levels, which may allow for an increased risk-reward payoff in hunting endeavours. There is also some limited evidence for cannibalism and for hunting of other small gecko species, particularly juveniles. Most of this evidence is from laboratory conditions, with several studies failing to find evidence of cannibalism in the wild for this species.
Some males are more territorial than others. Territorial males display larger heads, with a more pronounced head shape. This increase in head size incurs the cost of a poorer performance in escape sprint time, suggesting that selective pressure prioritises the biting force of the male over its capacity to escape quickly. By contrast, increases in female head size are met with a proportionate increase in hind limb length and no decrease in speed. Though both sexes use escape sprinting as a survival strategy, males are more likely to need to stop and fight by biting, due to the reduced mobility caused by disproportionate head-to-hind-leg size, which in turn is correlated with localised territorial behaviours.
The success of the common house gecko can also be explained through other elements of competition, such as postural displays and movement patterns. An example of this is how the common house gecko can trigger an "avoidance response" in the mourning gecko, causing it to avoid a specific area where food may become available. Though triggering avoidance in other species, they themselves tolerate the presence of other gecko species well, regardless of whether those species are smaller or larger, faster or slower, or more physically aggressive. This allows them greater access to feeding areas and territories, making them a highly successful invasive species.
Physiology
The common house gecko is ectothermic ("cold-blooded") and displays a variety of means of thermoregulating through behaviour. Its physiology has ramifications for its distribution and the nature of its interactions with native species, as well as its reproductive success as an introduced species.
Metabolically, the demands of the common house gecko do not differ significantly from those of other lizard species of similar size, with oxygen consumption appearing congruent with trends observed in other tropical, subtropical and temperate gecko species. Thermal independence exists between 26 and 35 °C, with some capacity for self-regulation of temperature: where the environmental temperature lies within this range, the common house gecko can modify its body temperature through behavioural adaptations. Breathing rates are temperature-dependent above this range but temperature-independent as conditions grow colder. Behavioural mechanisms of thermoregulation are present, such as the selection of sunlight and of the substrates on which the geckos sit. The common house gecko is best described as quinodiurnal: it thermoregulates during the daytime and forages at night. An active form of this thermoregulation is the gecko's presence in lighter environments close to cracks in the substrate. As such, there is a close relationship between activity levels and air temperature. The timing of the common house gecko's circadian rhythm is further influenced by light levels. This rhythm tends to involve the highest population presence around midnight, with activity peaking just after sunset and gradually declining until dawn. Differences in the daily cycle from place to place can generally be explained by environmental factors such as human interaction and structural features. A peak in hunting activity after dark places them in an ideal position to take advantage of invertebrates congregating around artificial lighting in the urban environment. Because of this dependence on the environment, drops in temperature may act as a leading indicator of reduced gecko sightings in the medium term, while acute weather events such as rain or wind result in immediate decreases in gecko sightings. It is unclear what long-term impact these phenomena may have on distribution and on the capacity of the common house gecko to compete with other gecko species. There is some weak evidence, without statistically significant data, of a trend toward higher body temperatures in females, which would carry the evolutionary advantage of speeding egg development. As a species adapted to tropical and subtropical environments, it appears to have few physiological adaptations for preventing water loss, which may limit its capacity to thrive in arid or semi-arid environments.

Reproductive biology

H. frenatus has a gonad structure similar to that of the rest of the gecko family. It is possible to differentiate the sex of larger common house geckos, with individuals above a certain size typically displaying differentiated gonads. Differentiated gonads are most clearly seen as a swelling at the entrance to the cloaca, caused by the copulatory organs in males. Females lay a maximum of two hard-shelled eggs at any single time, each descending from a single oviduct. Up to four eggs can exist within the ovaries in differing stages of development, which shortens the potential turnaround between egg-laying events in gravid females. Females produce a single egg per ovary per cycle, and are therefore considered monoautochronic ovulators. Within the testes, mature sperm are found year-round, and sperm can be stored within the oviduct of the female for as long as 36 weeks.
This significantly increases the chance of colonising new habitats, as smaller founding populations suffice for a chance of success. Longer storage of sperm within the female is, however, associated with poorer hatching and survival outcomes, possibly due to sperm age. Sperm is stored specifically between the uterine and infundibular components of the oviduct. The capacity to store sperm permits a degree of asynchrony between ovulation, copulation and egg-laying, and is useful in island colonisation events, giving isolated females the capacity to reproduce even if they have been separated from a male for some time. In laboratories, one mating event may produce as many as seven viable egg clutches. This reduces the need for parthenogenesis and allows the young of a single mating to include both male and female offspring. The reduced reliance on asexual reproduction increases the fitness of the young through hybrid vigour and greater diversity; sexually reproducing geckos are also reported to be more robust and to have higher survival rates than those which reproduce asexually. There is a positive correlation between size and egg viability, with larger geckos producing eggs that are more likely to survive. Warmer year-round temperatures and a consistent food supply are likewise correlated with reproductive seasonality: geckos with constant food and temperatures are less likely to develop fat deposits on their stomachs and more likely to be continuously reproductive.

Genetics

Two distinct karyotypes of the common house gecko appear to exist, one with 40 chromosomes and one with 46. This could be explained by intraspecific variation in karyotype, or by two distinct species having been misidentified as one. Morphological analysis seems particularly congruent with the suggestion that they are indeed different species. Taxonomic revision may be required as a greater understanding of phylogenetic trees and population structures develops.

Captivity

House geckos can be kept as pets in a vivarium with a clean substrate. They typically require a heat source and a place to hide in order to regulate their body temperature, and a system of humidifiers and plants to provide them with moisture. The species will cling to vertical or even inverted surfaces when at rest. In a terrarium they will mostly rest on the sides or on the top cover rather than on plants, decorations or the substrate, making them rather conspicuous.

Cultural beliefs

In the Philippines, geckos making a ticking sound are believed to indicate the imminent arrival of a visitor or a letter. In Thailand, by contrast, a common house gecko chirping as someone leaves the house is considered a bad omen; in Thai idiom it is called the "greeting gecko". An elaborate system of predicting good and bad omens based on the sounds made by geckos, their movements, and the rare instances when geckos fall from roofs has evolved over centuries in India. In some parts of India the sound made by geckos is considered a bad omen, while in parts of India, Assam, Odisha, West Bengal, Bangladesh and Nepal it is considered an endorsement of the truthfulness of a statement just made, because the sound "tik tik tik" resembles the expression "thik thik thik" (Assamese: ঠিক ঠিক ঠিক), which in many Indian languages (e.g.
Bengali and Assamese) means "correct, correct, correct", i.e., a three-fold confirmation. The cry of a gecko from an east wall as one is about to embark on a journey is considered auspicious, but a cry from any other wall is supposed to be inauspicious. A gecko falling on someone's right shoulder is considered a good omen, but one falling on the left shoulder a bad omen. In Punjab, it is believed that contact with the urine of a gecko causes leprosy. In some places in India, watching a lizard on the eve of Dhanteras is believed to be a good omen or a sign of prosperity. In Sri Lanka, it is considered inauspicious for a gecko to call out as someone is leaving the house. There is also an art of divination, observed throughout the Indian subcontinent, based on where on the body a falling gecko lands, with different body parts carrying different predictions.
Methods of detecting exoplanets
Any planet is an extremely faint light source compared to its parent star. For example, a star like the Sun is about a billion times as bright as the reflected light from any of the planets orbiting it. In addition to the intrinsic difficulty of detecting such a faint light source, the light from the parent star causes a glare that washes it out. For those reasons, very few of the exoplanets reported have been observed directly, and even fewer have been resolved from their host star. Instead, astronomers have generally had to resort to indirect methods to detect extrasolar planets. As of 2016, several different indirect methods have yielded success.

Established detection methods

The following methods have at least once proved successful for discovering a new planet or detecting an already discovered planet:

Radial velocity

A star with a planet will move in its own small orbit in response to the planet's gravity. This leads to variations in the speed with which the star moves toward or away from Earth, i.e. variations in the radial velocity of the star with respect to Earth. The radial velocity can be deduced from the displacement of the parent star's spectral lines due to the Doppler effect. The radial-velocity method measures these variations in order to confirm the presence of the planet using the binary mass function. The speed of the star around the system's center of mass is much smaller than that of the planet, because the radius of its orbit around the center of mass is so small. (For example, the Sun moves by about 13 m/s due to Jupiter, but only about 9 cm/s due to Earth.) However, velocity variations down to 3 m/s or even somewhat less can be detected with modern spectrometers, such as the HARPS (High Accuracy Radial Velocity Planet Searcher) spectrometer at the ESO 3.6-meter telescope at La Silla Observatory, Chile, the HIRES spectrometer at the Keck telescopes, or EXPRES at the Lowell Discovery Telescope. An especially simple and inexpensive method for measuring radial velocity is "externally dispersed interferometry". Until around 2012, the radial-velocity method (also known as Doppler spectroscopy) was by far the most productive technique used by planet hunters. (After 2012, the transit method from the Kepler space telescope overtook it in number.) The radial-velocity signal is distance-independent, but requires high signal-to-noise-ratio spectra to achieve high precision, and so is generally used only for relatively nearby stars, out to about 160 light-years from Earth, to find lower-mass planets. It is also not possible to observe many target stars simultaneously with a single telescope. Planets of Jovian mass can be detected around stars up to a few thousand light-years away. This method easily finds massive planets that are close to their stars. Modern spectrographs can also easily detect Jupiter-mass planets orbiting 10 astronomical units from the parent star, but detection of those planets requires many years of observation. Earth-mass planets are currently detectable only in very small orbits around low-mass stars, e.g. Proxima b. It is easier to detect planets around low-mass stars for two reasons: first, these stars are more strongly affected by the gravitational tug of their planets; second, low-mass main-sequence stars generally rotate relatively slowly. Fast rotation makes spectral-line data less clear, because half of the star quickly rotates away from the observer's viewpoint while the other half approaches.
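As an illustrative check on the figures above, the standard radial-velocity semi-amplitude relation, K = (2πG/P)^(1/3) · m_p sin i / ((M_* + m_p)^(2/3) √(1 − e²)), can be evaluated directly. The following is a minimal sketch in Python; the function name is ours and the constants are approximate:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
M_JUP = 1.898e27       # kg
M_EARTH = 5.972e24     # kg
YEAR = 3.156e7         # s

def rv_semi_amplitude(m_planet, m_star, period, e=0.0, sin_i=1.0):
    """Stellar radial-velocity semi-amplitude K (m/s) for a single planet."""
    return ((2 * math.pi * G / period) ** (1 / 3)
            * m_planet * sin_i
            / ((m_star + m_planet) ** (2 / 3) * math.sqrt(1 - e ** 2)))

print(rv_semi_amplitude(M_JUP, M_SUN, 11.86 * YEAR))   # ~12.5 m/s (Jupiter)
print(rv_semi_amplitude(M_EARTH, M_SUN, 1.0 * YEAR))   # ~0.09 m/s, i.e. ~9 cm/s (Earth)
```

The outputs reproduce the roughly 13 m/s and 9 cm/s reflex velocities quoted above.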
Detecting planets around more massive stars is easier if the star has left the main sequence, because leaving the main sequence slows the star's rotation. Doppler spectroscopy sometimes produces false signals, especially in multi-planet and multi-star systems. Magnetic fields and certain types of stellar activity can also give false signals. When the host star has multiple planets, false signals can also arise from having insufficient data, so that multiple solutions can fit the data, since stars are not generally observed continuously. Some of the false signals can be eliminated by analysing the stability of the planetary system, conducting photometric analysis of the host star, and knowing its rotation period and stellar activity cycle periods. Planets with orbits highly inclined to the line of sight from Earth produce smaller visible wobbles and are thus more difficult to detect. One of the advantages of the radial-velocity method is that the eccentricity of the planet's orbit can be measured directly. One of its main disadvantages is that it can only estimate a planet's minimum mass (m sin i). The posterior distribution of the inclination angle i depends on the true mass distribution of the planets. However, when there are multiple planets in the system that orbit relatively close to each other and have sufficient mass, orbital stability analysis allows one to constrain their maximum masses. The radial-velocity method can be used to confirm findings made by the transit method; when both methods are used in combination, the planet's true mass can be estimated. Although the radial velocity of the star only gives a planet's minimum mass, if the planet's spectral lines can be distinguished from the star's, then the radial velocity of the planet itself can be found, and this gives the inclination of the planet's orbit. This enables measurement of the planet's actual mass. It also rules out false positives and provides data about the composition of the planet. The main issue is that such detection is possible only if the planet orbits a relatively bright star and reflects or emits a lot of light.

Transit photometry

Technique, advantages, and disadvantages

While the radial-velocity method provides information about a planet's mass, the photometric method can determine the planet's radius. If a planet crosses (transits) in front of its parent star's disk, the observed visual brightness of the star drops by a small amount, depending on the relative sizes of the star and the planet. For example, in the case of HD 209458, the star dims by 1.7%. However, most transit signals are considerably smaller; an Earth-size planet transiting a Sun-like star produces a dimming of only about 80 parts per million (0.008 percent). A theoretical transiting-exoplanet light-curve model predicts the following characteristics of an observed planetary system: transit depth (δ), transit duration (T), ingress/egress duration (τ), and period of the exoplanet (P). These observed quantities, however, rest on several assumptions: for convenience in the calculations, the planet and star are assumed to be spherical, the stellar disk uniform, and the orbit circular. The observed physical parameters of the light curve change depending on the path the planet takes across the stellar disk during transit.
The transit depth (δ) of a transit light curve describes the decrease in the normalized flux of the star during a transit, and reflects the radius of the exoplanet relative to the radius of the star: for a given star, a planet with a larger radius produces a deeper transit, and a planet with a smaller radius a shallower one. The transit duration (T) of an exoplanet is the length of time the planet spends transiting the star; it changes with how fast or slow the planet moves in its orbit as it crosses the stellar disk. The ingress/egress duration (τ) describes the length of time the planet takes to fully cover the star (ingress) and fully uncover it (egress). If a planet transits along the full diameter of the star, the ingress/egress duration is shorter, because the planet takes less time to fully cover the star; for a transit chord farther from the center of the stellar disk, the ingress/egress duration lengthens, since the planet spends longer partially covering the star. From these observable parameters, a number of physical parameters (semi-major axis, star mass, star radius, planet radius, eccentricity, and inclination) are determined through calculations. With the addition of radial-velocity measurements of the star, the mass of the planet can also be determined. This method has two major disadvantages. First, planetary transits are observable only when the planet's orbit happens to be almost perfectly aligned with the astronomers' line of sight. The probability of a planetary orbital plane lying directly on the line of sight to a star is the ratio of the diameter of the star to the diameter of the orbit (in small stars, the radius of the planet is also an important factor). About 10% of planets with small orbits have such an alignment, and the fraction decreases for planets with larger orbits. For a planet orbiting a Sun-sized star at 1 AU, the probability of a random alignment producing a transit is 0.47%. The method therefore cannot guarantee that any particular star is not a host to planets. However, by scanning large areas of the sky containing thousands or even hundreds of thousands of stars at once, transit surveys can find more extrasolar planets than the radial-velocity method. Several surveys have taken that approach, such as the ground-based MEarth Project, SuperWASP, KELT, and HATNet, as well as the space-based COROT, Kepler and TESS missions. The transit method also has the advantage of detecting planets around stars located a few thousand light-years away. The most distant planets detected by the Sagittarius Window Eclipsing Extrasolar Planet Search are located near the galactic center; however, reliable follow-up observations of these stars are nearly impossible with current technology. The second disadvantage of this method is a high rate of false detections. A 2012 study found that the rate of false positives for transits observed by the Kepler mission could be as high as 40% in single-planet systems. For this reason, a star with a single transit detection requires additional confirmation, typically from the radial-velocity method or the orbital brightness modulation method.
Radial-velocity confirmation is especially necessary for Jupiter-sized or larger planets, as objects of that size encompass not only planets but also brown dwarfs and even small stars. As the false-positive rate is very low in stars with two or more planet candidates, such detections often can be validated without extensive follow-up observations. Some can also be confirmed through the transit timing variation method. Many points of light in the sky show brightness variations that may appear, in flux measurements, as transiting planets. False positives in the transit photometry method arise in three common forms: blended eclipsing binary systems, grazing eclipsing binary systems, and transits by planet-sized stars. Eclipsing binary systems usually produce deep eclipses that distinguish them from exoplanet transits, since planets are usually smaller than about 2 RJ, but the eclipses are shallower for blended or grazing eclipsing binary systems. Blended eclipsing binary systems consist of a normal eclipsing binary blended with a third (usually brighter) star along the same line of sight, usually at a different distance. The constant light of the third star dilutes the measured eclipse depth, so the light curve may resemble that of a transiting exoplanet. In these cases, the target most often contains a large main-sequence primary with a small main-sequence secondary, or a giant star with a main-sequence secondary. Grazing eclipsing binary systems are systems in which one object just barely grazes the limb of the other. In these cases, the maximum transit depth of the light curve is not proportional to the ratio of the squares of the radii of the two stars, but depends solely on the small fraction of the primary that is blocked by the secondary. The small measured dip in flux can mimic that of an exoplanet transit. Some of the false-positive cases in this category can easily be identified if the eclipsing binary system has a circular orbit and the two companions have different masses. Due to the cyclic nature of the orbit, there would be two eclipsing events, one of the primary occulting the secondary and vice versa. If the two stars have significantly different masses, and thus different radii and luminosities, these two eclipses have different depths. This alternation of shallow and deep eclipse events is easily detected and allows the system to be recognized as a grazing eclipsing binary. However, if the two stellar companions are of approximately the same mass, the two eclipses are indistinguishable, making it impossible to demonstrate from transit photometry alone that a grazing eclipsing binary system is being observed. Finally, there are two types of stars that are approximately the same size as gas-giant planets: white dwarfs and brown dwarfs. This is because gas-giant planets, white dwarfs, and brown dwarfs are all supported by degenerate electron pressure. The light curve does not discriminate between masses, as it depends only on the size of the transiting object. When possible, radial-velocity measurements are used to verify that the transiting or eclipsing body is of planetary mass, meaning less than 13 MJ. Transit timing variations can also determine the planet's mass, and Doppler tomography with a known radial-velocity orbit can yield a minimum planet mass and the projected spin-orbit alignment.
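The headline numbers in this section, roughly 80 ppm of dimming for an Earth analogue and a 0.47% alignment probability at 1 AU, follow directly from δ = (Rp/R*)² and the geometric transit probability ≈ R*/a. A minimal sketch, with approximate constants:

```python
R_SUN = 6.957e8        # m
R_EARTH = 6.371e6      # m
AU = 1.496e11          # m

def transit_depth(r_planet, r_star):
    """Fractional dimming delta = (Rp/Rs)^2, assuming a uniform stellar disk."""
    return (r_planet / r_star) ** 2

def transit_probability(r_star, a):
    """Geometric probability that a randomly oriented circular orbit transits."""
    return r_star / a

print(transit_depth(R_EARTH, R_SUN))        # ~8.4e-5, i.e. ~80 ppm
print(transit_probability(R_SUN, 1 * AU))   # ~0.0047, i.e. ~0.47%
```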
Red giant branch stars pose another problem for detecting planets around them: while planets around these stars are much more likely to transit owing to the larger star size, the transit signals are hard to separate from the star's own brightness variations, as red giants undergo frequent pulsations in brightness with periods of a few hours to days. This is especially notable with subgiants. In addition, these stars are much more luminous, and transiting planets block a much smaller percentage of their light. In contrast, planets can completely occult a very small star such as a neutron star or white dwarf, an event which would be easily detectable from Earth. However, due to the small star sizes, the chance of a planet aligning with such a stellar remnant is extremely small. The main advantage of the transit method is that the size of the planet can be determined from the light curve. When combined with the radial-velocity method (which determines the planet's mass), one can determine the density of the planet, and hence learn something about the planet's physical structure. The planets that have been studied by both methods are by far the best-characterized of all known exoplanets. The transit method also makes it possible to study the atmosphere of the transiting planet. When the planet transits the star, light from the star passes through the upper atmosphere of the planet; by studying the high-resolution stellar spectrum carefully, one can detect elements present in the planet's atmosphere. A planetary atmosphere, and indeed the planet itself, could also be detected by measuring the polarization of the starlight as it passes through or is reflected off the planet's atmosphere. Additionally, the secondary eclipse (when the planet is blocked by its star) allows direct measurement of the planet's radiation and helps to constrain the planet's orbital eccentricity without requiring the presence of other planets. If the star's photometric intensity during the secondary eclipse is subtracted from its intensity before or after, only the signal caused by the planet remains. It is then possible to measure the planet's temperature and even to detect possible signs of cloud formations on it. In March 2005, two groups of scientists carried out measurements using this technique with the Spitzer Space Telescope. The two teams, from the Harvard-Smithsonian Center for Astrophysics, led by David Charbonneau, and the Goddard Space Flight Center, led by L. D. Deming, studied the planets TrES-1 and HD 209458b respectively. The measurements revealed the planets' temperatures: 1,060 K (790 °C) for TrES-1 and about 1,130 K (860 °C) for HD 209458b. In addition, the hot Neptune Gliese 436 b is known to enter secondary eclipse. However, some transiting planets orbit such that they do not enter secondary eclipse relative to Earth; HD 17156 b is over 90% likely to be one of the latter.

History

The first exoplanet for which transits were observed was HD 209458 b, which had been discovered using the radial-velocity technique. The transits were observed in 1999 by two teams, led by David Charbonneau and Gregory W. Henry. The first exoplanet to be discovered with the transit method was OGLE-TR-56b, found in 2002 by the OGLE project. A French Space Agency mission, CoRoT, began in 2006 to search for planetary transits from orbit, where the absence of atmospheric scintillation allows improved accuracy.
The mission was designed to be able to detect planets "a few times to several times larger than Earth" and performed "better than expected", with two exoplanet discoveries (both of the "hot Jupiter" type) as of early 2008. By June 2013, CoRoT's exoplanet count was 32, with several still to be confirmed. The satellite unexpectedly stopped transmitting data in November 2012 (after its mission had twice been extended) and was retired in June 2013. In March 2009, the NASA mission Kepler was launched to scan a large number of stars in the constellation Cygnus with a measurement precision expected to detect and characterize Earth-sized planets. The NASA Kepler Mission uses the transit method to scan a hundred thousand stars for planets. It was hoped that by the end of its 3.5-year mission, the satellite would have collected enough data to reveal planets even smaller than Earth. By scanning a hundred thousand stars simultaneously, it was not only able to detect Earth-sized planets, it was also able to collect statistics on the numbers of such planets around Sun-like stars. On 2 February 2011, the Kepler team released a list of 1,235 extrasolar planet candidates, including 54 that may be in the habitable zone. On 5 December 2011, the Kepler team announced that they had discovered 2,326 planetary candidates, of which 207 are similar in size to Earth, 680 are super-Earth-size, 1,181 are Neptune-size, 203 are Jupiter-size and 55 are larger than Jupiter. Compared to the February 2011 figures, the numbers of Earth-size and super-Earth-size planets increased by 200% and 140% respectively. Moreover, 48 planet candidates were found in the habitable zones of surveyed stars, a decrease from the February figure owing to the more stringent criteria used in the December data. By June 2013, the number of planet candidates had increased to 3,278, and some confirmed planets were smaller than Earth, some even Mars-sized (such as Kepler-62c) and one even smaller than Mercury (Kepler-37b). The Transiting Exoplanet Survey Satellite (TESS) launched in April 2018.

Reflection and emission modulations

Short-period planets in close orbits around their stars undergo reflected-light variations because, like the Moon, they go through phases from full to new and back again. In addition, as these planets receive a lot of starlight, which heats them, their thermal emission is potentially detectable. Since telescopes cannot resolve the planet from the star, they see only the combined light, and the brightness of the host star appears to change over each orbit in a periodic manner. Although the effect is small (the photometric precision required is about the same as that needed to detect an Earth-sized planet in transit across a solar-type star), Jupiter-sized planets with orbital periods of a few days are detectable by space telescopes such as the Kepler Space Observatory. As with the transit method, it is easier to detect large planets orbiting close to their parent star than other planets, as these planets catch more light from their parent star. When a planet has a high albedo and is situated around a relatively luminous star, its light variations are easier to detect in visible light, while darker planets or planets around low-temperature stars are more easily detectable with infrared light with this method.
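To see why the required precision is comparable to that for an Earth transit, note that the peak reflected-light contrast is roughly the geometric albedo times (Rp/a)². A minimal sketch; the albedo values and the 0.05 AU orbit are illustrative assumptions, not measurements:

```python
R_JUP = 7.149e7        # m
AU = 1.496e11          # m

def reflected_light_amplitude(a_g, r_planet, a_orbit):
    """Approximate peak planet/star flux ratio in reflected light: A_g * (Rp/a)^2."""
    return a_g * (r_planet / a_orbit) ** 2

# Hot Jupiter at 0.05 AU, for two assumed geometric albedos:
print(reflected_light_amplitude(0.1, R_JUP, 0.05 * AU))  # ~9e-6
print(reflected_light_amplitude(0.5, R_JUP, 0.05 * AU))  # ~4.6e-5
```

A few to a few tens of parts per million, depending on the albedo, which is indeed the same order as the ~80 ppm transit depth of an Earth analogue noted earlier.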
In the long run, this method may find the most planets discovered by the Kepler mission, because the reflected-light variation with orbital phase is largely independent of orbital inclination and does not require the planet to pass in front of the disk of the star. It still cannot detect planets with circular face-on orbits as seen from Earth, as the amount of reflected light does not change during the orbit. The phase function of a giant planet is also a function of its thermal properties and atmosphere, if any. Therefore, the phase curve may constrain other planet properties, such as the size distribution of atmospheric particles. When a planet is found transiting and its size is known, the phase variation curve helps calculate or constrain the planet's albedo. This is more difficult with very hot planets, as the glow of the planet can interfere with attempts to calculate the albedo. In theory, albedo can also be found for non-transiting planets by observing the light variations at multiple wavelengths. This allows scientists to find the size of the planet even if the planet is not transiting the star. The first-ever direct detection of the spectrum of visible light reflected from an exoplanet was made in 2015 by an international team of astronomers. The astronomers studied light from 51 Pegasi b, the first exoplanet discovered orbiting a main-sequence (Sun-like) star, using the High Accuracy Radial Velocity Planet Searcher (HARPS) instrument at the European Southern Observatory's La Silla Observatory in Chile. Both CoRoT and Kepler have measured the reflected light from planets; however, those planets were already known, since they transit their host star. The first planets discovered by this method are Kepler-70b and Kepler-70c, found by Kepler.

Relativistic beaming

A separate novel method to detect exoplanets from light variations uses relativistic beaming of the observed flux from the star due to its motion, also known as Doppler beaming or Doppler boosting. The method was first proposed by Abraham Loeb and Scott Gaudi in 2003. As the planet tugs the star with its gravity, the density of photons, and therefore the apparent brightness of the star, changes from the observer's viewpoint. Like the radial-velocity method, it can be used to determine the orbital eccentricity and the minimum mass of the planet. With this method, it is easier to detect massive planets close to their stars, as these factors increase the star's motion. Unlike the radial-velocity method, it does not require an accurate spectrum of the star, and can therefore be used more easily to find planets around fast-rotating stars and more distant stars. One of the biggest disadvantages of this method is that the light-variation effect is very small. A Jovian-mass planet orbiting 0.025 AU from a Sun-like star is barely detectable even when the orbit is edge-on. This is not an ideal method for discovering new planets, as the amount of starlight emitted and reflected by the planet is usually much larger than the light variations due to relativistic beaming. The method is still useful, however, as it allows measurement of the planet's mass without the need for follow-up radial-velocity observations. The first discovery of a planet using this method (Kepler-76b) was announced in 2013.
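The smallness of the beaming effect is easy to quantify: at optical wavelengths its fractional amplitude is roughly four times the star's reflex velocity divided by the speed of light. A minimal sketch; the ~180 m/s reflex velocity is our estimate from the semi-amplitude relation given in the radial-velocity section for a Jupiter-mass planet at 0.025 AU:

```python
C = 2.998e8   # speed of light, m/s

def beaming_amplitude(k_reflex):
    """Approximate fractional photometric amplitude of Doppler beaming
    for a Sun-like star observed at optical wavelengths: ~4 * v_r / c."""
    return 4.0 * k_reflex / C

print(beaming_amplitude(180.0))   # ~2.4e-6
```

Roughly two parts per million, which is why the method is rarely competitive for discovery.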
Ellipsoidal variations

Massive planets can cause slight tidal distortions to their host stars. When a star is slightly ellipsoidal, its apparent brightness varies depending on whether the oblate part of the star faces the observer. As with the relativistic beaming method, this helps to determine the minimum mass of the planet, and the technique's sensitivity depends on the planet's orbital inclination. The effect on the star's apparent brightness can be much larger than with relativistic beaming, but the brightness cycle repeats twice per orbit. In addition, the planet distorts the shape of the star more if the ratio of semi-major axis to stellar radius is low and the density of the star is low. This makes the method suitable for finding planets around stars that have left the main sequence.

Pulsar timing

A pulsar is a neutron star: the small, ultradense remnant of a star that has exploded as a supernova. Pulsars emit radio waves extremely regularly as they rotate. Because the intrinsic rotation of a pulsar is so regular, slight anomalies in the timing of its observed radio pulses can be used to track the pulsar's motion. Like an ordinary star, a pulsar will move in its own small orbit if it has a planet. Calculations based on pulse-timing observations can then reveal the parameters of that orbit. This method was not originally designed for the detection of planets, but is so sensitive that it is capable of detecting planets far smaller than any other method can, down to less than a tenth the mass of Earth. It is also capable of detecting mutual gravitational perturbations between the various members of a planetary system, thereby revealing further information about those planets and their orbital parameters. In addition, it can easily detect planets which are relatively far away from the pulsar. There are two main drawbacks to the pulsar-timing method: pulsars are relatively rare, and special circumstances are required for a planet to form around a pulsar. Therefore, it is unlikely that a large number of planets will be found this way. Additionally, life would likely not survive on planets orbiting pulsars, due to the high intensity of ambient radiation. In 1992, Aleksander Wolszczan and Dale Frail used this method to discover planets around the pulsar PSR 1257+12. Their discovery was confirmed by 1994, making it the first confirmation of planets outside the Solar System.

Variable star timing

Like pulsars, some other types of pulsating variable stars are regular enough that radial velocity can be determined purely photometrically from the Doppler shift of the pulsation frequency, without needing spectroscopy. This method is not as sensitive as the pulsar timing variation method, because the periodic activity is longer in period and less regular. The ease of detecting planets around a variable star depends on the pulsation period of the star, the regularity of the pulsations, the mass of the planet, and its distance from the host star. The first success with this method came in 2007, when V391 Pegasi b was discovered around a pulsating subdwarf star.
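For both timing methods, the measurable signal is essentially the light-travel time across the star's reflex orbit, roughly (m_p / M_*) · a / c. A minimal sketch with approximate, illustrative values for a PSR 1257+12-like system:

```python
AU = 1.496e11          # m
C = 2.998e8            # m/s
M_SUN = 1.989e30       # kg
M_EARTH = 5.972e24     # kg

def timing_amplitude(m_planet, m_star, a_au):
    """Approximate timing amplitude (seconds) from the light-travel time
    across the host's reflex orbit: (m_p / M_*) * a / c."""
    return (m_planet / m_star) * a_au * AU / C

# ~4 Earth-mass planet at 0.36 AU around a 1.4 M_sun pulsar:
print(timing_amplitude(4 * M_EARTH, 1.4 * M_SUN, 0.36) * 1e3)   # ~1.5 ms
```

Millisecond pulsars are timed far more precisely than this, which is why the method can reach sub-Earth masses.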
Transit timing

The transit timing variation (TTV) method considers whether transits occur with strict periodicity or show variation. When multiple transiting planets are detected, they can often be confirmed with this method. It is useful in planetary systems far from the Sun, where radial-velocity methods cannot detect the planets because of the low signal-to-noise ratio. If a planet has been detected by the transit method, then variations in the timing of the transit provide an extremely sensitive means of detecting additional non-transiting planets in the system with masses comparable to Earth's. It is easier to detect transit timing variations if planets have relatively close orbits, and when at least one of the planets is more massive, causing the orbital period of a less massive planet to be more strongly perturbed. The main drawback of the transit timing method is that usually not much can be learnt about the planet itself. Transit timing variations can help to determine the maximum mass of a planet: in most cases, the method can confirm that an object has a planetary mass, but it does not put narrow constraints on that mass. There are exceptions, though, as planets in the Kepler-36 and Kepler-88 systems orbit closely enough for their masses to be accurately determined. The first significant detection of a non-transiting planet using TTV was carried out with NASA's Kepler space telescope. The transiting planet Kepler-19b shows TTV with an amplitude of five minutes and a period of about 300 days, indicating the presence of a second planet, Kepler-19c, whose period is a near-rational multiple of the period of the transiting planet. For circumbinary planets, variations in transit timing are mainly caused by the orbital motion of the stars rather than by gravitational perturbations from other planets. These variations make it harder to detect such planets through automated methods, but they make the planets easy to confirm once they are detected.

Transit duration variation

"Duration variation" refers to changes in how long a transit takes. Duration variations may be caused by an exomoon, by apsidal precession of an eccentric planet's orbit due to another planet in the same system, or by general relativity. When a circumbinary planet is found through the transit method, it can be easily confirmed with the transit duration variation method. In close binary systems, the stars significantly alter the motion of the companion, meaning that any transiting planet shows significant variation in transit duration. The first such confirmation came from Kepler-16b.

Eclipsing binary minima timing

When a binary star system is aligned such that, from the Earth's point of view, the stars pass in front of each other in their orbits, the system is called an "eclipsing binary" star system. The time of minimum light, when the star with the brighter surface is at least partially obscured by the disc of the other star, is called the primary eclipse; approximately half an orbit later, the secondary eclipse occurs, when the brighter-surfaced star obscures some portion of the other star. These times of minimum light, or central eclipses, constitute a time stamp on the system, much like the pulses from a pulsar (except that rather than a flash, they are a dip in brightness). If there is a planet in circumbinary orbit around the binary stars, the stars will be offset around the binary-planet center of mass. As the stars in the binary are displaced back and forth by the planet, the times of the eclipse minima will vary. The periodicity of this offset may be the most reliable way to detect extrasolar planets around close binary systems. With this method, planets are more easily detectable if they are more massive, orbit relatively closely around the system, and orbit stars of low mass. The eclipse timing method allows the detection of planets farther away from the host star than the transit method does.
However, signals around cataclysmic variable stars hinting at planets tend to correspond to unstable orbits. In 2011, Kepler-16b became the first planet to be definitively characterized via eclipsing binary timing variations.

Gravitational microlensing

Gravitational microlensing occurs when the gravitational field of a star acts like a lens, magnifying the light of a distant background star. This effect occurs only when the two stars are almost exactly aligned. Lensing events are brief, lasting for weeks or days, as the two stars and Earth are all moving relative to each other. More than a thousand such events have been observed over the past ten years. If the foreground lensing star has a planet, then that planet's own gravitational field can make a detectable contribution to the lensing effect. Since that requires a highly improbable alignment, a very large number of distant stars must be continuously monitored in order to detect planetary microlensing contributions at a reasonable rate. This method is most fruitful for planets between Earth and the center of the galaxy, as the galactic center provides a large number of background stars. In 1991, astronomers Shude Mao and Bohdan Paczyński proposed using gravitational microlensing to look for binary companions to stars, and their proposal was refined by Andy Gould and Abraham Loeb in 1992 as a method to detect exoplanets. Successes with the method date back to 2002, when a group of Polish astronomers (Andrzej Udalski, Marcin Kubiak and Michał Szymański from Warsaw, and Bohdan Paczyński) developed a workable technique during the OGLE (Optical Gravitational Lensing Experiment) project. During one month, they found several possible planets, though limitations in the observations prevented clear confirmation. Since then, several confirmed extrasolar planets have been detected using microlensing. This was the first method capable of detecting planets of Earth-like mass around ordinary main-sequence stars. Unlike most other methods, which have a detection bias towards planets with small (or, for resolved imaging, large) orbits, the microlensing method is most sensitive to planets around 1–10 astronomical units away from Sun-like stars. A notable disadvantage of the method is that the lensing cannot be repeated, because the chance alignment never occurs again. Also, the detected planets tend to be several kiloparsecs away, so follow-up observations with other methods are usually impossible. In addition, the only physical characteristic that can be determined by microlensing is the mass of the planet, within loose constraints. Orbital properties also tend to be unclear, as the only orbital characteristic that can be directly determined is the current semi-major axis from the parent star, which can be misleading if the planet follows an eccentric orbit. When the planet is far away from its star, it spends only a tiny portion of its orbit in a state where it is detectable with this method, so the orbital period of the planet cannot be easily determined. It is also easier to detect planets around low-mass stars, as the gravitational microlensing effect increases with the planet-to-star mass ratio.
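The angular scale of a microlensing event is set by the Einstein radius, θ_E = √(4GM/c² · (D_s − D_l)/(D_l · D_s)), where D_l and D_s are the lens and source distances. A minimal sketch for a typical Galactic configuration; the lens mass and distances are illustrative assumptions:

```python
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
C = 2.998e8            # m/s
M_SUN = 1.989e30       # kg
KPC = 3.086e19         # m
RAD_TO_MAS = 2.06265e8 # milliarcseconds per radian

def einstein_radius(m_lens, d_lens, d_source):
    """Angular Einstein radius (radians) of a point-mass lens."""
    return math.sqrt(4 * G * m_lens / C**2
                     * (d_source - d_lens) / (d_lens * d_source))

# 0.3 M_sun lens halfway to a Galactic-bulge source at 8 kpc:
print(einstein_radius(0.3 * M_SUN, 4 * KPC, 8 * KPC) * RAD_TO_MAS)  # ~0.55 mas
```

About half a milliarcsecond, far too small to resolve, which is why microlensing is observed as a transient brightening rather than as displaced images.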
The main advantages of the gravitational microlensing method are that it can detect low-mass planets (in principle down to Mars mass with future space projects such as the Nancy Grace Roman Space Telescope); it can detect planets in wide orbits comparable to those of Saturn and Uranus, which have orbital periods too long for the radial-velocity or transit methods; and it can detect planets around very distant stars. When enough background stars can be observed with enough accuracy, the method should eventually reveal how common Earth-like planets are in the galaxy. Observations are usually performed using networks of robotic telescopes. In addition to the European Research Council-funded OGLE, the Microlensing Observations in Astrophysics (MOA) group is working to perfect this approach. The PLANET (Probing Lensing Anomalies NETwork)/RoboNet project is even more ambitious: it allows nearly continuous round-the-clock coverage by a world-spanning telescope network, providing the opportunity to pick up microlensing contributions from planets with masses as low as Earth's. This strategy was successful in detecting the first low-mass planet on a wide orbit, designated OGLE-2005-BLG-390Lb. The NASA Nancy Grace Roman Space Telescope, scheduled for launch in 2027, includes a microlensing planet survey as one of its three core projects.

Direct imaging

Planets are extremely faint light sources compared to stars, and what little light comes from them tends to be lost in the glare from their parent star. So, in general, it is very difficult to detect and resolve them directly from their host star. Planets orbiting far enough from stars to be resolved reflect very little starlight, so such planets are detected through their thermal emission instead. It is easier to obtain images when the planetary system is relatively near to the Sun and when the planet is especially large (considerably larger than Jupiter), widely separated from its parent star, and hot, so that it emits intense infrared radiation; images have then been made in the infrared, where the planet is brighter than it is at visible wavelengths. Coronagraphs are used to block light from the star while leaving the planet visible. Direct imaging of an Earth-like exoplanet requires extreme optothermal stability. During the accretion phase of planetary formation, the star-planet contrast may be even better in H-alpha than in the infrared; an H-alpha survey is currently underway. Direct imaging can give only loose constraints on the planet's mass, which is derived from the age of the star and the temperature of the planet. Mass estimates can vary considerably, as planets can form several million years after the star has formed; the cooler the planet is, the less massive it needs to be. In some cases it is possible to place reasonable constraints on the radius of a planet based on the planet's temperature, its apparent brightness, and its distance from Earth. Because the planet's light is spatially separated from the star's, its spectrum does not have to be disentangled from the stellar spectrum, which makes it easier to determine the planet's chemical composition. Sometimes observations at multiple wavelengths are needed to rule out the planet being a brown dwarf. Direct imaging can be used to accurately measure the planet's orbit around the star.
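Why imaging favours the infrared can be seen from a simple blackbody estimate of the planet/star flux ratio. The following is a rough sketch; the temperatures and radii are illustrative assumptions for a young giant planet around a Sun-like star:

```python
import math

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck(wavelength, temp):
    """Blackbody spectral radiance B_lambda (arbitrary normalisation)."""
    x = H * C / (wavelength * KB * temp)
    return 1.0 / (wavelength**5 * (math.exp(x) - 1.0))

def thermal_contrast(wavelength, t_planet, t_star, r_planet, r_star):
    """Planet/star flux ratio, treating both bodies as blackbodies."""
    return (planck(wavelength, t_planet) / planck(wavelength, t_star)
            * (r_planet / r_star) ** 2)

R_JUP, R_SUN = 7.149e7, 6.957e8   # m
print(thermal_contrast(2.2e-6, 1000, 5800, R_JUP, R_SUN))  # ~3e-5 (K band)
print(thermal_contrast(5.5e-7, 1000, 5800, R_JUP, R_SUN))  # ~4e-12 (visible)
```

At 2.2 μm the contrast is a few parts in 100,000; at visible wavelengths the thermal contrast collapses by seven orders of magnitude, leaving only much fainter reflected light.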
Unlike the majority of other methods, direct imaging works better with planets on face-on orbits rather than edge-on orbits, as a planet on a face-on orbit is observable during its entire orbit, while planets on edge-on orbits are most easily observable during their period of largest apparent separation from the parent star. The planets detected through direct imaging currently fall into two categories. The first consists of planets found around stars more massive than the Sun that are young enough to have protoplanetary disks. The second category consists of possible sub-brown dwarfs found around very dim stars, or brown dwarfs which are at least 100 AU away from their parent stars. Planetary-mass objects not gravitationally bound to a star are found through direct imaging as well.

Early discoveries

In 2004, a group of astronomers used the European Southern Observatory's Very Large Telescope array in Chile to produce an image of 2M1207b, a companion to the brown dwarf 2M1207. In the following year, the planetary status of the companion was confirmed. The planet is estimated to be several times more massive than Jupiter and to have an orbital radius greater than 40 AU. An object first imaged in April 2008 at a separation of 330 AU from the star 1RXS J160929.1−210524 was announced on 8 September 2008 and published on 6 November 2008, but it was not until 2010 that it was confirmed to be a companion planet to the star and not just a chance alignment. It is not yet confirmed whether the mass of the companion is above or below the deuterium-burning limit. The first directly imaged multiplanet system, announced on 13 November 2008, had first been seen in images taken in October 2007 using telescopes at both the Keck Observatory and the Gemini Observatory. Three planets were directly observed orbiting HR 8799, with masses approximately ten, ten, and seven times that of Jupiter. On the same day, 13 November 2008, it was announced that the Hubble Space Telescope had directly observed an exoplanet orbiting Fomalhaut; only an upper limit could be placed on its mass. Both systems are surrounded by disks not unlike the Kuiper belt. On 21 November 2008, three days after the acceptance of a letter to the editor (published online on 11 December 2008), it was announced that analysis of images dating back to 2003 revealed a planet orbiting Beta Pictoris. In 2012, it was announced that a "super-Jupiter" planet orbiting Kappa Andromedae had been directly imaged using the Subaru Telescope in Hawaii. It orbits its parent star at a distance of about 55 AU, nearly twice the distance of Neptune from the Sun. An additional system, GJ 758, was imaged in November 2009 by a team using the HiCIAO instrument on the Subaru Telescope, but the companion proved to be a brown dwarf. Other possible exoplanets to have been directly imaged include GQ Lupi b, AB Pictoris b, and SCR 1845 b. As of March 2006, none of these had been confirmed as planets; instead, they might themselves be small brown dwarfs.

Imaging instruments

Several planet-imaging-capable instruments are installed on large ground-based telescopes, such as the Gemini Planet Imager, VLT-SPHERE, the Subaru Coronagraphic Extreme Adaptive Optics (SCExAO) instrument, and Palomar's Project 1640. In space, there is currently no dedicated exoplanet imaging instrument. Although the James Webb Space Telescope has some exoplanet imaging capabilities, it was not specifically designed and optimised for that purpose.
The Nancy Grace Roman Space Telescope will be the first space observatory to include a dedicated exoplanet imaging instrument. This instrument was designed by JPL as a demonstrator for a future large space observatory that will have the imaging of Earth-like exoplanets as one of its primary science goals. Concepts such as LUVOIR and HabEx were proposed in preparation for the 2020 Astronomy and Astrophysics Decadal Survey. In 2010, a team from NASA's Jet Propulsion Laboratory demonstrated that a vortex coronagraph could enable small telescopes to directly image planets. They did this by imaging the previously imaged HR 8799 planets using just a 1.5-meter-wide portion of the Hale Telescope. Another promising approach is nulling interferometry. It has also been proposed that space telescopes focusing light with zone plates instead of mirrors would provide higher-contrast imaging and be cheaper to launch into space, since the lightweight foil zone plate can be folded up. Another possibility would be to use a large occulter in space, designed to block the light of nearby stars in order to observe their orbiting planets, such as the New Worlds Mission.

Data reduction techniques

Post-processing of observational data to enhance the signal of off-axis bodies (i.e. exoplanets) can be accomplished in a variety of ways. All methods rely on some diversity in the data between the central star and any exoplanet companions; this diversity can originate from differences in spectrum, angular position, orbital motion, polarisation, or the coherence of the light. The most popular technique is Angular Differential Imaging (ADI), in which exposures are acquired at different parallactic angles while the sky is left to rotate around the observed central star. The exposures are averaged, each exposure is subtracted by that average, and the residuals are then derotated so that the faint planetary signal stacks up in one place. Spectral Differential Imaging (SDI) performs an analogous procedure, but for radial changes in brightness (as a function of spectrum or wavelength) rather than angular changes. Combinations of the two are possible (ASDI, SADI, or Combined Differential Imaging, "CODI").
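The core of the ADI procedure just described can be sketched in a few lines. This is a toy sketch only: it assumes a stack of pre-registered frames and known parallactic angles, uses a median combine as a robust variant of the averaging described above, and the derotation sign convention depends on the instrument:

```python
import numpy as np
from scipy.ndimage import rotate

def adi_reduce(frames, parallactic_angles):
    """Basic Angular Differential Imaging.

    frames: (n, h, w) array of registered exposures taken while the field
    rotates; parallactic_angles: on-sky rotation (degrees) of each frame.
    """
    psf_model = np.median(frames, axis=0)           # quasi-static star + speckles
    residuals = frames - psf_model                  # planet signal moves frame to frame
    derotated = [rotate(r, -ang, reshape=False)     # re-align the sky (sign is instrument-dependent)
                 for r, ang in zip(residuals, parallactic_angles)]
    return np.mean(derotated, axis=0)               # planet adds up; residual noise averages down
```

The key point is that the stellar speckle pattern stays fixed on the detector while the planet rotates with the sky, so subtracting the combined stack removes the star but not the companion.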
Polarimetry

Light given off by a star is unpolarized; that is, the direction of oscillation of the light wave is random. However, when the light is reflected off the atmosphere of a planet, the light waves interact with the molecules in the atmosphere and become polarized. The polarization signal in the combined light of the planet and star is tiny (about one part in a million), but it can in principle be measured with very high sensitivity, as polarimetry is not limited by the stability of the Earth's atmosphere. Another main advantage is that polarimetry allows determination of the composition of the planet's atmosphere. The main disadvantage is that the method cannot detect planets without atmospheres. Larger planets and planets with higher albedo are easier to detect through polarimetry, as they reflect more light. Astronomical devices used for polarimetry, called polarimeters, are capable of detecting polarized light and rejecting unpolarized beams. Groups such as ZIMPOL/CHEOPS and PlanetPol are currently using polarimeters to search for extrasolar planets. The first successful detection of an extrasolar planet using this method came in 2008, when HD 189733 b, a planet discovered three years earlier, was detected using polarimetry. However, no new planets have yet been discovered using this method.

Astrometry

This method consists of precisely measuring a star's position in the sky and observing how that position changes over time. Originally this was done visually, with hand-written records. By the end of the 19th century, the method used photographic plates, greatly improving the accuracy of the measurements as well as creating a data archive. If a star has a planet, then the gravitational influence of the planet will cause the star itself to move in a tiny circular or elliptical orbit. Effectively, star and planet each orbit around their mutual centre of mass (barycenter), as explained by solutions to the two-body problem. Since the star is much more massive, its orbit will be much smaller. Frequently, the mutual centre of mass will lie within the radius of the larger body. Consequently, it is easier to find planets around low-mass stars, especially brown dwarfs. Astrometry is the oldest search method for extrasolar planets and was originally popular because of its success in characterizing astrometric binary star systems. It dates back at least to statements made by William Herschel in the late 18th century: he claimed that an unseen companion was affecting the position of the star he catalogued as 70 Ophiuchi. The first known formal astrometric calculation for an extrasolar planet was made by William Stephen Jacob in 1855 for this star. Similar calculations were repeated by others for another half-century, until finally refuted in the early 20th century. For two centuries, claims circulated of the discovery of unseen companions in orbit around nearby star systems, all reportedly found using this method, culminating in the prominent 1996 announcement by George Gatewood of multiple planets orbiting the nearby star Lalande 21185. None of these claims survived scrutiny by other astronomers, and the technique fell into disrepute. Unfortunately, changes in stellar position are so small, and atmospheric and systematic distortions so large, that even the best ground-based telescopes cannot produce precise enough measurements. All claims of a planetary companion of less than 0.1 solar mass made before 1996 using this method are likely spurious. In 2002, the Hubble Space Telescope did succeed in using astrometry to characterize a previously discovered planet around the star Gliese 876. The space-based observatory Gaia, launched in 2013, is expected to find thousands of planets via astrometry, but prior to the launch of Gaia, no planet detected by astrometry had been confirmed. SIM PlanetQuest was a US project (cancelled in 2010) that would have had exoplanet-finding capabilities similar to Gaia's. One potential advantage of the astrometric method is that it is most sensitive to planets with large orbits, which makes it complementary to other methods that are most sensitive to planets with small orbits. However, very long observation times are required (years, possibly decades), as planets far enough from their star to be detectable via astrometry also take a long time to complete an orbit. Planets orbiting one of the stars in a binary system are more easily detectable, as they cause perturbations in the orbits of the stars themselves. However, with this method, follow-up observations are needed to determine which star the planet orbits. In 2009, the discovery of VB 10b by astrometry was announced.
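The scale of the astrometric signal is simply the mass ratio times the angular size of the planet's orbit, α ≈ (m_p / M_*) · (a / d), which comes out in arcseconds when a is in AU and d in parsecs. A minimal sketch for a Jupiter analogue:

```python
M_SUN = 1.989e30   # kg
M_JUP = 1.898e27   # kg

def astrometric_signature_mas(m_planet, m_star, a_au, dist_pc):
    """Semi-amplitude of the stellar wobble on the sky, in milliarcseconds:
    (m_p / M_*) * (a / d), with a in AU and d in parsecs."""
    return (m_planet / m_star) * (a_au / dist_pc) * 1e3

# Jupiter around a Sun-like star seen from 10 pc:
print(astrometric_signature_mas(M_JUP, M_SUN, 5.2, 10.0))   # ~0.5 mas
```

About half a milliarcsecond, which illustrates why ground-based astrometry struggled for so long and why space missions such as Gaia are needed.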
The claimed planet, VB 10b, orbiting the low-mass red dwarf star VB 10, was reported to have a mass seven times that of Jupiter. If confirmed, it would have been the first exoplanet discovered by astrometry of the many claimed through the years. However, recent radial-velocity-independent studies rule out the existence of the claimed planet. In 2010, six binary stars were astrometrically measured, and one of the star systems, HD 176051, was found with "high confidence" to have a planet. In 2018, a study comparing observations from the Gaia spacecraft with Hipparcos data for the Beta Pictoris system was able to measure the mass of Beta Pictoris b, in good agreement with previous mass estimates of roughly 13 Jupiter masses. In 2019, data from the Gaia spacecraft and its predecessor Hipparcos were complemented with HARPS data, enabling a better description of ε Indi Ab as the second-closest Jupiter-like exoplanet, with a mass of about 3 Jupiter masses on a slightly eccentric orbit with an orbital period of 45 years. Since then, especially thanks to Gaia, the combination of radial velocity and astrometry has been used to detect and characterize numerous Jovian planets, including the nearest Jupiter analogues, ε Eridani b and ε Indi Ab. In addition, radio astrometry using the VLBA has been used to discover planets in orbit around TVLM 513-46546 and EQ Pegasi A.

X-ray eclipse

In September 2020, the detection of a candidate planet orbiting the high-mass X-ray binary M51-ULS-1 in the Whirlpool Galaxy was announced. The planet was detected through eclipses of the X-ray source, which consists of a stellar remnant (either a neutron star or a black hole) and a massive star, likely a B-type supergiant. This is the only method capable of detecting a planet in another galaxy.

Disc kinematics

Planets in formation can be detected by the signatures they produce in their natal protoplanetary discs. The velocities of the gas in a protoplanetary disc can be observed, and their morphology can reveal the presence of planets, which perturb the gas velocities by imprinting strong deviations from Keplerian motion. This method is now referred to as "disc kinematics". Notable examples of protoplanetary discs around young stars with signatures of embedded planets include HD 97048, HD 163296 and HD 100546.

Other possible methods

Flare and variability echo detection

Non-periodic variability events, such as flares, can produce extremely faint echoes in the light curve if they reflect off an exoplanet or other scattering medium in the star system. More recently, motivated by advances in instrumentation and signal-processing technologies, echoes from exoplanets have been predicted to be recoverable from high-cadence photometric and spectroscopic measurements of active star systems, such as M dwarfs. These echoes are theoretically observable at all orbital inclinations.

Transit imaging

An optical/infrared interferometer array (e.g. a 16-element interferometer array such as the Big Fringe Telescope) does not collect as much light as a single telescope of equivalent size, but has the resolution of a single telescope the size of the array. For bright stars, this resolving power could be used to image a star's surface during a transit event and observe the shadow of the planet transiting. This could provide a direct measurement of the planet's angular radius and, via parallax, its actual radius.
Such a direct measurement would be more accurate than radius estimates based on transit photometry, which depend on stellar radius estimates that in turn depend on models of star characteristics. Imaging would also provide a more accurate determination of the inclination than photometry does. Magnetospheric (auroral) radio emissions Auroral radio emissions from exoplanet magnetospheres could be detected with radio telescopes. The emission may be caused by the exoplanet's magnetic field interacting with a stellar wind, adjacent plasma sources (such as Jupiter's volcanic moon Io travelling through its magnetosphere) or the interaction of the magnetic field with the interstellar medium. Although several detections have been claimed, none has thus far been verified. The most sensitive searches for direct radio emissions from exoplanet magnetic fields, or from exoplanet magnetic fields interacting with those of their host stars, have been conducted with the Arecibo radio telescope. In addition to allowing the study of exoplanet magnetic fields, radio emissions may be used to measure the interior rotation rate of an exoplanet. Optical interferometry In March 2019, ESO astronomers, employing the GRAVITY instrument on their Very Large Telescope Interferometer (VLTI), announced the first direct detection of an exoplanet, HR 8799 e, using optical interferometry. Modified interferometry By examining the wiggles of an interferogram with a Fourier-transform spectrometer, enhanced sensitivity could be obtained to detect faint signals from Earth-like planets. Detection of dust trapping around Lagrangian points Identification of dust clumps along a protoplanetary disk can demonstrate accumulation around Lagrangian points; from the detection of this dust, the presence of a planet that created those accumulations can be inferred. Gravitational waves The Laser Interferometer Space Antenna (LISA) for observing gravitational waves is expected to detect the presence of large planets and brown dwarfs orbiting white dwarf binaries. The number of such detections in the Milky Way is estimated to range from 17 in a pessimistic scenario to more than 2000 in an optimistic scenario, and even extragalactic detections in the Magellanic Clouds might be possible, far beyond the current capabilities of other detection methods. Detection of extrasolar asteroids and debris disks Circumstellar disks Disks of space dust (debris disks) surround many stars. The dust can be detected because it absorbs ordinary starlight and re-emits it as infrared radiation. Even if the dust particles have a total mass well below that of Earth, they can still have a large enough total surface area to outshine their parent star at infrared wavelengths. The Hubble Space Telescope is capable of observing dust disks with its NICMOS (Near Infrared Camera and Multi-Object Spectrometer) instrument. Even better images have since been taken by the Spitzer Space Telescope and by the European Space Agency's Herschel Space Observatory, both of which can see far deeper into infrared wavelengths than the Hubble can. Dust disks have now been found around more than 15% of nearby sunlike stars. The dust is thought to be generated by collisions among comets and asteroids. Radiation pressure from the star will push the dust particles away into interstellar space over a relatively short timescale. 
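The infrared signature of such a disk depends on the dust temperature, which for large, grey grains can be approximated by blackbody equilibrium with the starlight. A minimal sketch under that assumption (real grain temperatures also depend on size and composition):

# Blackbody equilibrium temperature of dust grains around a star:
# T ~ 278 K * (L / Lsun)^(1/4) / sqrt(a / AU), assuming grey grains.

def dust_temperature_k(luminosity_lsun, orbit_au):
    return 278.3 * luminosity_lsun ** 0.25 / orbit_au ** 0.5

# Kuiper-belt-like dust at 40 AU around a Sun-like star:
print(dust_temperature_k(1.0, 40.0))  # ~44 K, re-emitting in the far infrared

At a few tens of kelvin the re-emitted light peaks at far-infrared wavelengths, which is why observatories like Herschel, mentioned above, are well suited to finding these disks.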
Because radiation pressure removes dust so quickly, its detection indicates continual replenishment by new collisions, and provides strong indirect evidence of the presence of small bodies like comets and asteroids that orbit the parent star. For example, the dust disk around the star Tau Ceti indicates that the star has a population of objects analogous to our own Solar System's Kuiper Belt, but at least ten times thicker. More speculatively, features in dust disks sometimes suggest the presence of full-sized planets. Some disks have a central cavity, meaning that they are really ring-shaped. The central cavity may be caused by a planet "clearing out" the dust inside its orbit. Other disks contain clumps that may be caused by the gravitational influence of a planet. Both these kinds of features are present in the dust disk around Epsilon Eridani, hinting at the presence of a planet with an orbital radius of around 40 AU (in addition to the inner planet detected through the radial-velocity method). These kinds of planet-disk interactions can be modeled numerically using collisional grooming techniques. Contamination of stellar atmospheres Spectral analysis of white dwarfs' atmospheres often finds contamination by heavier elements like magnesium and calcium. These elements cannot originate from the star's core, and it is probable that the contamination comes from asteroids that got too close (within the Roche limit) to these stars through gravitational interaction with larger planets and were torn apart by the star's tidal forces. Up to 50% of young white dwarfs may be contaminated in this manner. Additionally, the dust responsible for the atmospheric pollution may be detected by infrared radiation if it exists in sufficient quantity, similar to the detection of debris discs around main-sequence stars. Data from the Spitzer Space Telescope suggest that 1–3% of white dwarfs possess detectable circumstellar dust. In 2015, minor planets were discovered transiting the white dwarf WD 1145+017. This material orbits with a period of around 4.5 hours, and the shapes of the transit light curves suggest that the larger bodies are disintegrating, contributing to the contamination of the white dwarf's atmosphere. Space telescopes Most confirmed extrasolar planets have been found using space-based telescopes (as of January 2015). Many of the detection methods can work more effectively with space-based telescopes that avoid atmospheric haze and turbulence. COROT (2007–2012) and Kepler were space missions dedicated to searching for extrasolar planets using transits. COROT discovered about 30 new exoplanets. Kepler (2009–2013) and its K2 extension (2013–) have discovered over 2000 verified exoplanets. The Hubble Space Telescope and MOST have also found or confirmed a few planets. The infrared Spitzer Space Telescope has been used to detect transits of extrasolar planets, as well as occultations of the planets by their host star and phase curves. The Gaia mission, launched in December 2013, will use astrometry to determine the true masses of 1000 nearby exoplanets. TESS (launched in 2018), CHEOPS (launched in 2019) and PLATO (planned for 2026) use or will use the transit method. 
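The transit signal these missions look for is simply the fractional dimming of the star while the planet crosses its disc, approximately the square of the planet-to-star radius ratio. A back-of-the-envelope sketch with approximate physical constants:

# Transit depth: fractional flux drop when a planet crosses its star,
# depth ~ (Rp / Rstar)^2. Radii below are approximate physical values.
R_SUN_KM, R_JUP_KM, R_EARTH_KM = 695_700.0, 69_911.0, 6_371.0

def transit_depth(planet_radius_km, star_radius_km):
    return (planet_radius_km / star_radius_km) ** 2

print(f"Jupiter across the Sun: {transit_depth(R_JUP_KM, R_SUN_KM):.2%}")              # ~1%
print(f"Earth across the Sun:   {transit_depth(R_EARTH_KM, R_SUN_KM) * 1e6:.0f} ppm")  # ~84 ppm

The roughly 84 parts-per-million Earth-sized signal illustrates why photometry from space, above atmospheric scintillation, is so valuable for this method.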
Primary and secondary detection
Verification and falsification methods:
Verification by multiplicity
Transit color signature
Doppler tomography
Dynamical stability testing
Distinguishing between planets and stellar activity
Transit offset
Characterization methods:
Transmission spectroscopy
Emission spectroscopy, phase-resolved
Speckle imaging / lucky imaging, to detect companion stars that the planets could be orbiting instead of the primary star, which would alter planet parameters that are derived from stellar parameters
Photoeccentric effect
Rossiter–McLaughlin effect
Physical sciences
Planetary science
Astronomy
16179834
https://en.wikipedia.org/wiki/Rabies%20vaccine
Rabies vaccine
The rabies vaccine is a vaccine used to prevent rabies. There are several rabies vaccines available that are both safe and effective. Vaccinations must be administered prior to rabies virus exposure or within the incubation period after exposure to prevent the disease. Transmission of rabies virus to humans typically occurs through a bite or scratch from an infectious animal, but exposure can also occur through indirect contact with the saliva of an infectious individual. Doses are usually given by injection into the skin or muscle. After exposure, the vaccination is typically used along with rabies immunoglobulin. It is recommended that those who are at high risk of exposure be vaccinated before potential exposure. Rabies vaccines are effective in humans and other animals, and vaccinating dogs is very effective in preventing the spread of rabies to humans. A long-lasting immunity to the virus develops after a full course of treatment. Rabies vaccines may be used safely by all age groups. About 35 to 45 percent of people develop a brief period of redness and pain at the injection site, and 5 to 15 percent of people may experience fever, headaches, or nausea. After exposure to rabies, there is no contraindication to its use, because untreated rabies infection is virtually 100% fatal. The first rabies vaccine was introduced in 1885 and was followed by an improved version in 1908. Over 29 million people worldwide receive the human rabies vaccine annually. It is on the World Health Organization's List of Essential Medicines. Medical uses Before exposure The World Health Organization (WHO) recommends vaccinating those who are at high risk of the disease, such as children who live in areas where it is common. Other groups may include veterinarians, researchers, or people planning to travel to regions where rabies is common. Three doses of the vaccine are given over a one-month period, on days zero, seven, and either twenty-one or twenty-eight. After exposure For individuals who have been potentially exposed to the virus, four doses over two weeks are recommended, as well as an injection of rabies immunoglobulin with the first dose. This is known as post-exposure vaccination. For people who have previously been vaccinated, only a single dose of the rabies vaccine is required. However, vaccination after exposure is neither a treatment nor a cure for rabies; it can only prevent the development of rabies in a person if given before the virus reaches the brain. Because the rabies virus has a relatively long incubation period, post-exposure vaccinations are typically highly effective. Additional doses Immunity following a course of doses is typically long-lasting, and additional doses are usually not needed unless the person has a high risk of contracting the virus. Those at risk may have tests done to measure the amount of rabies antibodies in the blood, and then get rabies boosters as needed. Following administration of a booster dose, one study found that 97% of immunocompetent individuals demonstrated protective levels of neutralizing antibodies after ten years. Safety Rabies vaccines are safe in all age groups. About 35 to 45 percent of people develop a brief period of redness and pain at the injection site, and 5 to 15 percent of people may experience fever, headaches, or nausea. Because untreated rabies is virtually always fatal, receiving the vaccine after exposure is always advisable. Vaccines made from nerve tissue are used in a few countries, mainly in Asia and Latin America, but are less effective and have greater side effects. 
Their use is thus not recommended by the World Health Organization. Types The human diploid cell rabies vaccine (HDCV) was introduced in 1967. Human diploid cell rabies vaccines are inactivated vaccines made using the attenuated Pitman-Moore L503 strain of the virus. In addition to these developments, newer and less expensive purified chicken embryo cell vaccines (CCEEV) and purified Vero cell rabies vaccines are now available and are recommended for use by the WHO. The purified Vero cell rabies vaccine uses the attenuated Wistar strain of the rabies virus, and uses the Vero cell line as its host. CCEEVs can be used in both pre- and post-exposure vaccinations. CCEEVs use inactivated rabies virus grown from either embryonated eggs or in cell cultures and are safe for use in humans and animals. The vaccine was attenuated and prepared in the human diploid cell strain WI-38, which was gifted to Hilary Koprowski at the Wistar Institute by Leonard Hayflick, an associate member, who developed this normal human diploid cell strain. Verorab, developed by Sanofi-Aventis, and Speeda, developed by Liaoning Chengda, are purified Vero cell rabies vaccines (PVRV). The first is approved by the World Health Organization. Verorab is approved for medical use in Australia and the European Union and is indicated for both pre-exposure and post-exposure prophylaxis against rabies. History Virtually all infections with rabies resulted in death until two French scientists, Louis Pasteur and Émile Roux, developed the first rabies vaccination in 1885. Nine-year-old Joseph Meister (1876–1940), who had been mauled by a rabid dog, was the first human to receive this vaccine. The treatment started with a subcutaneous injection on 6 July 1885, at 8:00 pm, which was followed by 12 additional doses administered over the following 10 days. The first injection was derived from the spinal cord of an inoculated rabbit which had died of rabies 15 days earlier. All the doses were obtained by attenuation, but later ones were progressively more virulent. The Pasteur-Roux vaccine attenuated the harvested virus samples by allowing them to dry for five to ten days. Similar nerve tissue-derived vaccines are still used in some countries, and while they are much cheaper than modern cell-culture vaccines, they are not as effective. Neural tissue vaccines also carry a certain risk of neurological complications. Society and culture Economics When the modern cell-culture rabies vaccine was first introduced in the early 1980s, it cost $45 per dose, and was considered to be too expensive. The cost of the rabies vaccine continues to be a limitation to acquiring pre-exposure rabies immunization for travelers from developed countries. In 2015, in the United States, a course of three doses could cost over , while in Europe a course costs around . It is possible and more cost-effective to split one intramuscular dose of the vaccine into several intradermal doses. This method is recommended by the World Health Organization (WHO) in areas constrained by cost or supply issues. According to the WHO, the intradermal route is as safe and effective as the intramuscular route. Veterinary use Pre-exposure immunization has been used on domesticated and wild populations. In many jurisdictions, domestic dogs, cats, ferrets, and rabbits are required to be vaccinated. 
There are two main types of vaccines used for domesticated animals and pets (including pets from wildlife species): inactivated rabies virus (similar technology to that given to humans), administered by injection; and modified live viruses, administered orally (by mouth), containing live rabies virus from attenuated strains, i.e. strains that have developed mutations that weaken them so that they do not cause disease. Imrab is an example of a veterinary rabies vaccine containing the Pasteur strain of killed rabies virus. Several different types of Imrab exist, including Imrab, Imrab 3, and Imrab Large Animal. Imrab 3 has been approved for ferrets and, in some areas, pet skunks. Dogs Aside from vaccinating humans, another approach to preventing the spread of the virus was developed by vaccinating dogs. In 1979, the Van Houweling Research Laboratory of the Silliman University Medical Center in Dumaguete in the Philippines developed and produced a dog vaccine that gave a three-year immunity from rabies. The development of the vaccine resulted in the elimination of rabies in many parts of the Visayas and Mindanao Islands. The successful program in the Philippines was later used as a model by other countries, such as Ecuador and the Mexican state of Yucatán, in their fight against rabies, conducted in collaboration with the World Health Organization. In Tunisia, a government-sponsored rabies control program was initiated to give dog owners free vaccination and so promote mass vaccination. The vaccine used countrywide is Rabisin (Mérial), a cell-based rabies vaccine. Vaccinations are often administered when owners take in their dogs for check-ups and visits at the vet. Oral rabies vaccines (see below for details) have been trialled on feral and stray dogs in some areas with high rabies incidence, as this could potentially be more efficient than catching and injecting them. However, these have not yet been deployed for dogs at large scale. Wild animals Wildlife species, primarily bats, raccoons, skunks, and foxes, act as reservoir species for different variants of the rabies virus in distinct geographic regions of the United States. This results in the general occurrence of rabies as well as outbreaks in animal populations. Approximately 90% of all reported rabies cases in the US are from wildlife. Oral rabies vaccine Oral rabies vaccines are distributed across the landscape, targeting reservoir species, in an effort to produce a herd immunity effect. The idea of wildlife vaccination was conceived during the 1960s, and modified-live rabies viruses were used for the experimental oral vaccination of carnivores by the 1970s. Development of an oral immunization for wildlife began in the United States with laboratory trials using the live, attenuated Evelyn-Rokitnicki-Abselseth (ERA) vaccine, derived from the Street Alabama Dufferin (SAD) strain. The first ORV field trial, using the live attenuated vaccine to immunize foxes, occurred in Switzerland during 1978. There are currently three different types of oral wildlife rabies vaccine in use: modified live virus, i.e. attenuated vaccine strains of rabies virus such as SAG2 and SAD B19; recombinant vaccinia virus expressing the rabies glycoprotein (V-RG), a strain of the vaccinia virus (originally a smallpox vaccine) that has been engineered to encode the gene for the rabies glycoprotein and has been proven safe in over 60 animal species, including cats and dogs; 
and ONRAB, an experimental live recombinant adenovirus vaccine. Other experimental oral rabies vaccines in development include further recombinant adenovirus vaccines. Oral rabies vaccination (ORV) programs have been used in many countries in an effort to control the spread of rabies and limit the risk of human contact with the rabies virus. ORV programs were initiated in Europe in the 1980s, in Canada in 1985, and in the United States in 1990. ORV is a preventive measure to eliminate rabies in wild animal vectors of disease, mainly foxes, raccoons, raccoon dogs, coyotes and jackals, but it can also be used for dogs in developing countries. ORV programs typically use attractive baits to deliver the vaccine to targeted animals. In the United States, RABORAL V-RG (Boehringer Ingelheim, Duluth, GA, USA) has been the only licensed ORV for rabies virus management since 1997. However, ONRAB "Ultralite" (Artemis Technologies Inc., Guelph, Ontario, Canada) baits have been distributed by the United States Department of Agriculture (USDA) in select areas of the eastern United States under an experimental permit to target raccoons since 2011. RABORAL V-RG baits consist of a small packet containing the oral vaccine which is then either coated in a fishmeal paste or encased in a fishmeal-polymer block. ONRAB "Ultralite" baits consist of a blister pack with a coating matrix of vanilla flavor, green food coloring, vegetable oil and hydrogenated vegetable fat. When an animal bites into the bait, the packet bursts and the vaccine is administered. Current research suggests that if an adequate amount of the vaccine is ingested, immunity to the virus should last for upwards of one year. By immunizing wild or stray animals, ORV programs work to create a buffer zone between the rabies virus and potential contact with humans, pets, or livestock. Landscape features such as large bodies of water and mountains are often used to enhance the effectiveness of the buffer. The effectiveness of ORV campaigns in specific areas is determined through trap-and-release methods. Titer tests are performed on the blood drawn from the sampled animals in order to measure rabies antibody levels in the blood. Baits are usually distributed by aircraft to more efficiently cover large, rural regions. In order to place baits more precisely and to minimize human and pet contact with baits, they are distributed by hand in suburban or urban regions. The standard bait distribution density is 75 baits/km2 in rural areas and 150 baits/km2 in urban and developed areas. Implementation of ORV programs in the United States led to the elimination of the coyote rabies virus variant in 2003 and the gray fox variant in 2013. Furthermore, ORV has been successful in preventing the westward expansion of the raccoon rabies enzootic front beyond Alabama.
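Given the standard distribution densities quoted above, the scale of an ORV campaign over a planned buffer zone can be estimated directly. A minimal sketch; the zone areas are hypothetical examples, not figures from any actual campaign:

# Estimate oral-rabies-vaccine bait counts from the standard densities
# quoted above (75 baits/km2 rural, 150 baits/km2 urban/developed).
BAITS_PER_KM2 = {"rural": 75, "urban": 150}

def baits_needed(area_km2, setting):
    return area_km2 * BAITS_PER_KM2[setting]

# A hypothetical buffer zone: 400 km2 rural with a 25 km2 urban corridor.
total = baits_needed(400, "rural") + baits_needed(25, "urban")
print(total)  # 33750 baits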
Biology and health sciences
Vaccines
Health
11940676
https://en.wikipedia.org/wiki/Eutriconodonta
Eutriconodonta
Eutriconodonta is an order of early mammals. Eutriconodonts existed in Asia (including the then-separate Indian landmass), Africa, Europe, North and South America during the Jurassic and the Cretaceous periods. The order was named by Kermack et al. in 1973 as a replacement name for the paraphyletic Triconodonta. Traditionally seen as the classical Mesozoic small mammalian insectivores, discoveries over the years have shown them to be among the best examples of the diversity of mammals in this time period, including a vast variety of body plans, ecological niches and locomotion methods. Classification "Triconodonta" had long been used as the name for an order of early mammals which were close relatives of the ancestors of all present-day mammals, characterized by molar teeth with three main cusps arranged in a row on the crown. The group originally included only the family Triconodontidae and taxa that were later assigned to the separate family Amphilestidae, but was later expanded to include other taxa such as Morganucodon and Sinoconodon. Phylogenetic analyses found that these taxa did not form a natural group, and that some traditional "triconodonts" were more closely related to therian mammals than others. Some traditional "triconodonts" do seem to form a natural group (or "clade"), and this was given the name Eutriconodonta ("true triconodonts"). Most analyses use only dental and mandibular characters. Gao et al. (2010) conducted a second analysis as well, using a modified version of the matrix from the analysis of Luo et al. (2007); this analysis involved a broader range of Mesozoic mammaliaforms and more characters, including postcranial ones. Both Luo et al. (2007) and the second analysis of Gao et al. (2010) recovered a more inclusive monophyletic Eutriconodonta that also contained gobiconodontids and Amphilestes; in the second analysis of Gao et al. it also contained Juchilestes (recovered as an amphidontid in their first analysis, the only amphidontid included in their second analysis). However, Gao et al. (2010) stressed that jeholodentids and gobiconodontids are the only eutriconodonts with known postcranial skeletons; according to the authors, it remains uncertain whether the results of their second analysis represent true phylogeny or are merely "a by-product of long branch attraction of jeholodentids and gobiconodontids". Phylogenetic studies conducted by Zheng et al. (2013), Zhou et al. (2013) and Yuan et al. (2013) recovered a monophyletic Eutriconodonta containing triconodontids, gobiconodontids, Amphilestes, Jeholodens and Yanoconodon. The exact phylogenetic placement of eutriconodonts within Mammaliaformes is also uncertain. Zhe-Xi Luo, Zofia Kielan-Jaworowska and Richard Cifelli (2002) conducted an analysis that recovered eutriconodonts within the crown group of Mammalia, i.e. the least inclusive clade containing monotremes and therian mammals. The analysis found eutriconodonts to be more closely related to therian mammals than monotremes were, but more distantly than (paraphyletic) amphitheriids, dryolestids, spalacotheriid "symmetrodonts" and multituberculates were. This result was mostly confirmed by Luo et al. (2007), the second analysis of Gao et al. (2010), Zheng et al. (2013), Zhou et al. (2013) and Yuan et al. (2013), although in the phylogenies of Luo et al. (2007) and Yuan et al. (2013) eutriconodonts were in an unresolved polytomy with multituberculates and trechnotherians. 
If confirmed, this would make eutriconodonts one of the groups that can be classified as mammals by any definition. Several other extinct groups of Mesozoic animals that are traditionally considered to be mammals (such as Morganucodonta and Docodonta) are now placed just outside Mammalia by those who advocate a 'crown-group' definition of the word "mammal". However, Luo, Kielan-Jaworowska and Cifelli (2002) tested alternative possible phylogenies as well, and found that recovering eutriconodonts outside the crown group of Mammalia required only five additional steps compared to the most parsimonious solution. The authors stated that such a placement of eutriconodonts is less likely than their placement within the mammalian crown group, but it cannot be rejected on a statistical basis. The most recent cladogram is by Thomas Martin et al. (2015), in their description of Spinolestes. Eutriconodonts are recovered as a largely monophyletic group within Theriimorpha. A 2020 study found them paraphyletic with respect to crown-group Mammalia. Range When eutriconodonts first appeared is unclear. The earliest remains come from the late Early Jurassic (Toarcian), but they already represent a variety of groups: the volaticotherian Argentoconodon, the alticonodontine Victoriaconodon and the gobiconodontid Huasteconodon, as well as the putative eutriconodont "Dyskritodon" indicus. They achieved their peak diversity across the Early Cretaceous, before largely disappearing from the fossil record in the early Late Cretaceous outside of North America. The Maastrichtian genus Indotriconodon is the youngest representative of the group, hailing from the intertrappean beds of India; the Campanian/Maastrichtian Austrotriconodon was originally referred to this clade as a late-surviving member, but has since been moved to Dryolestoidea. Most eutriconodont remains occur on Laurasian landmasses. The exceptions are Argentoconodon and the slightly younger Condorodon from the Early Jurassic of Argentina, the putative Dyskritodon indicus from the Early Jurassic of India (Kota Formation), the Late Jurassic Tendagurodon from Tanzania (Tendaguru Formation), several Early Cretaceous north African taxa like Ichthyoconodon, Dyskritodon amazighi and Gobiconodon palaios, and Indotriconodon magnus from the Late Cretaceous of India. Given the rarity of the Jurassic Gondwanan fossil record, the presence of eutriconodonts in southern landmasses is of interest because of their comparatively early age. Eutriconodonts are among the few Mesozoic mammals present at Arctic locations; docodonts and haramiyidans (generally considered non-mammalian cynodonts) are also present, but not therians, dryolestoids and other groups considered true mammals. Biology Anatomy Like many other non-therian mammals, eutriconodonts retained classical mammalian synapomorphies like epipubic bones (and likely the associated reproductive constrictions), venomous spurs and sprawling limbs. However, the forelimb and shoulder anatomy of at least some species like Jeholodens is similar to that of therian mammals, though the hindlimbs remain more conservative. Eutriconodonts had a modern ear anatomy, the main difference from therians being that the ear ossicles were still somewhat connected to the jaw via Meckel's cartilage. Uniquely among crown-group mammals, gobiconodontids replaced their molariform teeth with successors of similar complexity, whereas in other mammals less complex replacements are the norm. 
Soft tissues Some eutriconodonts like Spinolestes and Volaticotherium are very well preserved, showing evidence of fur, internal organs and, in the latter, of patagia. Spinolestes shows hair similar to that of modern mammals, with compound hair follicles bearing primary and secondary hairs, even preserving traces of a pore infection. It also possesses a clear thoracic diaphragm like modern mammals, as well as spines, dermal scutes and an ossified Meckel's cartilage. Furthermore, Spinolestes may also display signs of dermatophytosis, suggesting that gobiconodontids, like modern mammals, were vulnerable to this type of fungal infection. Triconodon itself has been the subject of cranial endocast studies, revealing a unique brain anatomy. Paleobiology The triconodont dentition of eutriconodonts has no analogue among living mammals, so comparisons are difficult. There are two main types of occlusion patterns: one present in triconodontids (as well as the unrelated morganucodontan mammals), in which lower cusp "a" occludes anterior to upper cusp "A", between "A" and "B", and one present in amphilestids and gobiconodontids, in which the molars basically alternate, with the lower cusp "a" occluding further forward, near the junction between two upper molars. A study on Priacodon, however, suggests that only the latter arrangement was present. Nevertheless, it is clear that most if not all eutriconodonts were primarily carnivorous, given the presence of long, sharp canines, premolars with trenchant main cusps that were well suited to grasping and piercing prey, strong development of the mandibular adductor musculature, bone-crushing ability in at least some species, and several other features. Eutriconodont teeth are known to have had a shearing function, allowing the animal to tear through flesh much like the carnassial teeth of therian mammals. In a study of Mesozoic mammalian diets, Repenomamus, Gobiconodon, Argentoconodon, Phascolotherium, Triconodon and Liaoconodon ranked among carnivorous mammal species; Volaticotherium, Liaotherium, Amphilestes and Jeholodens ranked among insectivorous mammals; and Yanoconodon, Priacodon and Trioracodon ranked somewhere in between. A study on Priacodon suggests that jaw roll was more passive in eutriconodonts than in modern therian carnivores. Eutriconodonts displayed a broad size range, from small shrew-like insectivores with body masses of as little as , comparable to the smallest known modern mammals, to large forms like Repenomamus, which is estimated to have had a body mass of , comparable to a badger. They were among the first mammals to be specialised for vertebrate prey, and likely occupied the highest trophic levels among mammals in their faunal communities. Several forms like Gobiconodon and Repenomamus show evidence of scavenging, being among the few Mesozoic mammals to have significantly exploited that resource. Evidence of predation on significantly larger dinosaurs is also known. At least in carnivorous niches, eutriconodonts were probably replaced by deltatheroidean metatherians, which are the dominant carnivorous mammals in Late Cretaceous faunal assemblages. Competition between the two groups is unattested, but in Asia the Early Cretaceous gobiconodontid diversity is replaced entirely by a deltatheroidean one, while in North America Nanocuris appears after the absence of Gobiconodon and other larger eutriconodonts. 
Given that all insectivorous and carnivorous mammal groups suffered heavy losses during the mid-Cretaceous, it seems likely that these metatherians simply occupied niches left vacant after the extinction of eutriconodonts in the northern continents. Some eutriconodonts were instead among the most specialised of Mesozoic mammals. Several taxa like Astroconodon, Dyskritodon and Ichthyoconodon may show adaptations for piscivory and occur in aquatic settings, their molars having been compared to those of seals and cetaceans. Caution has been advised in these comparisons, however; as many researchers like Zofia Kielan-Jaworowska have noted, eutriconodont molars are more functionally similar to those of terrestrial carnivorans than to those of pinnipeds and cetaceans, occluding in a shearing motion rather than serving a non-occluding, grasping function. However, the teeth of Dyskritodon and Ichthyoconodon show no erosion associated with aquatic transportation, meaning that the animals died in situ or close by. Studies on Liaoconodon show that it had adaptations for an aquatic lifestyle, possessing a barrel-like body and paddle-like limbs, and analysis of the postcrania of Yanoconodon shows adaptations towards multiple forms of locomotion, with traits in common with fossorial, arboreal, and semiaquatic mammals. Additionally, Volaticotherium and Argentoconodon show adaptations for aerial locomotion. Both genera are closely related, implying a long-lived lineage of gliding mammals. At least Spinolestes had xenarthrous vertebrae and osseous scutes, convergent with those of modern xenarthrans and, to a lesser extent, the hero shrew. This genus may have displayed an ecological role similar to that of modern anteaters, pangolins, echidnas, the aardvark, the aardwolf and the numbat, being the second known Mesozoic mammal after Fruitafossor to have done so. Reproductive biology Triconodon shows dental replacement patterns consistent with those of milk-drinking mammals.
Biology and health sciences
Stem-mammals
Animals
11941349
https://en.wikipedia.org/wiki/Residential%20treatment%20center
Residential treatment center
A residential treatment center (RTC), sometimes called a rehab, is a live-in health care facility providing therapy for substance use disorders, mental illness, or other behavioral problems. Residential treatment may be considered the "last-ditch" approach to treating abnormal psychology or psychopathology. A residential treatment program encompasses any residential program which treats a behavioral issue, including milder psychopathology such as eating disorders (e.g. weight loss camps) or indiscipline (e.g. fitness boot camps as lifestyle interventions). Sometimes residential facilities provide enhanced access to treatment resources without those seeking treatment being considered residents of a treatment program, such as the sanatoriums of Eastern Europe. Controversial uses of residential programs for behavioral and cultural modification include conversion therapy and the mandatory American and Canadian residential schools for indigenous populations. A common feature of residential programs is controlled social access to people outside the program, and limited access for outside parties to witness daily conditions within the program. Within psychiatry, it is understood that it can be almost impossible to change entrenched behavior without impacting habitual relationships, at least in the short term, but the relatively closed nature of many residential programs also makes it possible to conceal abusive practice. Upon discharge, the patient may be enrolled in an intensive outpatient program for follow-up outside the residential setting. Historical background in the United States In the 1600s, England established the Poor Law, which allowed poor children to be trained in apprenticeships by removing them from their families and forcing them to live in group homes. In the 1800s, the United States copied this system, but mentally ill children were often placed in jail with adults because society did not know what to do with them: there were no RTCs in place to provide the 24-hour care they needed when they could not live in the home. In the 1900s, Anna Freud and her peers, as part of the Vienna Psychoanalytic Society, worked on how to care for children, and they worked to create residential treatment centers for children and adolescents with emotional and behavioral disorders. The year 1944 marked the beginning of Bruno Bettelheim's work at the Orthogenic School in Chicago, and of Fritz Redl and David Wineman's work at the Pioneer House in Detroit. Bettelheim helped increase awareness of the effect of staff attitudes on children in treatment. He reinforced the idea that a psychiatric hospital was a community, where staff and patients influenced each other and patients were shaped by each other's behaviors. Bettelheim also believed that families should not have frequent contact with their child while he or she was in treatment. This differs from the community-based therapy and family therapy of recent years, in which the goal of treatment is for a child to remain in the home, and emphasis is placed on the family's role in improving long-term outcomes after treatment in an RTC. The Pioneer House created a special-education program to help improve impulse control and sociability in children. After WWII, Bettelheim and the joint efforts of Redl and Wineman were instrumental in establishing residential facilities as a therapeutic-treatment alternative for children and adolescents who cannot live at home. In the 1960s, the second generation of psychoanalytical RTCs was created. 
These programs continued the work of the Vienna Psychoanalytic Society by including families and communities in the child's treatment. One example of this is the Walker Home and School, which was established by Dr. Albert Treischman in 1961 for adolescent boys with severe emotional or behavioral disorders. He involved families in order to help them develop relationships with their children within homes, public schools and communities. Family and community involvement made this program different from previous programs. Beginning in the 1980s, cognitive behavioral therapy was more commonly used in child psychiatry as a source of intervention for troubled youth, and was applied in RTCs to produce better long-term results. Attachment theory also developed in response to the rise in children admitted to RTCs who had been abused or neglected. These children needed specialized care by caretakers who were knowledgeable about trauma. In the 1990s, the number of children entering RTCs increased dramatically, leading to a policy shift from institution-based services to a family-centered community system of care. This also reflected the lack of appropriate treatment resources. However, residential treatment centers have continued to grow and today house over 50,000 children. The number of residential treatment centers treating individuals of all ages in the United States is currently estimated at 28,900 facilities. Children and teens RTCs for adolescents, sometimes referred to as teen rehab centers if they also deal with addiction, provide treatment for issues and disorders such as oppositional defiant disorder, conduct disorder, depression, bipolar disorder, attention deficit hyperactivity disorder (ADHD), educational issues, some personality disorders, and phase-of-life issues, as well as substance use disorders. Most use a behavior modification paradigm. Others are relationally oriented. Some utilize a community or positive peer-culture model. Generalist programs are usually large (80-plus clients and as many as 250) and level-focused in their treatment approach. That is, in order to manage clients' behavior, they frequently put systems of rewards and punishments in place. Specialist programs are usually smaller (less than 100 clients and as few as 10 or 12). Specialist programs typically are not as focused on behavior modification as generalist programs are. Different RTCs work with different types of problems, and the structure and methods of RTCs vary. Some RTCs are lock-down facilities; that is, the residents are locked inside the premises. In a locked residential treatment facility, clients' movements are restricted. By comparison, an unlocked residential treatment facility allows them to move about the facility with relative freedom, but they are only allowed to leave the facility under specific conditions. Residential treatment centers should not be confused with residential education programs, which offer an alternative environment for at-risk children to live and learn together outside their homes. Residential treatment centers for children and adolescents treat multiple conditions, from drug and alcohol addictions to emotional and physical disorders as well as mental illnesses. Various studies of youth in residential treatment centers have found that many have a history of family-related issues, often including physical or sexual abuse. Some facilities address specialized disorders, such as reactive attachment disorder (RAD). 
Residential treatment centers generally are clinically focused and primarily provide behavior management and treatment for adolescents with serious issues. In contrast, therapeutic boarding schools provide therapy and academics in a residential boarding school setting, employing a staff of social workers, psychologists, and psychiatrists to work with the students on a daily basis. This form of treatment has as its goals academic achievement as well as physical and mental stability in children, adolescents, and young adults. Recent trends have brought residential treatment facilities more input from behavioral psychologists, to improve outcomes and lessen unethical practices. Behavioral interventions Behavioral interventions have been very helpful in reducing problem behaviors in residential treatment centers. The type of clients receiving services in a facility (children with emotional or behavioral disorders versus intellectual disability versus psychiatric disorders) is a factor in the effectiveness of behavior modification. Behavioral intervention has been found to be successful even when medication interventions fail. However, there is evidence that certain populations may benefit more from interventions that fall outside of the behavior-modification paradigm. For instance, positive outcomes have been reported for neurosequential interventions targeting issues of early childhood trauma and attachment (Perry, 2006). Although the majority of children who receive services in RTCs present emotional and behavioral disorders (EBDs), such as attention deficit hyperactivity disorder (ADHD), oppositional defiant disorder (ODD), and conduct disorder (CD), behavior-modification techniques can be an effective way of decreasing the maladaptive behavior of these clients. Interventions such as response cost, token economies, social skills training groups, and the use of positive social reinforcement can be used to increase prosocial behavior in children (Ormrod, 2009). Behavioral interventions are successful in treating children with behavioral disorders in part because they incorporate two principles that make up the core of how children learn: conceptual understanding and building on their pre-existing knowledge. Research by Resnick (1989) shows that even infants are able to develop basic quantitative frameworks. New information is incorporated into the framework and serves as the basis for the problem-solving skills a child develops as she or he is exposed to different types of stimuli (e.g., new situations, people, or environments). The experiences and environment that a child is exposed to can have either a positive or negative outcome, which, in turn, impacts how he or she remembers, reasons, and adapts when encountering aversive stimuli. Furthermore, when children have acquired extensive knowledge, it affects what they notice and how they organize, represent, and interpret information in their current environment (Bransford, Brown, & Cocking, 2000). Many of the children housed in RTCs have been exposed to negative environmental factors that have contributed to the behavior problems that they are exhibiting. Many interventions build on children's prior knowledge of how reward works. 
Reinforcing children for pro-social behaviors helps them develop a deeper understanding of its positive results. Such reinforcement includes token economies, in which children earn tokens for appropriate behaviors; response cost, in which previously earned tokens are lost following inappropriate behavior; and social-skills training groups, in which participants observe and practice modeling appropriate social behaviors. Wolfe, Dattilo, & Gast (2003) found that using a token economy in concert with cooperative games increased pro-social behaviors (e.g. statements of encouragement, praise, or appreciation, shaking hands, and giving high fives) while decreasing anti-social ones (swearing, threatening peers with physical harm, name-calling, and physical aggression). The use of a response-cost system has been efficacious in reducing problem behaviors. A single-subject withdrawal design employing non-contingent reinforcement with response cost was used to reduce maladaptive verbal and physical behaviors exhibited by a post-institutional student with ADHD (Nolan & Filter, 2012). Wilhite & Bullock (2012) implemented a social-skills training group to increase the social competence of students with EBDs. Results showed significant differences between pre- and post-intervention disciplinary referrals, as well as several other elements of behavioral-ratings scales. Evidence also exists for the usefulness of social reinforcement as a part of behavioral interventions for children with ADHD. A study by Kohls, Herpertz-Dahlmann, & Kerstin (2009) found that both social and monetary rewards increased inhibition control in both the control and experimental groups. However, results showed that children with ADHD benefitted more from social reinforcement than typical children did, indicating that social reinforcement can significantly improve cognitive control in children with ADHD. The techniques listed are only a few of the many types of behavioral interventions that can be used to treat children with EBDs. Additional information regarding types of behavioral interventions can be found in the 2003 book Behavioral, Social, and Emotional Assessment of Children and Adolescents by Kenneth Merrell. Types of Family Therapy used in Residential Treatment Centers Narrative Therapy: Narrative therapy has shown an increase in popularity in the field of family therapy. Narrative therapy developed out of the postmodern viewpoint, which is expressed in its principles: (a) not one universal reality exists, but rather socially constructed realities; (b) reality is created by language; (c) narrative maintains reality; and (d) not all narratives are equivalent (Freedman and Combs, 1996). From those roots, narrative family therapy views human issues as emerging from, and being sustained by, the dominant stories that control the life of an individual. Problems arise when individual stories do not match their experience of living. According to the narrative viewpoint, therapy is a process of rewriting personal narratives by offering a new and distinct perspective on a problem-saturated narrative. 
The process of rewriting the client's narrative involves (a) expressing the problem(s) they are experiencing; (b) breaking down, through questioning, the narratives that trigger problems; (c) recognizing special outcomes, or occasions where the person has not been constrained by their situation; (d) connecting those outcomes to the future and providing an alternative and desired narrative; (e) inviting supporters from the community to witness the new narrative; and (f) documenting the new narrative. Since postmodern viewpoints prioritize concepts rather than techniques, formal methods in narrative therapy are limited. However, some researchers have described techniques that are useful in helping an individual rewrite a specific experience, like retelling stories and writing letters. Children admitted to a residential treatment center have behavior problems so extreme that residential treatment is their last hope. Parents often see the child as the problem that needs to be fixed, after which everything will be okay; the child, on the other hand, generally sees themselves as a victim. Narrative therapy enables these perspectives to be broken down and the troubling behaviors of the child to be externalized, which can encourage both the child and the family members to achieve a new perspective in which no one feels persecuted or blamed. Multisystemic Therapy (MST): The model has shown success in sustaining long-standing improvements in children's and adolescents' antisocial behaviors. Families in MST have demonstrated improved family stability, post-treatment adaptability and growing support, and reduced conflict and hostility. The method's ultimate objectives include a) eliminating behavior problems, b) enhancing family functioning, c) strengthening the adolescents' ability to perform better at school and in other community settings, and d) decreasing out-of-home placements. Controversy Disability rights organizations, such as the Bazelon Center for Mental Health Law, oppose placement in RTC programs, calling into question the appropriateness and efficacy of such placements, noting the failure of such programs to address problems in the child's home and community environment, and calling attention to the limited mental-health services offered and substandard educational programs. Concerns specifically related to a specific type of residential treatment center called therapeutic boarding schools include: inappropriate discipline techniques, medical neglect, restricted communication such as lack of access to child protection and advocacy hotlines, and lack of monitoring and regulation. Bazelon promotes community-based services on the basis that they are more effective and less costly than residential placement. A 2007 Report to Congress by the Government Accountability Office (GAO) found cases involving serious abuse and neglect at some of these programs. From late 2007 through 2008, a broad coalition of grass-roots efforts, as well as prominent medical and psychological organizations such as the Alliance for the Safe, Therapeutic and Appropriate use of Residential Treatment (ASTART) and the Community Alliance for the Ethical Treatment of Youth (CAFETY), provided testimony and support that led to the creation of the Stop Child Abuse in Residential Programs for Teens Act of 2008 by the United States Congress Committee on Education and Labor. 
Jon Martin-Crawford and Kathryn Whitehead of CAFETY testified at a hearing of the United States Congressional Committee on Education and Labor on April 24, 2008, and described abusive practices they had experienced at the Family Foundation School and Mission Mountain School, both therapeutic boarding schools. In recent years, many states have enacted regulation and oversight of most programs. Due to the absence of federal regulation of these programs, and because at that time many were not subject to state licensing or monitoring, the Federal Trade Commission issued a guide for parents considering such placement. Residential treatment programs are often caught in the cross-fire during custody battles, as parents who are denied custody try to discredit the opposing spouse and the treatment program. Research on effectiveness Studies of different treatment approaches have found that residential treatment is effective for individuals with a long history of addictive behavior or criminal activity. RTCs offer a variety of structured programs designed to address the specific needs of their residents. Despite the controversy surrounding the efficacy of RTCs, recent research has revealed that community-based residential treatment programs have positive long-term effects for children and youth with behavioral problems. Participants in a pilot program employing family-driven care and positive peer modeling displayed no incidence of elopement, self-injurious behaviors, or physical aggression, and just one case of property destruction, when compared to a control group (Holstead, 2010). The success of treatment for children in RTCs depends heavily on their background, i.e., their state, situation, circumstances and behavioral status before commencement of treatment. Children who displayed lower rates of internalizing and externalizing behavior problems at intake and had a lower level of exposure to negative environmental factors (e.g., domestic violence, parental substance use, high crime rates) showed better results than children whose symptoms were more severe (den Dunnen, 2012). Additional research demonstrates that planned treatment, or knowing the expected duration of treatment, is strongly correlated with positive treatment outcomes. Long-term results for children using planned treatment showed that they are 21% less likely to engage in criminal behavior and 40% less likely to need hospitalization for mental-health problems (Lindqvist, 2010). Further evidence exists supporting the long-term effectiveness of RTCs for children exhibiting severe mental health issues. Preyde (2011) found that clients showed a statistically significant reduction in symptom severity 12–18 months after leaving an RTC, results which were maintained 36–40 months after their discharge from the facility. However, although there is a great deal of research supporting the validity of RTCs as a way of treating children and youth with behavioral disorders, little is known about the outcomes-monitoring practices of such facilities. Those that track clients after they leave the RTC only do so for an average of six months. In order to continue to provide effective long-term treatment to at-risk populations, further efforts are needed to encourage the monitoring of outcomes after discharge from residential treatment (J.D. Brown, 2011). One problem that hinders the effectiveness of RTCs is elopement, or "running". 
A study by Kashubeck found that runaways from RTCs were "more likely to have a history of elopement, a suspected history of sexual abuse, an affective-disorder diagnosis, and parents whose rights had been terminated." By taking these patient characteristics into account in the design of treatment, RTCs may be more successful in reducing elopement and otherwise improving the probability of clients' success.
Biology and health sciences
Health facilities
Health
11943108
https://en.wikipedia.org/wiki/Pinnidae
Pinnidae
The Pinnidae are a taxonomic family of large saltwater clams sometimes known as pen shells. They are marine bivalve molluscs in the order Pteriida. Shell description The shells of bivalves in this family are fragile and have a long and triangular shape; in life, the pointed end is anchored in sediment using a byssus. The shells have a thin but highly iridescent inner layer of nacre in the part of the shell near the umbos (the pointed end). The family Pinnidae includes the fan shell, Atrina fragilis, and Pinna nobilis, the source of sea silk. Some species are also fished for their food value. Human use As Joseph Rosewater commented in 1961: “The Pinnidae have considerable economic importance in many parts of the world. They produce pearls of moderate value. In the Mediterranean area, material made from the holdfast or byssus of Pinna nobilis Linné has been utilized in the manufacture of clothing for many centuries: gloves, shawls, stockings and cloaks. Apparel made from this material has an attractive golden hue and these items were greatly valued by the ancients. Today, Pinnidae are eaten in Japan, Polynesia, in several other Indo-Pacific island groups, and on the west coast of Mexico. In Polynesia, the valves of Atrina vexillum are carved to form decorative articles, and entire valves of larger specimens are sometimes used as plates.” Genera Genera within the family Pinnidae: Atrina Gray, 1842 (40 species); Pinna Linnaeus, 1758 (27 species); Streptopinna von Martens, 1850 (monotypic)
Biology and health sciences
Bivalvia
Animals
11943863
https://en.wikipedia.org/wiki/Space%20industry
Space industry
Space industry refers to economic activities related to manufacturing components that go into outer space (Earth's orbit or beyond), delivering them to those regions, and related services. Owing to the prominence of satellite-related activities, some sources use the term satellite industry interchangeably with the term space industry. The term space business has also been used. A narrow definition of the space industry typically encompasses only hardware providers (primarily those that manufacture launch vehicles and satellites). This definition, however, excludes certain activities, such as space tourism. More broadly, therefore, the space industry can be described as the activities of the companies and organizations involved in the space economy, providing goods and services related to space. The space economy has been defined as "all public and private actors involved in developing and providing space-enabled products and services. It comprises a long value-added chain, starting with research and development actors and manufacturers of space hardware and ending with the providers of space-enabled products and services to final users." Segments and revenues The three major sectors of the space industry are: satellite manufacturing, ground equipment manufacturing, and the launch industry. The satellite manufacturing sector is composed of satellite developers and integrators, and subsystem manufacturers. The ground equipment sector is composed of companies that manufacture systems such as mobile terminals, gateways, control stations, VSATs, direct broadcast satellite dishes, and other specialized equipment. The launch sector is composed of launch services, vehicle manufacturing and subsystem manufacturing. Every euro spent in the space industry returns around six euros to the economy, according to the European Space Agency. This makes it a critical sector for economic development, competitiveness, and high-tech jobs. Worldwide satellite industry revenues remained at the level of US$35–36 billion in the period 2002 to 2005. The majority of that revenue was generated by the ground equipment sector, and the least by the launch sector. Space-related services are estimated at US$100 billion. The industry and related sectors employ about 120,000 people in the OECD countries, while the space industry of Russia employs around 250,000 people. The 937 satellites in Earth's orbit in 2005 were estimated, as capital stock, to be worth around US$170–230 billion. In 2005, OECD countries budgeted around US$45 billion for space-related activities; income from space-derived products and services has been estimated at US$110–120 billion in 2006 (worldwide). History and trends The space industry began to develop after World War II, as rockets and then satellites entered into military arsenals and later found civilian applications. It retains significant ties to government. In particular, the launch industry features significant government involvement, with some launch platforms (such as the Space Shuttle) having been operated by governments. In recent years, however, private spaceflight has become realistic, and even major government agencies, such as NASA, have begun relying on privately operated launch services. Some future developments of the space industry that are increasingly being considered include new services such as space tourism. 
From 2004 to 2013, total orbital launches by country/region were: Russia: 270, US: 181, China: 108, Europe: 59, Japan: 24, India: 19 and Brazil: 1. Relevant trends in the 2008–2009 period for the space industry have been described as: the appearance of new satellite operators; a growing demand for Fixed Service Satellites and a developing market for Mobile Satellite Services; a steady amount of commercial satellite orders; steady performance of the launch sector; resilience to the financial crisis; and maturing markets for services like Ka-band and remote sensing. The 2019 Space Report estimates that in 2018 total global space activity was $414.75 billion. Of that, the report estimates that 21%, or $87.09 billion, was from U.S. government space budgets. A report discussing global space spending in 2021 estimated global spending at approximately $92 billion. The Space Report for Q4 2023 identified 2023 as the busiest year on record for space activities, with 223 launch attempts and 212 successful launches. More than 2,800 satellites were deployed into orbit, a 23% increase from 2022, and commercial launch activity saw a 50% increase compared to 2022.
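The 2023 figures just cited combine into a couple of derived quantities. A minimal Python sketch (values copied from this section; the variable names are ours and the arithmetic is purely illustrative):

launch_attempts = 223      # 2023 launch attempts
successes = 212            # 2023 successful launches
success_rate = successes / launch_attempts   # ≈ 0.951

sats_2023 = 2800           # "more than 2,800" satellites deployed in 2023
growth = 0.23              # stated 23% increase over 2022
sats_2022 = sats_2023 / (1 + growth)         # implied 2022 deployments

print(f"2023 launch success rate: {success_rate:.1%}")        # ≈ 95.1%
print(f"implied 2022 deployments: about {sats_2022:.0f}")     # ≈ 2,276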
Technology
Basics_6
null
2222635
https://en.wikipedia.org/wiki/Atmospheric%20electricity
Atmospheric electricity
Atmospheric electricity describes the electrical charges in the Earth's atmosphere (or that of another planet). The movement of charge between the Earth's surface, the atmosphere, and the ionosphere is known as the global atmospheric electrical circuit. Atmospheric electricity is an interdisciplinary topic with a long history, involving concepts from electrostatics, atmospheric physics, meteorology and Earth science. Thunderstorms act as a giant battery in the atmosphere, charging up the electrosphere to about 400,000 volts with respect to the surface. This sets up an electric field throughout the atmosphere, which decreases with increasing altitude. Atmospheric ions created by cosmic rays and natural radioactivity move in the electric field, so a very small current flows through the atmosphere, even away from thunderstorms. Near the surface of the Earth, the magnitude of the field is on average around 100 V/m, oriented such that it drives positive charges down. Atmospheric electricity involves both thunderstorms, which create lightning bolts to rapidly discharge huge amounts of atmospheric charge stored in storm clouds, and the continual electrification of the air due to ionization from cosmic rays and natural radioactivity, which ensure that the atmosphere is never quite neutral. History Sparks drawn from electrical machines and from Leyden jars suggested to early experimenters Hauksbee, Newton, Wall, Nollet, and Gray that lightning was caused by electric discharges. In 1708, Dr. William Wall was one of the first to observe that spark discharges resembled miniature lightning, after observing the sparks from a charged piece of amber. Benjamin Franklin's experiments showed that electrical phenomena of the atmosphere were not fundamentally different from those produced in the laboratory, by listing many similarities between electricity and lightning. By 1749, Franklin observed lightning to possess almost all the properties observable in electrical machines. In July 1750, Franklin hypothesized that electricity could be taken from clouds via a tall metal aerial with a sharp point. Before Franklin could carry out his experiment, in 1752 Thomas-François Dalibard erected an iron rod at Marly-la-Ville, near Paris, drawing sparks from a passing cloud. With ground-insulated aerials, an experimenter could bring a grounded lead with an insulated wax handle close to the aerial, and observe a spark discharge from the aerial to the grounding wire. In May 1752, Dalibard affirmed that Franklin's theory was correct. Around June 1752, Franklin reportedly performed his famous kite experiment. The kite experiment was repeated by Romas, who drew long sparks from a metallic string, and by Cavallo, who made many important observations on atmospheric electricity. Lemonnier (1752) also reproduced Franklin's experiment with an aerial, but substituted the ground wire with some dust particles (testing attraction). He went on to document the fair weather condition, the clear-day electrification of the atmosphere, and its diurnal variation. Beccaria (1775) confirmed Lemonnier's diurnal variation data and determined that the atmosphere's charge polarity was positive in fair weather. Saussure (1779) recorded data relating to a conductor's induced charge in the atmosphere. Saussure's instrument (which contained two small spheres suspended in parallel with two thin wires) was a precursor to the electrometer.
Saussure found that the atmospheric electrification under clear weather conditions had an annual variation, and that it also varied with height. In 1785, Coulomb discovered the electrical conductivity of air. His discovery was contrary to the prevailing thought at the time, that the atmospheric gases were insulators (which they are to some extent, or at least not very good conductors when not ionized). Erman (1804) theorized that the Earth was negatively charged, and Peltier (1842) tested and confirmed Erman's idea. Several researchers contributed to the growing body of knowledge about atmospheric electrical phenomena. Francis Ronalds began observing the potential gradient and air-earth currents around 1810, including making continuous automated recordings. He resumed his research in the 1840s as the inaugural Honorary Director of the Kew Observatory, where the first extended and comprehensive dataset of electrical and associated meteorological parameters was created. He also supplied his equipment to other facilities around the world with the goal of delineating atmospheric electricity on a global scale. Kelvin's new water dropper collector and divided-ring electrometer were introduced at Kew Observatory in the 1860s, and atmospheric electricity remained a speciality of the observatory until its closure. For high-altitude measurements, kites were once used, and weather balloons or aerostats are still used, to lift experimental equipment into the air. Early experimenters even went aloft themselves in hot-air balloons. Hoffert (1888) identified individual lightning downward strokes using early cameras. Elster and Geitel, who also worked on thermionic emission, proposed a theory to explain thunderstorms' electrical structure (1885) and, later, discovered atmospheric radioactivity (1899) from the existence of positive and negative ions in the atmosphere. Pockels (1897) estimated lightning current intensity by analyzing lightning flashes in basalt (c. 1900) and studying the left-over magnetic fields caused by lightning. Discoveries about the electrification of the atmosphere via sensitive electrical instruments and ideas on how the Earth's negative charge is maintained were developed mainly in the 20th century, with CTR Wilson playing an important part. Current research on atmospheric electricity focuses mainly on lightning, particularly high-energy particles and transient luminous events, and the role of non-thunderstorm electrical processes in weather and climate. Description Atmospheric electricity is always present, and during fine weather away from thunderstorms, the air above the surface of Earth is positively charged, while the Earth's surface charge is negative. This can be understood in terms of a difference of potential between a point of the Earth's surface, and a point somewhere in the air above it. Because the atmospheric electric field is negatively directed in fair weather, the convention is to refer to the potential gradient, which has the opposite sign and is about 100 V/m at the surface, away from thunderstorms. There is a weak conduction current of atmospheric ions moving in the atmospheric electric field, about 2 picoamperes per square meter, and the air is weakly conductive due to the presence of these atmospheric ions. Variations Global daily cycles in the atmospheric electric field, with a minimum around 03 UT and peaking roughly 16 hours later, were researched by the Carnegie Institution of Washington in the 20th century. 
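Before describing that daily cycle further, the fair-weather figures given above lend themselves to a quick consistency check. The Python sketch below uses only the round numbers quoted in this article (100 V/m, 2 pA/m²), so its outputs are order-of-magnitude estimates, not measurements: it derives the implied air conductivity from Ohm's law, J = σE, and the total fair-weather current integrated over the Earth's surface.

import math

E = 100.0           # fair-weather potential gradient near the surface, V/m
J = 2e-12           # fair-weather conduction current density, A/m^2
R_earth = 6.371e6   # mean radius of the Earth, m

sigma = J / E                       # implied near-surface conductivity, S/m
area = 4 * math.pi * R_earth**2     # surface area of the Earth, m^2
I_total = J * area                  # total fair-weather current, A

print(f"implied conductivity: {sigma:.0e} S/m")          # ~2e-14 S/m
print(f"global fair-weather current: {I_total:,.0f} A")  # ~1,000 A

The resulting global current, on the order of a kiloampere, is the leakage that thunderstorm generators must continually resupply to keep the electrosphere charged.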
The Carnegie curve, as this global daily variation is known, has been described as "the fundamental electrical heartbeat of the planet". Even away from thunderstorms, atmospheric electricity can be highly variable, but, generally, the electric field is enhanced in fogs and dust whereas the atmospheric electrical conductivity is diminished. Links with biology The atmospheric potential gradient leads to an ion flow from the positively charged atmosphere to the negatively charged earth surface. Over a flat field on a day with clear skies, the atmospheric potential gradient is approximately 120 V/m. Objects protruding into these fields, e.g. flowers and trees, can increase the electric field strength to several kilovolts per meter. These near-surface electrostatic forces are detected by organisms such as the bumblebee, to navigate to flowers, and the spider, to initiate dispersal by ballooning. The atmospheric potential gradient is also thought to affect sub-surface electro-chemistry and microbial processes. On the other hand, swarming insects and birds can be a source of biogenic charge in the atmosphere, likely contributing to electrical variability in the atmosphere. Near space The electrosphere layer (from tens of kilometers above the surface of the Earth to the ionosphere) has a high electrical conductivity and is essentially at a constant electric potential. The ionosphere is the inner edge of the magnetosphere and is the part of the atmosphere that is ionized by solar radiation. (Photoionization is a physical process in which a photon is incident on an atom, ion or molecule, resulting in the ejection of one or more electrons.) Cosmic radiation The Earth, and almost all living things on it, are constantly bombarded by radiation from outer space. This radiation primarily consists of positively charged ions, from protons to iron and larger nuclei, derived from sources outside the Solar System. This radiation interacts with atoms in the atmosphere to create an air shower of secondary ionising radiation, including X-rays, muons, protons, alpha particles, pions, and electrons. Ionization from this secondary radiation ensures that the atmosphere is weakly conductive, and the slight current flow from these ions over the Earth's surface balances the current flow from thunderstorms. Ions have characteristic parameters such as mobility, lifetime, and generation rate that vary with altitude. Thunderstorms and lightning The potential difference between the ionosphere and the Earth is maintained by thunderstorms, with lightning strikes delivering negative charges from the atmosphere to the ground. Collisions between ice and soft hail (graupel) inside cumulonimbus clouds cause separation of positive and negative charges within the cloud, essential for the generation of lightning. How lightning initially forms is still a matter of debate: scientists have studied root causes ranging from atmospheric perturbations (wind, humidity, and atmospheric pressure) to the impact of solar wind and energetic particles. An average bolt of lightning carries a negative electric current of 40 kiloamperes (kA) (although some bolts can be up to 120 kA), and transfers a charge of five coulombs and energy of 500 MJ, or enough energy to power a 100-watt lightbulb for just under two months. The voltage depends on the length of the bolt, with the dielectric breakdown of air being three million volts per meter, and lightning bolts often being several hundred meters long. These round figures admit a quick sanity check, sketched below.
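A few lines of Python suffice (a sketch using only numbers stated in this section; it illustrates the arithmetic, not a physical model of lightning):

energy = 500e6            # energy per average bolt, J
bulb_power = 100.0        # lightbulb power, W
days = energy / bulb_power / 86400   # ≈ 58 days, "just under two months"

charge = 5.0              # charge transferred per bolt, C
potential = energy / charge          # implied mean potential, ≈ 1e8 V

breakdown = 3e6           # dielectric breakdown of air, V/m
path = potential / breakdown         # ≈ 33 m at the full breakdown field

print(f"{days:.0f} days of 100 W lighting per bolt")
print(f"implied potential: {potential:.1e} V")
print(f"equivalent breakdown path: {path:.0f} m")

Note that the equivalent breakdown path comes out much shorter than a typical bolt; as explained next, real lightning channels propagate in ambient fields far below the nominal breakdown strength.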
Lightning leader development, however, is not a simple matter of dielectric breakdown, and the ambient electric fields required for lightning leader propagation can be a few orders of magnitude less than the dielectric breakdown strength. Further, the potential gradient inside a well-developed return-stroke channel is on the order of hundreds of volts per meter or less, due to intense channel ionization, resulting in a true power output on the order of megawatts per meter for a vigorous return-stroke current of 100 kA. If the quantity of water that is condensed in and subsequently precipitated from a cloud is known, then the total energy of a thunderstorm can be calculated. In an average thunderstorm, the energy released amounts to about 10,000,000 kilowatt-hours (3.6 × 10¹³ joules), which is equivalent to a 20-kiloton nuclear warhead. A large, severe thunderstorm might be 10 to 100 times more energetic. Corona discharges St. Elmo's Fire is an electrical phenomenon in which luminous plasma is created by a coronal discharge originating from a grounded object. Ball lightning is often erroneously identified as St. Elmo's Fire, whereas they are separate and distinct phenomena. Although referred to as "fire", St. Elmo's Fire is, in fact, plasma, and is observed, usually during a thunderstorm, at the tops of trees, spires or other tall objects, or on the heads of animals, as a brush or star of light. Corona is caused by the electric field around the object in question ionizing the air molecules, producing a faint glow easily visible in low-light conditions. Approximately 1,000–30,000 volts per centimeter is required to induce St. Elmo's Fire; however, this is dependent on the geometry of the object in question. Sharp points tend to require lower voltage levels to produce the same result because electric fields are more concentrated in areas of high curvature, thus discharges are more intense at the end of pointed objects. St. Elmo's Fire and normal sparks both can appear when high electrical voltage affects a gas. St. Elmo's fire is seen during thunderstorms when the ground below the storm is electrically charged, and there is high voltage in the air between the cloud and the ground. The voltage tears apart the air molecules and the gas begins to glow. The nitrogen and oxygen in the Earth's atmosphere cause St. Elmo's Fire to fluoresce with blue or violet light; this is similar to the mechanism that causes neon signs to glow. Earth-Ionosphere cavity The Schumann resonances are a set of spectrum peaks in the extremely low frequency (ELF) portion of the Earth's electromagnetic field spectrum. Schumann resonance is due to the space between the surface of the Earth and the conductive ionosphere acting as a waveguide. The limited dimensions of the Earth cause this waveguide to act as a resonant cavity for electromagnetic waves. The cavity is naturally excited by energy from lightning strikes. Electrical system grounding Atmospheric charges can cause undesirable, dangerous, and potentially lethal charge potential buildup in suspended electric wire power distribution systems. Bare wires suspended in the air spanning many kilometers and isolated from the ground can collect very large stored charges at high voltage, even when there is no thunderstorm or lightning occurring. This charge will seek to discharge itself through the path of least insulation, which can occur when a person reaches out to activate a power switch or to use an electric device.
To dissipate atmospheric charge buildup, one side of the electrical distribution system is connected to the earth at many points throughout the distribution system, as often as on every support pole. The one earth-connected wire is commonly referred to as the "protective earth", and provides a path for the charge potential to dissipate without causing damage, while providing redundancy in case any one of the ground paths is poor due to corrosion or poor ground conductivity. The additional electric grounding wire that carries no power serves a secondary role, providing a high-current short-circuit path to rapidly blow fuses and render a damaged device safe, rather than have an ungrounded device with damaged insulation become "electrically live" via the grid power supply, and hazardous to touch. Each transformer in an alternating current distribution grid segments the grounding system into a new separate circuit loop. These separate grids must also be grounded on one side to prevent charge buildup within them relative to the rest of the system, which could cause damage from charge potentials discharging across the transformer coils to the other grounded side of the distribution network.
Physical sciences
Storms
Earth science
2223114
https://en.wikipedia.org/wiki/Baryon%20asymmetry
Baryon asymmetry
In physical cosmology, the baryon asymmetry problem, also known as the matter asymmetry problem or the matter–antimatter asymmetry problem, is the observed imbalance in baryonic matter (the type of matter experienced in everyday life) and antibaryonic matter in the observable universe. Neither the standard model of particle physics nor the theory of general relativity provides a known explanation for why this should be so, and it is a natural assumption that the universe is neutral with all conserved charges. The Big Bang should have produced equal amounts of matter and antimatter. Since this does not seem to have been the case, it is likely some physical laws must have acted differently, or did not exist, for matter and/or antimatter. Several competing hypotheses exist to explain the imbalance of matter and antimatter that resulted in baryogenesis. However, there is as yet no consensus theory to explain the phenomenon, which has been described as "one of the great mysteries in physics". Sakharov conditions In 1967, Andrei Sakharov proposed a set of three necessary conditions that a baryon-generating interaction must satisfy to produce matter and antimatter at different rates. These conditions were inspired by the then-recent discoveries of the cosmic microwave background and of CP violation in the neutral kaon system. The three necessary "Sakharov conditions" are: Baryon number violation. C-symmetry and CP-symmetry violation. Interactions out of thermal equilibrium. Baryon number violation Baryon number violation is a necessary condition to produce an excess of baryons over anti-baryons. But C-symmetry violation is also needed so that the interactions which produce more baryons than anti-baryons will not be counterbalanced by interactions which produce more anti-baryons than baryons. CP-symmetry violation is similarly required because otherwise equal numbers of left-handed baryons and right-handed anti-baryons would be produced, as well as equal numbers of left-handed anti-baryons and right-handed baryons. Finally, the interactions must be out of thermal equilibrium, since otherwise CPT symmetry would assure compensation between processes increasing and decreasing the baryon number. Currently, there is no experimental evidence of particle interactions where the conservation of baryon number is broken perturbatively: this would appear to suggest that all observed particle reactions have equal baryon number before and after. Mathematically, the commutator of the baryon number quantum operator with the (perturbative) Standard Model Hamiltonian is zero: [B, H] = BH − HB = 0. However, the Standard Model is known to violate the conservation of baryon number only non-perturbatively: a global U(1) anomaly. To account for baryon violation in baryogenesis, such events (including proton decay) can occur in Grand Unified Theories (GUTs) and supersymmetric (SUSY) models via hypothetical massive bosons such as the X boson. CP-symmetry violation The second condition for generating baryon asymmetry—violation of charge-parity symmetry—is that a process is able to happen at a different rate to its antimatter counterpart. In the Standard Model, CP violation appears as a complex phase in the quark mixing matrix of the weak interaction. There may also be a non-zero CP-violating phase in the neutrino mixing matrix, but this is currently unmeasured. The first of a series of basic physics principles found to be violated was parity, through Chien-Shiung Wu's experiment.
This led to CP violation being verified in the 1964 Fitch–Cronin experiment with neutral kaons, which resulted in the 1980 Nobel Prize in Physics (direct CP violation, that is violation of CP symmetry in a decay process, was discovered later, in 1999). Due to CPT symmetry, violation of CP symmetry demands violation of time inversion symmetry, or T-symmetry. Despite the allowance for CP violation in the Standard Model, it is insufficient to account for the observed baryon asymmetry of the universe (BAU) given the limits on baryon number violation, meaning that beyond-Standard Model sources are needed. A possible new source of CP violation was found at the Large Hadron Collider (LHC) by the LHCb collaboration during the first three years of LHC operations (beginning March 2010). The experiment analyzed the decays of two particles, the bottom Lambda (Λb0) and its antiparticle, and compared the distributions of decay products. The data showed an asymmetry of up to 20% in CP-violation-sensitive quantities, implying a breaking of CP-symmetry. This analysis will need to be confirmed by more data from subsequent runs of the LHC. One method to search for additional CP violation is the search for electric dipole moments of fundamental or composite particles. The existence of electric dipole moments in equilibrium states requires violation of T-symmetry. Thus, finding a non-zero electric dipole moment would imply the existence of T-violating interactions in the vacuum corrections to the measured particle. So far, all measurements are consistent with zero, putting strong bounds on the properties of any yet-unknown CP-violating interactions. Interactions out of thermal equilibrium In the out-of-equilibrium decay scenario, the last condition states that the rate of a reaction which generates baryon asymmetry must be less than the rate of expansion of the universe. In this situation the particles and their corresponding antiparticles do not achieve thermal equilibrium due to rapid expansion decreasing the occurrence of pair-annihilation. Other explanations Regions of the universe where antimatter dominates Another possible explanation of the apparent baryon asymmetry is that matter and antimatter are essentially separated into different, widely distant regions of the universe. The formation of antimatter galaxies was originally thought to explain the baryon asymmetry, as from a distance, antimatter atoms are indistinguishable from matter atoms; both produce light (photons) in the same way. Along the boundary between matter and antimatter regions, however, annihilation (and the subsequent production of gamma radiation) would be detectable, depending on its distance and the density of matter and antimatter. Such boundaries, if they exist, would likely lie in deep intergalactic space. The density of matter in intergalactic space is reasonably well established at about one atom per cubic meter. Assuming this is a typical density near a boundary, the gamma ray luminosity of the boundary interaction zone can be calculated. No such zones have been detected, but 30 years of research have placed bounds on how far away they might be. On the basis of such analyses, it is now deemed unlikely that any region within the observable universe is dominated by antimatter. Mirror anti-universe The state of the universe, as it is, does not violate the CPT symmetry, because the Big Bang could be considered as a double-sided event, both classically and quantum mechanically, consisting of a universe-antiuniverse pair.
This means that this universe is the charge (C), parity (P) and time (T) image of the anti-universe. This pair emerged from the Big Bang epoch directly into a hot, radiation-dominated era. The antiuniverse would flow back in time from the Big Bang, becoming bigger as it does so, and would also be dominated by antimatter. Its spatial properties are inverted compared to those in our universe, a situation analogous to creating electron–positron pairs in a vacuum. This model, devised by physicists from the Perimeter Institute for Theoretical Physics in Canada, proposes that temperature fluctuations in the cosmic microwave background (CMB) are due to the quantum-mechanical nature of space-time near the Big Bang singularity. This means that a point in the future of our universe and a point in the distant past of the anti-universe would provide fixed classical points, while all possible quantum-based permutations would exist in between. Quantum uncertainty causes the universe and antiuniverse not to be exact mirror images of each other. This model has not shown whether it can reproduce certain observations regarding the inflation scenario, such as explaining the uniformity of the cosmos on large scales. However, it provides a natural and straightforward explanation for dark matter. Such a universe-antiuniverse pair would produce large numbers of superheavy neutrinos, also known as sterile neutrinos. These neutrinos might also be the source of recently observed bursts of high-energy cosmic rays. Baryon asymmetry parameter The challenges to the physics theories are then to explain how to produce the predominance of matter over antimatter, and also the magnitude of this asymmetry. An important quantifier is the asymmetry parameter η = (n_B − n_B̄) / n_γ. This quantity relates the overall number density difference between baryons and antibaryons (n_B and n_B̄, respectively) and the number density of cosmic background radiation photons n_γ. According to the Big Bang model, matter decoupled from the cosmic background radiation (CBR) at a temperature of roughly 3,000 kelvin, corresponding to an average kinetic energy of about 0.3 eV. After the decoupling, the total number of CBR photons remains constant. Therefore, due to space-time expansion, the photon density decreases. The photon density at equilibrium temperature T, per unit volume, is given by n_γ = (2ζ(3)/π²) (k_B T / ħc)³, with k_B as the Boltzmann constant, ħ as the Planck constant divided by 2π, c as the speed of light in vacuum, and ζ(3) as Apéry's constant. At the current CBR photon temperature of 2.725 K, this corresponds to a photon density n_γ of around 411 CBR photons per cubic centimeter. Therefore, the asymmetry parameter η, as defined above, is not the "good" parameter. Instead, the preferred asymmetry parameter uses the entropy density s, η_s = (n_B − n_B̄) / s, because the entropy density of the universe remained reasonably constant throughout most of its evolution. The entropy density is s = S/V = (p + ρ)/T, with p and ρ as the pressure and density from the energy density tensor T_μν; for "massless" particles (inasmuch as mc² ≪ k_B T holds) this gives, in natural units, s = (2π²/45) g*(T) T³, with g* as the effective number of degrees of freedom, g*(T) = Σ_bosons g_i (T_i/T)³ + (7/8) Σ_fermions g_j (T_j/T)³, for bosons and fermions with g_i and g_j degrees of freedom at temperatures T_i and T_j respectively. Presently, s = 7.04 n_γ.
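The photon-density formula above is easy to evaluate numerically. A minimal Python sketch (CODATA values for the constants; the temperature is the 2.725 K quoted in this section):

import math

k_B = 1.380649e-23       # Boltzmann constant, J/K
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light in vacuum, m/s
zeta3 = 1.2020569        # Apery's constant, zeta(3)

def photon_density(T):
    # n_gamma = (2*zeta(3)/pi^2) * (k_B*T / (hbar*c))^3, photons per cubic metre
    return (2 * zeta3 / math.pi**2) * (k_B * T / (hbar * c))**3

n_gamma = photon_density(2.725)               # current CBR temperature
print(f"{n_gamma * 1e-6:.0f} photons/cm^3")   # ≈ 411, as quoted above

Together with the measured baryon density of roughly 0.25 baryons per cubic metre, this photon density yields the often-quoted asymmetry parameter η of about 6 × 10⁻¹⁰.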
Physical sciences
Particle physics: General
Physics
2223535
https://en.wikipedia.org/wiki/Mass%20flow%20rate
Mass flow rate
In physics and engineering, mass flow rate is the rate at which mass of a substance changes over time. Its unit is kilogram per second (kg/s) in SI units, and slug per second or pound per second in US customary units. The common symbol is ṁ (pronounced "m-dot"), although sometimes μ (Greek lowercase mu) is used. Sometimes, mass flow rate as defined here is termed "mass flux" or "mass current". Confusingly, "mass flow" is also a term for mass flux, the rate of mass flow per unit of area. Formulation Mass flow rate is defined by the limit ṁ = lim(Δt → 0) Δm/Δt = dm/dt, i.e., the flow of mass m through a surface per time t. The overdot on m is Newton's notation for a time derivative. Since mass is a scalar quantity, the mass flow rate (the time derivative of mass) is also a scalar quantity. The change in mass is the amount that flows after crossing the boundary for some time duration, not the initial amount of mass at the boundary minus the final amount at the boundary, since the change in mass flowing through the area would be zero for steady flow. Alternative equations Mass flow rate can also be calculated by ṁ = ρ · V̇ = ρ · v · A = j_m · A, where ρ is the mass density of the fluid, V̇ is the volume flow rate, v is the flow velocity of the mass elements, A is the cross-sectional vector area/surface, and j_m is the mass flux. The above equation is only true for a flat, plane area. In general, including cases where the area is curved, the equation becomes a surface integral: ṁ = ∬ j_m · dA = ∬ ρ v · dA, taken over the surface in question. The area required to calculate the mass flow rate is real or imaginary, flat or curved, either as a cross-sectional area or a surface, e.g. for substances passing through a filter or a membrane, the real surface is the (generally curved) surface area of the filter, macroscopically - ignoring the area spanned by the holes in the filter/membrane. The spaces would be cross-sectional areas. For liquids passing through a pipe, the area is the cross-section of the pipe, at the section considered. The vector area is a combination of the magnitude of the area through which the mass passes, A, and a unit vector normal to the area, n̂. The relation is A = A n̂. The reason for the dot product is as follows. The only mass flowing through the cross-section is the amount normal to the area, i.e. parallel to the unit normal. This amount is ṁ = ρ v A cos θ, where θ is the angle between the unit normal and the velocity of the mass elements. The amount passing through the cross-section is reduced by the factor cos θ; as θ increases, less mass passes through. All mass which passes in tangential directions to the area, that is perpendicular to the unit normal, doesn't actually pass through the area, so the mass passing through the area is zero. This occurs when θ = π/2, since cos(π/2) = 0. These results are equivalent to the equation containing the dot product. Sometimes these equations are used to define the mass flow rate. Considering flow through porous media, a special quantity, superficial mass flow rate, can be introduced. It is related with the superficial velocity, v_s, by the following relationship: ṁ_s = v_s · ρ = ṁ/A. The quantity can be used in particle Reynolds number or mass transfer coefficient calculation for fixed and fluidized bed systems. Usage In the elementary form of the continuity equation for mass in hydrodynamics: ρ₁ v₁ A₁ = ρ₂ v₂ A₂. In elementary classical mechanics, mass flow rate is encountered when dealing with objects of variable mass, such as a rocket ejecting spent fuel. Often, descriptions of such objects erroneously invoke Newton's second law by treating both the mass and the velocity as time-dependent and then applying the derivative product rule. A correct description of such an object requires the application of Newton's second law to the entire, constant-mass system consisting of both the object and its ejected mass.
Mass flow rate can be used to calculate the energy flow rate of a fluid: Ė = ṁ e, where e is the unit mass energy of the system (energy per unit mass). Energy flow rate has SI units of kilojoule per second or kilowatt.
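A minimal Python sketch of these relations (the pipe dimensions and fluid properties are illustrative assumptions, not values from the article):

import math

rho = 998.0      # density of water, kg/m^3
v = 2.0          # flow speed, m/s
d = 0.05         # pipe inner diameter, m
theta = 0.0      # angle between the flow velocity and the surface normal, rad

A = math.pi * (d / 2)**2              # cross-sectional area, m^2
mdot = rho * v * A * math.cos(theta)  # mass flow rate: rho * v * A * cos(theta)

e = 41860.0      # unit mass energy, J/kg (e.g. water warmed by 10 K at c_p ≈ 4186 J/(kg·K))
Edot = mdot * e  # energy flow rate, W

print(f"mass flow rate: {mdot:.2f} kg/s")         # ≈ 3.92 kg/s
print(f"energy flow rate: {Edot / 1000:.0f} kW")  # ≈ 164 kW

Setting theta to π/2 makes mdot vanish, reproducing the tangential-flow case discussed above.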
Physical sciences
Fluid mechanics
Physics
2223903
https://en.wikipedia.org/wiki/Dermatophytosis
Dermatophytosis
Dermatophytosis, also known as tinea and ringworm, is a fungal infection of the skin (a dermatomycosis) that may affect skin, hair, and nails. Typically it results in a red, itchy, scaly, circular rash. Hair loss may occur in the area affected. Symptoms begin four to fourteen days after exposure. The types of dermatophytosis are typically named for the area of the body that they affect. Multiple areas can be affected at a given time. About 40 types of fungus can cause dermatophytosis. They are typically of the Trichophyton, Microsporum, or Epidermophyton type. Risk factors include using public showers, contact sports such as wrestling, excessive sweating, contact with animals, obesity, and poor immune function. Ringworm can spread from other animals or between people. Diagnosis is often based on the appearance and symptoms. It may be confirmed by either culturing or looking at a skin scraping under a microscope. Prevention is by keeping the skin dry, not walking barefoot in public, and not sharing personal items. Treatment is typically with antifungal creams such as clotrimazole or miconazole. If the scalp is involved, antifungals by mouth such as fluconazole may be needed. Dermatophytosis has spread globally, and up to 20% of the world's population may be infected by it at any given time. Infections of the groin are more common in males, while infections of the scalp and body occur equally in both sexes. Infections of the scalp are most common in children while infections of the groin are most common in the elderly. Descriptions of ringworm date back to ancient history. Types A number of different species of fungus are involved in dermatophytosis. Dermatophytes of the genera Trichophyton and Microsporum are the most common causative agents. These fungi attack various parts of the body and lead to the conditions listed below. The Latin names are for the conditions (disease patterns), not the agents that cause them. The disease patterns below identify the type of fungus that causes them only in the cases listed: Dermatophytosis Tinea pedis (athlete's foot): fungal infection of the feet Tinea unguium: fungal infection of the fingernails and toenails, and the nail bed Tinea corporis: fungal infection of the arms, legs, and trunk Tinea cruris (jock itch): fungal infection of the groin area Tinea manuum: fungal infection of the hands and palm area Tinea capitis: fungal infection of the scalp and hair Tinea faciei (face fungus): fungal infection of the face Tinea barbae: fungal infection of facial hair Other superficial mycoses (not classic ringworm, since not caused by dermatophytes) Tinea versicolor: caused by Malassezia furfur Tinea nigra: caused by Hortaea werneckii Signs and symptoms Infections on the body may give rise to typical enlarging raised red rings of ringworm. Infection on the skin of the feet may cause athlete's foot and, in the groin, jock itch. Involvement of the nails is termed onychomycosis. Animals including dogs and cats can also be affected by ringworm, and the disease can be transmitted between animals and humans, making it a zoonotic disease.
Specific signs can be: red, scaly, itchy or raised patches; patches that may be redder on outside edges or resemble a ring; patches that begin to ooze or develop a blister; bald patches, which may develop when the scalp is affected. Causes Fungi thrive in moist, warm areas, such as locker rooms, tanning beds, swimming pools, and skin folds; accordingly, those that cause dermatophytosis may be spread by using exercise machines that have not been disinfected after use, or by sharing towels, clothing, footwear, or hairbrushes. Diagnosis Dermatophyte infections can be readily diagnosed based on the history, physical examination, and potassium hydroxide (KOH) microscopy. Prevention Advice often given includes: Avoid sharing clothing, sports equipment, towels, or sheets. Wash clothes in hot water with fungicidal soap after suspected exposure to ringworm. Avoid walking barefoot; instead wear appropriate protective shoes in locker rooms and sandals at the beach. Avoid touching pets with bald spots, as they are often carriers of the fungus. Vaccination No approved human vaccine exists against dermatophytosis. For horses, dogs, and cats, an approved inactivated vaccine called Insol Dermatophyton (Boehringer Ingelheim) is available, which provides time-limited protection against several Trichophyton and Microsporum fungal strains. With cattle, systemic vaccination has achieved effective control of ringworm. Since 1979, a Russian live vaccine (LFT 130) and, later on, a Czechoslovakian live vaccine against bovine ringworm have been used. In Scandinavian countries vaccination programmes against ringworm are used as a preventive measure to improve the hide quality. In Russia, fur-bearing animals (silver fox, foxes, polar foxes) and rabbits have also been treated with vaccines. Treatment Antifungal treatments include topical agents such as miconazole, terbinafine, clotrimazole, ketoconazole, or tolnaftate applied twice daily until symptoms resolve — usually within one or two weeks. Topical treatments should then be continued for a further 7 days after resolution of visible symptoms to prevent recurrence. The total duration of treatment is therefore generally two weeks, but may be as long as three. In more severe cases or scalp ringworm, systemic treatment with oral medications (such as itraconazole, terbinafine, and ketoconazole) may be given. To prevent spreading the infection, lesions should not be touched, and good hygiene maintained with washing of hands and the body. Misdiagnosis and treatment of ringworm with a topical steroid, a standard treatment of the superficially similar pityriasis rosea, can result in tinea incognito, a condition where ringworm fungus grows without typical features, such as a distinctive raised border. History Dermatophytosis has been prevalent since before 1906, at which time ringworm was treated with compounds of mercury or sometimes sulfur or iodine. Hairy areas of skin were considered too difficult to treat, so the scalp was treated with X-rays and followed up with antifungal medication. Another treatment from around the same time was application of Araroba powder. Terminology The most common term for the infection, "ringworm", is a misnomer, since the condition is caused by fungi of several different species and not by parasitic worms. Other animals Ringworm caused by Trichophyton verrucosum is a frequent clinical condition in cattle. Young animals are more frequently affected. The lesions are located on the head, neck, tail, and perineum. The typical lesion is a round, whitish crust.
Multiple lesions may coalesce into a "map-like" appearance. Clinical dermatophytosis is also diagnosed in sheep, dogs, cats, and horses. Causative agents, besides Trichophyton verrucosum, are T. mentagrophytes, T. equinum, Microsporum gypseum, M. canis, and M. nanum. Dermatophytosis may also be present in the holotype of the Cretaceous eutriconodont mammal Spinolestes, suggesting a Mesozoic origin for this disease. Diagnosis Ringworm in pets may often be asymptomatic, resulting in a carrier condition which infects other pets. In some cases, the disease only appears when the animal develops an immunodeficiency condition. Circular bare patches on the skin suggest the diagnosis, but no lesion is truly specific to the fungus. Similar patches may result from allergies, sarcoptic mange, and other conditions. Three species of fungi cause 95% of dermatophytosis in pets: these are Microsporum canis, Microsporum gypseum, and Trichophyton mentagrophytes. Veterinarians have several tests to identify ringworm infection and identify the fungal species that cause it: Wood's test: This is an ultraviolet light with a magnifying lens. Only 50% of M. canis will show up as an apple-green fluorescence on hair shafts, under the UV light. The other fungi do not show. The fluorescent material is not the fungus itself (which does not fluoresce), but rather an excretory product of the fungus which sticks to hairs. Infected skin does not fluoresce. Microscopic test: The veterinarian takes hairs from around the infected area and places them in a staining solution to view under the microscope. Fungal spores may be viewed directly on hair shafts. This technique identifies a fungal infection in about 40%–70% of the infections, but cannot identify the species of dermatophyte. Culture test: This is the most effective, but also the most time-consuming, way to determine if ringworm is on a pet. In this test, the veterinarian collects hairs from the pet, or else collects fungal spores from the pet's hair with a toothbrush, or other instrument, and inoculates fungal media for culture. These cultures can be brushed with transparent tape and then read by the veterinarian using a microscope, or can be sent to a pathological lab. The three fungi which commonly cause pet ringworm can be identified by their characteristic spores. These are different-appearing macroconidia in the two common species of Microsporum, and typical microconidia in Trichophyton infections. Identifying the species of fungi involved in pet infections can be helpful in controlling the source of infection. M. canis, despite its name, occurs more commonly in domestic cats, and 98% of cat infections are with this organism. It can also infect dogs and humans, however. T. mentagrophytes has a major reservoir in rodents, but can also infect pet rabbits, dogs, and horses. M. gypseum is a soil organism and is often contracted from gardens and other such places. Besides humans, it may infect rodents, dogs, cats, horses, cattle, and swine. Treatment Pet animals Treatment requires both systemic oral treatment with most of the same drugs used in humans—terbinafine, fluconazole, or itraconazole—as well as a topical "dip" therapy. Because of the usually longer hair shafts in pets compared to those of humans, the area of infection and possibly all of the longer hair of the pet must be clipped to decrease the load of fungal spores clinging to the pet's hair shafts.
However, close shaving is usually not done because nicking the skin facilitates further skin infection. Twice-weekly bathing of the pet with diluted lime sulfur dip solution is effective in eradicating fungal spores. This must continue for 3 to 8 weeks. Washing of household hard surfaces with 1:10 household sodium hypochlorite bleach solution is effective in killing spores, but it is too irritating to be used directly on hair and skin. Pet hair must be rigorously removed from all household surfaces, and then the vacuum cleaner bag, and perhaps even the vacuum cleaner itself, discarded when this has been done repeatedly. Removal of all hair is important, since spores may survive 12 months or even as long as two years on hair clinging to surfaces. Cattle In bovines, an infection is difficult to cure, as systemic treatment is uneconomical. Local treatment with iodine compounds is time-consuming, as it needs scraping of crusty lesions. Moreover, it must be carefully conducted using gloves, lest the worker become infected. Epidemiology Worldwide, superficial fungal infections caused by dermatophytes are estimated to infect around 20–25% of the population, and it is thought that dermatophytes infect 10–15% of the population during their lifetime. The highest incidence of superficial mycoses results from dermatophytoses, which are most prevalent in tropical regions. Onychomycosis, a common infection caused by dermatophytes, is found with varying prevalence rates in many countries. Tinea pedis with onychomycosis, tinea corporis, and tinea capitis are the most common dermatophytoses found in humans across the world. Tinea capitis has a greater prevalence in children. The increasing prevalence of dermatophytes resulting in tinea capitis has been causing epidemics throughout Europe and America. In pets, cats are the most affected by dermatophytosis. Pets are susceptible to dermatophytoses caused by Microsporum canis, Microsporum gypseum, and Trichophyton. For dermatophytosis in animals, risk factors depend on age, species, breed, underlying conditions, stress, grooming, and injuries. Numerous studies have found tinea capitis to be the most prevalent dermatophytosis among children across the continent of Africa. Dermatophytosis has been found to be most prevalent in children ages 4 to 11, infecting more males than females. Low socioeconomic status was found to be a risk factor for tinea capitis. Throughout Africa, dermatophytoses are common in hot, humid climates and in areas of overpopulation. Chronicity is a common outcome for dermatophytosis in India. The prevalence of dermatophytosis in India is between 36.6 and 78.4% depending on the area, clinical subtype, and dermatophyte isolate. Individuals ages 21–40 years are most commonly affected. A 2002 study looking at 445 samples of dermatophytes in patients in Goiânia, Brazil found the most prevalent type to be Trichophyton rubrum (49.4%), followed by Trichophyton mentagrophytes (30.8%), and Microsporum canis (12.6%). A 2013 study looking at 5,175 samples of tinea in patients in Tehran, Iran found the most prevalent type to be Tinea pedis (43.4%), followed by Tinea unguium (21.3%), and Tinea cruris (20.7%).
Biology and health sciences
Fungal infections
Health
2224213
https://en.wikipedia.org/wiki/English%20units
English units
English units were the units of measurement used in England up to 1826 (when they were replaced by Imperial units), which evolved as a combination of the Anglo-Saxon and Roman systems of units. Various standards have applied to English units at different times, in different places, and for different applications. Use of the term "English units" can be ambiguous, as, in addition to the meaning used in this article, it is sometimes used to refer to the units of the descendant Imperial system as well as to those of the descendant system of United States customary units. The two main sets of English units were the Winchester Units, used from 1495 to 1587, as affirmed by King Henry VII, and the Exchequer Standards, in use from 1588 to 1825, as defined by Queen Elizabeth I. In England (and the British Empire), English units were replaced by Imperial units in 1824 (effective as of 1 January 1826) by a Weights and Measures Act, which retained many though not all of the unit names and redefined (standardised) many of the definitions. In the US, being independent from the British Empire decades before the 1824 reforms, English units were standardized and adopted (as "US Customary Units") in 1832. History Very little is known of the units of measurement used in the British Isles prior to Roman colonisation in the 1st century AD. During the Roman period, Roman Britain relied on Ancient Roman units of measurement. During the Anglo-Saxon period, the North German foot of 13.2 inches (335 millimetres) was the nominal basis for other units of linear measurement. The foot was divided into 4 palms or 12 thumbs. A cubit was 2 feet, an elne 4 feet. The rod was 15 Anglo-Saxon feet, the furlong 10 rods. An acre was 4 rods × 40 rods, i.e. 160 square rods or 36,000 square Anglo-Saxon feet. However, Roman units continued to be used in the construction crafts, and reckoning by the Roman mile of 5,000 feet (or 8 stades) continued, in contrast to other Germanic countries which adopted the name "mile" for a longer native length closer to the league (which was 3 Roman miles). From the time of Offa, King of Mercia (8th century), until 1526 the Saxon pound, also known as the moneyers' pound (and later known as the Tower pound) was the fundamental unit of weight (by Offa's law, one pound of silver, by weight, was subdivided into 240 silver pennies, hence (in money) 240 pence – twenty shillings – was known as one pound). Prior to the enactment of a law known as the "Composition of Yards and Perches" (the Compositio) some time between 1266 and 1303, the English system of measurement had been based on that of the Anglo-Saxons, who were descended from tribes of northern Germany. The Compositio redefined the yard, foot, inch, and barleycorn to 10/11 of their previous value. However, it retained the Anglo-Saxon rod of 15 old feet (5.03 metres) and the acre of 4 × 40 square rods. Thus, the rod went from 5 old yards to 5 1/2 new yards, or 15 old feet to 16 1/2 new feet. The furlong went from 600 old feet (200 old yards) to 660 new feet (220 new yards). The acre went from 36,000 old square feet to 43,560 new square feet. Scholars have speculated that the Compositio may have represented a compromise between the two earlier systems of units, the Anglo-Saxon and the Roman. The Norman conquest of England introduced just one new unit: the bushel. William the Conqueror, in one of his first legislative acts, confirmed existing Anglo-Saxon measurement, a position which was consistent with Norman policy in dealing with occupied peoples.
The Magna Carta of 1215 stipulates that there should be a standard measure of volume for wine, ale and corn (the London Quarter), and for weight, but does not define these units. Later development of the English system was by defining the units in laws and by issuing measurement standards. Standards were renewed in 1496, 1588, and 1758. The last Imperial Standard Yard in bronze was made in 1845; it served as the standard in the United Kingdom until the yard was redefined by the international yard and pound agreement (as 0.9144 metres) in 1959 (statutory implementation was in the Weights and Measures Act 1963). Over time, the English system had spread to other parts of the British Empire. Timeline Selected excerpts from the bibliography of Marks and Marking of Weights and Measures of the British Isles 1215 Magna Carta — the earliest statutory declaration for uniformity of weights and measures 1335 8 & 9 Edw. 3. c. 1 — First statutory reference describing goods as avoirdupois 1414 2 Hen. 5. Stat. 2. c. 4 — First statutory mention of the Troy pound 1495 12 Hen. 7. c. 5 — New Exchequer standards were constructed, including Winchester capacity measures defined by Troy weight of their content of threshed wheat by stricken (i.e. level) measure (first statutory mention of Troy weight as standard weight for bullion, bread, spices etc.). 1527 Hen VIII — Abolished the Tower pound 1531 23 Hen. 8. c. 4 — Barrel to contain 36 gallons of beer or 32 of ale; kilderkin is half of this; firkin is half again. 1532 24 Hen. 8. c. 3 — First statutory references to use of avoirdupois weight. 1536 28 Hen. 8. c. 4 — Added the tierce (41 gallons) 1588 (Elizabeth I) — A new series of Avoirdupois standard bronze weights (bell-shaped from 56 lb to 2 lb and flat-pile from 8 lb to a dram), with new Troy standard weights in nested cups, from 256 oz downward in a binary progression. 1601–1602 — Standard bushels and gallons were constructed based on the standards of Henry VII and a new series of capacity measures were issued. 1660 12 Cha. 2. c. 24 — Barrel of beer to be 36 gallons, taken by the gauge of the Exchequer standard of the ale quart; barrel of ale to be 32 gallons; all other liquors retailed to be sold by the wine gallon 1689 1 Will. & Mar. c. 24 — Barrels of beer and ale outside London to contain 34 gallons 1695 7 Will. 3. c. 24 (I) — Irish Act about grain measures decreed: unit of measure to be Henry VIII's gallon as confirmed by Elizabeth I; standard measures of the barrel (32 gallons), half-barrel (16 gallons), bushel (8), peck (2), and gallon lodged in the Irish Exchequer; and copies were provided in every county, city, town, etc. 1696 8 & 9 Will. 3. c. 22 — Size of Winchester bushel "every round bushel with a plain and even bottom being 18.5″ wide throughout and 8″ deep" (i.e. a dry measure of about 2150.42 in³ to the bushel, or 268.8 in³ per gallon). 1706 6 Ann. c. 11 — Act of Union decreed the weights and measures of England to be applied in Scotland, whose burgs (towns) were to take charge of the duplicates of the English Standards sent to them. 1706 6 Ann. c. 27 — Wine gallon to be a cylindrical vessel with an even bottom 7″ diameter throughout and 6″ deep from top to bottom of the inside, or holding 231 in³ and no more. 1713 12 Ann. c. 17 — The legal coal bushel to be round with a plain and even bottom, inches from outside to outside and to hold 1 Winchester bushel and 1 quart of water. 1718 5 Geo. 1. c. 18 — Decreed Scots Pint to be exactly 103 in³. 1803 43 Geo. 3. c.
151 — Referred to wine bottles making about 5 to the wine gallon (i.e. Reputed Quarts) 1824 5 Geo. 4. c. 74 — Weights and Measures Act 1824 completely reorganized British metrology and established Imperial weights and measures; defined the yard, troy and avoirdupois pounds and the gallon (as the standard measure for liquids and dry goods not measured by heaped measure), and provided for a 'brass' standard gallon to be constructed. 1825 6 Geo. 4. c. 12 — Delayed introduction of Imperial weights and measures from 1 May 1825 to 1 January 1826. 1835 5 & 6 Will. 4. c. 63 — Weights and Measures Act 1835 abolished local and customary measures, including the Winchester bushel; made heaped measure illegal; required trade to be carried out by avoirdupois weight only, except for bullion, gems and drugs (which were to be sold by troy weight instead); decreed that all forms of coal were to be sold by weight and not measure; legalised the 'stone' as 14 lb, the 'hundredweight' as 112 lb, and the (long) ton as 20 hundredweight, or 2,240 lb. 1853 16 & 17 Vict. c. 29 — Permitted the use of decimal bullion weights. 1866 29 & 30 Vict. c. 82 — Standards of Weights, Measures, and Coinage Act 1866 transferred all duties and standards from the Exchequer to the newly created Standards Department of the Board of Trade. 1878 41 & 42 Vict. c. 49 — Weights and Measures Act 1878 defined the Imperial standard yard and pound; enumerated the secondary standards of measure and weight derived from the Imperial standards; required all trade by weight or measure to be in terms of one of the Imperial weights or measures or some multiple part thereof; abolished the Troy pound. 1963 c. 31 — Weights and Measures Act 1963 abolished the chaldron of coal, the fluid drachm and minim (effective 1 February 1971), discontinued the use of the quarter, abolished the use of the bushel and peck, and abolished the pennyweight (from 31 January 1969). Length Area Administrative units Hide four to eight bovates. A unit of yield, rather than area, it measured the amount of land able to support a single household for agricultural and taxation purposes. Knight's fee five hides. A knight's fee was expected to produce one fully equipped soldier for a knight's retinue in times of war. Hundred or wapentake 100 hides grouped for administrative purposes. Volume Many measures of capacity were understood as fractions or multiples of a gallon. For example, a quart is a quarter of a gallon, and a pint is half of a quart, or an eighth of a gallon. These ratios applied regardless of the specific size of the gallon. Not only did the definition of the gallon change over time, but there were several different kinds of gallon, which existed at the same time. For example, a wine gallon with a volume of 231 cubic inches (the basis of the U.S. gallon), and an ale gallon of 282 cubic inches, were commonly used for many decades prior to the establishment of the imperial gallon. In other words, a pint of ale and a pint of wine were not the same size. On the other hand, some measures such as the fluid ounce were not defined as a fraction of a gallon. For that reason, it is not always possible to give accurate definitions of units such as pints or quarts, in terms of ounces, prior to the establishment of the imperial gallon. General liquid measures Liquid measures as binary submultiples of their respective gallons (ale or wine): Wine Wine is traditionally measured based on the wine gallon and its related units.
Other liquids such as brandy, spirits, mead, cider, vinegar, oil, honey, and so on, were also measured and sold in these units. The wine gallon was re-established by Queen Anne in 1707 after a 1688 survey found the Exchequer no longer possessed the necessary standard but had instead been depending on a copy held by the Guildhall. Defined as 231 cubic inches, it differs from the later imperial gallon, but is equal to the United States customary gallon. Rundlet 18 wine gallons or 1/7 wine pipe Wine barrel 31.5 wine gallons or 1/2 wine hogshead Tierce 42 wine gallons, 1/2 puncheon or 1/3 wine pipe Wine hogshead 2 wine barrels, 63 wine gallons or 1/4 wine tun Puncheon or tertian 2 tierces, 84 wine gallons or 1/3 wine tun Wine pipe or butt 2 wine hogsheads, 3 tierces, 7 rundlets or 126 wine gallons Wine tun 2 wine pipes, 3 puncheons or 252 wine gallons Ale and beer Pin 4.5 gallons or 1/8 beer barrel Firkin 2 pins, 9 gallons (ale, beer or goods) or 1/4 beer barrel Kilderkin 2 firkins, 18 gallons or 1/2 beer barrel Beer barrel 2 kilderkins, 36 gallons or 2/3 beer hogshead Beer hogshead 3 kilderkins, 54 gallons or 1.5 beer barrels Beer pipe or butt 2 beer hogsheads, 3 beer barrels or 108 gallons Beer tun 2 beer pipes or 216 gallons Grain and dry goods The Winchester measure, also known as the corn measure, centered on the bushel of approximately 2,150.42 cubic inches, which had been in use with only minor modifications since at least the late 15th century. The word corn at that time referred to all types of grain. The corn measure was used to measure and sell many types of dry goods, such as grain, salt, ore, and oysters. However, in practice, such goods were often sold by weight. For example, it might be agreed by local custom that a bushel of wheat should weigh 60 pounds, or a bushel of oats should weigh 33 pounds. The goods would be measured out by volume, and then weighed, and the buyer would pay more or less depending on the actual weight. This practice of specifying bushels in weight for each commodity continues today. This was not always the case though, and even the same market that sold wheat and oats by weight might sell barley simply by volume. In fact, the entire system was not well standardized. A sixteenth of a bushel might be called a pottle, hoop, beatment, or quartern, in towns only a short distance apart. In some places potatoes might be sold by the firkin—usually a liquid measure—with one town defining a firkin as 3 bushels, and the next town as 2 1/2 bushels. The pint was the smallest unit in the corn measure. The corn gallon, one eighth of a bushel, was approximately 268.8 cubic inches. Most of the units associated with the corn measure were binary (sub)multiples of the bushel. Other units included the wey (6 or sometimes 5 seams or quarters), and the last (10 seams or quarters). Specific goods Perch 24.75 cubic feet of dry stone, derived from the more commonly known perch, a unit of length equal to 16.5 feet. Cord 128 cubic feet of firewood, a stack of firewood 4 ft × 4 ft × 8 ft Chemistry Fluid-grain The volume of 1 grain of distilled water at 62 °F, 30 inHg pressure. At that reference, water has a density of ≈ 0.9988 g/ml (an imperial fluid ounce of water weighs about 438.0 grains, or 1.001 avoirdupois ounces), and thus the fluid-grain = 1.096 imperial minims = 0.06488 ml, or approximately a drop. Weight The Avoirdupois, Troy and Apothecary systems of weights all shared the same finest unit, the grain; however, they differ as to the number of grains there are in a dram, ounce and pound. This grain was legally defined as the weight of a grain seed from the middle of an ear of barley.
Chemistry

Fluid grain: the volume of 1 grain of distilled water at 62 °F and 30 inHg pressure. At that reference, water has a density of about 0.9988 g/ml (an imperial fluid ounce of it weighs 438.0 grains, or 1.001 avoirdupois ounces), making the fluid grain equal to 1.096 imperial minims, or 0.06488 ml, approximately one drop.

Weight

The avoirdupois, troy and apothecary systems of weights all shared the same finest unit, the grain; however, they differ as to the number of grains in a dram, an ounce and a pound. The grain was legally defined as the weight of a grain seed taken from the middle of an ear of barley. There was also a smaller wheat grain, said to be 3/4 of a (barley) grain, or about 48.6 milligrams. The avoirdupois pound was eventually standardised as 7,000 grains and was used for all products not subject to apothecaries' or tower weight.

Avoirdupois

Troy and Tower

The troy and tower pounds and their subdivisions were used for coins and precious metals. The tower pound, which was based upon an earlier Anglo-Saxon pound, was replaced by the troy pound when a proclamation dated 1526 required the troy pound to be used for mint purposes instead of the tower pound. No standards of the tower pound are known to have survived. Established in the 8th century by Offa of Mercia, a pound sterling (or "pound of sterlings") was that weight of sterling silver sufficient to make 240 silver pennies.

Troy
Grain (gr): 64.79891 mg
Pennyweight (dwt): 24 gr ≈ 1.56 g
Ounce (oz t): 20 dwt = 480 gr ≈ 31.1 g
Pound (lb t): 12 oz t = 5,760 gr ≈ 373 g
Mark: 8 oz t

Tower
Grain (gr T): 45/64 gr t ≈ 45.6 mg
Pennyweight (dwt T): 32 gr T = 22.5 gr t ≈ 1.46 g
Tower ounce (oz T): 20 dwt T = 640 gr T = 18.75 dwt t = 450 gr t ≈ 29.2 g
Tower pound (lb T): 12 oz T = 240 dwt T = 7,680 gr T = 225 dwt t = 5,400 gr t ≈ 350 g
Mark: 8 oz T ≈ 233 g

Apothecary
Grain (gr): 64.79891 mg
Scruple (s ap): 20 gr
Dram (dr ap): 3 s ap = 60 gr
Ounce (oz ap): 8 dr ap = 480 gr
Pound (lb ap): 5,760 gr = 1 lb t

Others
Merchants'/mercantile pound: 15 oz tower = 6,750 gr ≈ 437.4 g
London/mercantile pound: 15 oz troy = 16 oz tower = 7,200 gr ≈ 466.6 g
Mercantile stone: 12 lb L ≈ 5.6 kg
Butcher's stone: 8 lb ≈ 3.63 kg
Sack: 26 st = 364 lb ≈ 165 kg

The carat was once specified as four grains in the English-speaking world. Some local units in the English dominions were (re-)defined in simple terms of English units, such as the Indian tola of 180 grains.

Tod

This was an English weight for wool, with the alternative spelling forms tode, todd, todde, toad, and tood. It was usually 28 pounds, or two stone. The tod, however, was not a national standard and could vary by English shire, ranging from 28 to 32 pounds. In addition to the traditional definition in terms of pounds, the tod has historically also been considered a fixed fraction of a sack, of a sarpler, or of a wey.
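Since all of these systems share the grain, each pound can be reduced to a grain count and compared directly. A small sketch using the grain counts given above (the 64.79891 mg grain is the figure quoted in the troy list):

```python
GRAIN_MG = 64.79891             # milligrams per (barley) grain

POUNDS_IN_GRAINS = {            # grains per pound, from the lists above
    "avoirdupois": 7000,
    "troy (= apothecary)": 5760,
    "tower": 5400,              # expressed in troy grains
    "merchants'": 6750,
    "london": 7200,
}

for name, grains in POUNDS_IN_GRAINS.items():
    print(f"{name:20s} {grains * GRAIN_MG / 1000:6.1f} g")
# avoirdupois ~453.6 g, troy ~373.2 g, tower ~349.9 g,
# merchants' ~437.4 g, london ~466.6 g
```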
Physical sciences
Measurement systems
Basics and measurement
2224886
https://en.wikipedia.org/wiki/Short-eared%20dog
Short-eared dog
The short-eared dog (Atelocynus microtis), also known as the short-eared zorro or small-eared dog, is a unique and elusive canid species endemic to the Amazonian basin. It is the only species assigned to the genus Atelocynus.

Other names

The short-eared dog has many names in the local languages where it is endemic, including names in Portuguese, Spanish (one translating as "short-ear fox"), Chiquitano, Yucuna, Guarayu and Mooré, and achuj in Ninam and Mosetén. Other common names in Spanish translate as "blue-eyed fox", "savannah fox", and "black fox".

Evolution and systematics

After the formation of the Isthmus of Panama in the latter part of the Tertiary (about 2.5 million years ago, in the Pliocene), canids migrated from North America to the southern continent as part of the Great American Interchange. The short-eared dog's ancestors adapted to life in tropical rainforests, developing the requisite morphological and anatomical features. Although it has a superficial resemblance to the bush dog, the short-eared dog's closest living relative is the crab-eating fox. It is one of the most unusual canids. Two subspecies of this canid are recognized.

Occurrence and environment

The short-eared dog can be found in the Amazon rainforest region of South America (in Brazil, Bolivia, Peru, Colombia, Ecuador and possibly Venezuela). There is a single report of "three slender, doglike animals" of this species sighted in the Darien region of Panama in 1984 by German biologist Sigi Weisel and a native Embera-nation Panamanian; this rare species' presence in Panama is possible because of "the continuous mass of forest habitat that covers this region". It lives in various parts of the rainforest environment, preferring areas with little human disturbance. It lives both in lowland forests known as Floresta Amazônica and in terra firme forest, as well as in swamp forest, stands of bamboo, and cloud forest. It is a solitary animal and prefers to remain under tree cover, avoiding both human and other animal interaction.

Appearance

The short-eared dog has short and slender limbs with short and rounded ears. It has a distinctive fox-like muzzle and bushy tail. Its paws are partly webbed, helping adapt it to its partly aquatic habitat. Its fur ranges from dark to reddish-grey, but can also be nearly navy blue, coffee brown, dark grey, or chestnut-grey to black, and the coat is short, with thick and bristly fur. It has a somewhat narrow chest, with dark color variation on the thorax merging to brighter, more reddish tones on the abdominal side of the body.

Diet

This wild dog is mainly a carnivore, with fish, insects, and small mammals making up the majority of its diet. An investigation conducted at the Cocha Cashu Biological Station in Peru into the proportions of different kinds of food in this animal's diet produced the following results:

fish: 28%
insects: 17%
small mammals: 13%
various fruits: 10%
birds: 10%
crabs: 10%
frogs: 4%
reptiles: 3%

Reproduction and behavior

This species has some unique behaviors not typical of other canids. Females of this species are about one-third larger than males. The excited male sprays a musk produced by the tail glands. It prefers a solitary lifestyle in forest areas. It avoids humans in its natural environment. Agitated males raise the hairs on their backs.
The lifespan and gestation period of the short-eared dog are unknown, although sexual maturity is reached at three years of age, relatively late compared to other canid species.

Threats, survival, and ecological concerns

Feral dogs pose a prominent threat to the population of short-eared dogs, as they facilitate the spread of diseases such as canine distemper and rabies to the wild population. The short-eared dog suffers greatly from loss of habitat. There is a significant amount of disturbance in formerly remote South American forests, and almost no habitat is left untouched where daily traffic of human settlers and prospectors destroys or exposes the dogs' dens. Humans also contribute to their extermination by degradation of the species' natural habitat and the general destruction of tropical rainforests.

Status of conservation

The short-eared dog is currently considered near threatened by the IUCN. No comprehensive ecological and genetic research has been carried out on the species.
Biology and health sciences
Canines
Animals
2225073
https://en.wikipedia.org/wiki/Atopic%20dermatitis
Atopic dermatitis
Atopic dermatitis (AD), also known as atopic eczema, is a long-term type of inflammation of the skin. Atopic dermatitis is also often called simply eczema, but the same term is also used to refer to dermatitis, the larger group of skin conditions. Atopic dermatitis results in itchy, red, swollen, and cracked skin. Clear fluid may come from the affected areas, which can thicken over time. Atopic dermatitis affects about 20% of people at some point in their lives. It is more common in younger children. Females are affected slightly more often than males. Many people outgrow the condition. While the condition may occur at any age, it typically starts in childhood, with changing severity over the years. In children under one year of age, the face and limbs and much of the body may be affected. As children get older, the areas on the insides of the knees and folds of the elbows and around the neck are most commonly affected. In adults, the hands and feet are commonly affected. Scratching the affected areas worsens the eczema and increases the risk of skin infections. Many people with atopic dermatitis develop hay fever or asthma. The cause is unknown but believed to involve genetics, immune system dysfunction, environmental exposures, and difficulties with the permeability of the skin. If one identical twin is affected, the other has an 85% chance of having the condition. Those who live in cities and dry climates are more commonly affected. Exposure to certain chemicals or frequent hand washing makes symptoms worse. While emotional stress may make the symptoms worse, it is not a cause. The disorder is not contagious. A diagnosis is typically based on the signs, symptoms and family history. Treatment involves avoiding things that make the condition worse, enhancing the skin barrier through skin care and treating the underlying skin inflammation. Moisturising creams are used to make the skin less dry and prevent AD flare-ups. Anti-inflammatory corticosteroid creams are used to control flare-ups. Creams based on calcineurin inhibitors (tacrolimus or pimecrolimus) may also be used to control flares if other measures are not effective. Certain antihistamine pills might help with itchiness. Things that commonly make it worse include house dust mites, stress and seasonal factors. Phototherapy may be useful in some people. Antibiotics (either by mouth or topically) are usually not helpful unless there is secondary bacterial infection or the person is unwell. Dietary exclusion does not benefit most people and is only needed if food allergies are suspected. More severe AD cases may need systemic medicines such as cyclosporin, methotrexate, dupilumab or baricitinib. Other names for the condition include "infantile eczema", "flexural eczema", "prurigo Besnier", "allergic eczema", and "neurodermatitis".

Signs and symptoms

Symptoms refer to the sensations that people with AD feel, whereas signs refer to a description of the visible changes that result from AD. The main symptom of AD is itching, which can be intense. Some people experience burning, soreness or pain. People with AD often have generally dry skin that can look greyish in people with darker skin tones. Areas of AD are not well defined, and they are typically inflamed (red in light-coloured skin, or purple or dark brown in darker skin).
Surface changes include: scaling, cracking (skin fissures), swelling (oedema), scratch marks (excoriation), bumpiness (papulation), oozing of clear fluid, and thickening of the skin (lichenification) where the AD has been present for a long time. Eczema often starts on the cheeks and outer limbs and body in infants and frequently settles in the folds of the skin, such as behind the knees, the folds of the elbows, around the neck, the wrists and under the buttock folds, as the child grows. Any part of the body can be affected by AD. Atopic dermatitis commonly affects the eyelids, where an extra prominent crease can form under the eyelid due to skin swelling, known as a Dennie-Morgan infraorbital fold. Cracks can form under the ears, which can be painful (infra-auricular fissure). The inflammation from AD often leaves "footprints" known as postinflammatory pigmentation that can be lighter than the normal skin or darker. These marks are not scars and eventually go back to normal over a period of months, provided the underlying AD is treated effectively. People with AD often have dry and scaly skin that spans the entire body, except perhaps the diaper area, and intensely itchy, red, splotchy, raised lesions that form in the bends of the arms or legs, the face, and the neck.

Causes

The cause of AD is not known, although some evidence indicates environmental, immunologic, and potential genetic factors.

Pollution

Since 1970, the rates of atopic dermatitis in the US and UK have increased 3–6-fold. Even today, people who migrate from developing nations to industrialized nations before the age of 4 years experience a dramatic rise in the risk of atopic dermatitis, and have an additional risk when living in urbanized areas of the industrial nation. Recent work has shed light on these and other data, strongly suggesting that early-life industrial exposures may cause atopic dermatitis. Chemicals such as (di)isocyanates and xylene prevent the skin bacteria from producing ceramide-sphingolipid family lipids. Early-life deficiency in these lipids is predictive of which children will go on to develop atopic dermatitis. These chemicals also directly activate an itch receptor in the skin known as TRPA1. The industrial manufacturing and use of both xylene and diisocyanates greatly increased starting in 1970, which greatly expanded the average exposure to these substances. For example, these chemicals are components of several exposures known to increase the risk of atopic dermatitis or worsen symptoms, including wildfires, automobile exhaust, wallpaper adhesives, paints, non-latex foam furniture and cigarette smoke, and they are elements of fabrics like polyester, nylon, and spandex.

Climate

Low humidity and low temperature increase the prevalence and risk of flares in people with atopic dermatitis.

Genetics

Genes that may contribute to AD are mainly those responsible for immune response (e.g. TH2 cytokine and JAK-STAT pathway genes) and skin barrier (e.g. filaggrin, claudin-1, loricrin). Immune response: Many people with AD have a family history or a personal history of atopy. Atopy is a term used to describe individuals who produce substantial amounts of IgE. Such individuals have an increased tendency to develop asthma, hay fever, eczema, urticaria and allergic rhinitis. Up to 80% of people with atopic dermatitis have elevated total or allergen-specific IgE levels. Skin barrier: About 30% of people with AD have mutations in the gene for the production of filaggrin (FLG), which increase the risk for early onset of atopic dermatitis and developing asthma.
However, expression of filaggrin protein or its breakdown products offers no predictive utility for atopic dermatitis risk. People with atopic dermatitis also have decreased expression of the tight junction protein claudin-1, which deteriorates the bioelectric barrier function in the epidermis.

Hygiene hypothesis

According to the hygiene hypothesis, early childhood exposure to certain microorganisms (such as gut flora and helminth parasites) protects against allergic diseases by contributing to the development of the immune system. This exposure is limited in a modern "sanitary" environment, and the incorrectly developed immune system is prone to develop allergies to harmless substances. Some support exists for this hypothesis with respect to AD. Those exposed to dogs while growing up have a lower risk of atopic dermatitis. Also, epidemiological studies support a protective role for helminths against AD. Likewise, children with poor hygiene are at a lower risk of developing AD, as are children who drink unpasteurized milk.

Allergens

In a small percentage of cases, atopic dermatitis is caused by sensitization to foods such as milk, but there is growing consensus that food allergy most likely arises as a result of skin barrier dysfunction caused by AD, rather than food allergy causing the skin problems. Atopic dermatitis sometimes appears associated with coeliac disease and non-coeliac gluten sensitivity. Because a gluten-free diet (GFD) improves symptoms in such cases, gluten seems to be the cause of the AD in them. A diet high in fruits seems to have a protective effect against AD, whereas the opposite seems true for heavily processed foods. Exposure to allergens, either from food or the environment, can exacerbate existing atopic dermatitis. Exposure to dust mites, for example, is believed to contribute to the risk of developing AD.

Role of Staphylococcus aureus

Colonization of the skin by the bacterium S. aureus is extremely prevalent in those with atopic dermatitis. Abnormalities in the skin barrier of persons with AD are exploited by S. aureus to trigger cytokine expression, thus aggravating the condition. However, atopic dermatitis is non-communicable and therefore could not be directly caused by a highly infectious organism. Furthermore, there is insufficient evidence for the effectiveness of anti-staphylococcal treatments for treating S. aureus in infected or uninfected eczema. The role of S. aureus in causing itching in atopic dermatitis has been studied.

Hard water

The prevalence of atopic dermatitis in children may be linked to the level of calcium carbonate or "hardness" of household drinking water. Living in areas with hard water may also play a part in the development of AD in early life. However, when AD is already established, using water softeners at home does not reduce the severity of the symptoms.

Pathophysiology

Excessive type 2 inflammation underlies the pathophysiology of atopic dermatitis. Disruption of the epidermal barrier is thought to play an integral role in the pathogenesis of AD. Disruption of the epidermal barrier allows allergens to penetrate the epidermis to deeper layers of the skin. This leads to activation of epidermal inflammatory dendritic and innate lymphoid cells, which subsequently attract Th2 CD4+ helper T cells to the skin. This dysregulated Th2 inflammatory response is thought to lead to the eczematous lesions.
The Th2 helper T cells become activated, leading to the release of inflammatory cytokines including IL-4, IL-13 and IL-31, which activate downstream Janus kinase (Jak) pathways. The active Jak pathways lead to inflammation and downstream activation of plasma cells and B lymphocytes, which release antigen-specific IgE, contributing to further inflammation. Other CD4+ helper T-cell pathways thought to be involved in atopic dermatitis inflammation include the Th1, Th17, and Th22 pathways. Some specific CD4+ helper T-cell inflammatory pathways are more commonly activated in specific ethnic groups with AD (for example, the Th-2 and Th-17 pathways are commonly activated in Asian people), possibly explaining the differences in the phenotypic presentation of atopic dermatitis in specific populations. Mutations in the filaggrin gene, FLG, also cause impairment in the skin barrier that contributes to the pathogenesis of AD. Filaggrin is produced by epidermal skin cells (keratinocytes) in the horny layer of the epidermis. Filaggrin stimulates skin cells to release moisturizing factors and lipid matrix material, which cause adhesion of adjacent keratinocytes and contribute to the skin barrier. A loss-of-function mutation of filaggrin causes loss of this lipid matrix and of the external moisturizing factors, subsequently leading to disruption of the skin barrier. The disrupted skin barrier leads to transdermal water loss (leading to the xerosis or dry skin commonly seen in AD) and to antigen and allergen penetration of the epidermal layer. Filaggrin mutations are also associated with a decrease in natural antimicrobial peptides found on the skin, subsequently leading to disruption of the skin flora and bacterial overgrowth (commonly Staphylococcus aureus overgrowth or colonization). Atopic dermatitis is also associated with the release of pruritogens (molecules that stimulate pruritus or itching) in the skin. Keratinocytes, mast cells, eosinophils and T-cells release pruritogens in the skin, leading to activation of Aδ fibers and group C nerve fibers in the epidermis and dermis and contributing to sensations of pruritus and pain. The pruritogens include the Th2 cytokines IL-4, IL-13 and IL-31, histamine, and various neuropeptides. Mechanical stimulation from scratching lesions can also lead to the release of pruritogens, contributing to the itch-scratch cycle, whereby there is increased pruritus or itch after scratching a lesion. Chronic scratching of lesions can cause thickening or lichenification of the skin, or prurigo nodularis (generalized nodules that are severely itchy). Another factor in the barrier failure and immunological dysregulation in people with atopic dermatitis may be decreases in the tight junction protein claudin-1. Inhibiting claudin-1 expression in human keratinocytes has been shown both to reduce tight junction function and to increase keratinocyte proliferation in vitro. It has also been discovered that this deteriorates the bioelectric barrier function in the epidermis.

Diagnosis

Atopic dermatitis is typically diagnosed clinically, meaning the diagnosis is based on signs and symptoms alone, without special testing. Several different criteria developed for research have also been validated to aid in diagnosis. Of these, the UK Diagnostic Criteria, based on the work of Hanifin and Rajka, have been the most widely validated. Other diseases that must be excluded before making a diagnosis include contact dermatitis, psoriasis, and seborrheic dermatitis.
Prevention

There are no established clinical methods using dietary or topical strategies to inhibit or prevent atopic dermatitis. Specific dietary plans during pregnancy and in early childhood, such as eating fatty fish (or taking omega-3 supplements), are not effective. Taking probiotics (for example Lactobacillus rhamnosus) during pregnancy and feeding probiotics to infants are strategies under research, with only preliminary evidence that they may be preventative. Using moisturizers daily in infants during the first year of life does not help to prevent atopic dermatitis, and might even increase the risk of skin infections.

Treatments

No cure for AD is known, although treatments may reduce the severity and frequency of flares. The most commonly used topical treatments for AD are topical corticosteroids (to get control of flare-ups) and moisturisers (emollients) to help keep control. Clinical trials often measure the efficacy of treatments with a severity scale such as the SCORAD index or the Eczema Area and Severity Index.

Moisturisers

Daily basic care is intended to stabilize the barrier function of the skin to mitigate its sensitivity to irritation and to the penetration of allergens. Affected persons often report that improvement of skin hydration parallels improvement in AD symptoms. Moisturisers (or emollients) can improve skin comfort and may reduce disease flares. They can be used as leave-on treatments, bath additives or soap substitutes. There are many different products, but the majority of leave-on treatments (least to most greasy) are lotions, creams, gels or ointments. All of the different types of moisturisers are equally effective, so people need to choose one or more products based on what suits them, according to their age, the body site affected, the climate or season, and personal preference. Non-medicated prescription moisturisers may also be no more effective than over-the-counter moisturisers. The use of emollient bath additives does not provide any additional benefits.

Medication

Topical

Creams and ointments containing corticosteroids applied directly on the skin (topical) are effective in managing atopic dermatitis. Newer (second-generation) corticosteroids, such as fluticasone propionate and mometasone furoate, are more effective and safer than older ones. Strong and moderate corticosteroids work better than weaker ones. They are also generally safe and do not cause skin thinning when used intermittently to treat AD flare-ups. They are also safe when used twice a week for preventing flares (also known as weekend treatment). Applying them once daily is as effective as applying them twice or more daily. In addition to topical corticosteroids, topical calcineurin inhibitors, such as tacrolimus or pimecrolimus, are also recommended as first-line therapies for managing atopic dermatitis. Both tacrolimus and pimecrolimus are effective and safe to use in AD. Crisaborole, an inhibitor of PDE-4, is also effective and safe as a topical treatment for mild-to-moderate AD. Ruxolitinib, a Janus kinase inhibitor, has uncertain efficacy and safety.

Systemic

When topical (on skin) treatments fail to control severe AD flares, medications taken by mouth (systemic treatment) can be used. Conventional oral medications for AD include systemic immunosuppressants, such as ciclosporin, methotrexate, azathioprine, and mycophenolate. Antidepressants and naltrexone may be used to control pruritus (itchiness).
Newer medications, such as monoclonal antibodies and JAK inhibitors, are highly effective for managing atopic dermatitis, but modestly increase the risk of conjunctivitis. These include dupilumab (Dupixent), tralokinumab (Adtralza, Adbry), abrocitinib (Cibinqo), baricitinib (Olumiant) and upadacitinib (Rinvoq). Among monoclonal antibodies, dupilumab and tralokinumab are approved to treat moderate-to-severe eczema in the US and the EU. Lebrikizumab is also approved in the EU for treating moderate-to-severe AD, but in the US its approval was declined due to manufacturing issues. Abrocitinib and upadacitinib have also been approved in the US for the treatment of moderate-to-severe eczema. Nemolizumab (Nemluvio) was approved to treat atopic dermatitis in December 2024. Allergen immunotherapy may be effective in relieving symptoms of AD, but it also comes with an increased risk of adverse events. This treatment consists of a series of injections, or of drops under the tongue, of a solution containing the allergen. The skin of people with AD can easily get infected, most commonly by the bacterium Staphylococcus aureus. Signs of this include oozing fluid, a yellow crust on the skin, worsening eczema symptoms and fever. Antibiotics are commonly used to target overgrowth of S. aureus, but their benefit is limited, and they increase the risk of antimicrobial resistance. For these reasons, they are only recommended for people who not only present symptoms on the skin but also feel systemically unwell.

Diet

The role of vitamin D in atopic dermatitis is not clear, but vitamin D supplementation may improve its symptoms. There is no clear benefit for pregnant mothers taking omega-3 long-chain polyunsaturated fatty acids (LCPUFA) in preventing the development of AD in their child. Several probiotics seem to have a positive effect, with a roughly 20% reduction in the rate of AD. Probiotics containing multiple strains of bacteria seem to work the best. In people with celiac disease or non-celiac gluten sensitivity, a gluten-free diet improves their symptoms and prevents the occurrence of new outbreaks. The use of blood-specific IgE or skin prick tests to guide dietary exclusions, with the aim of improving disease severity or control, is controversial. Clinicians vary in their use of these tests for this purpose, and there is very limited evidence of any benefit.

Lifestyle

Health professionals often recommend that people with AD bathe regularly in lukewarm baths, especially in salt water, to moisten their skin. Dilute bleach baths may be helpful for people with moderate and severe eczema, but only for those with Staphylococcus aureus colonization. Avoiding woolen clothing with large-diameter or scratchy fibres is usually recommended for people with AD, as these can trigger a flare. Safe alternatives are clothes made from fabrics with smaller fibre diameters and smooth fibers. These include super- and ultrafine merino wool and fabrics with anti-microbial textile finishes. Wearing silk is also safe but does not improve symptoms of AD.

Self-management

Living with AD requires a high level of self-management (for example, avoiding triggers) and adherence to treatments (regularly applying medication). Good self-management contributes to better disease outcomes and quality of life. However, worries about topical treatments, misconceptions about the condition, unclear information and unsuitable communication from doctors can make living with AD more difficult.
People with AD often do not regard eczema as a long-term condition and hope they will outgrow or cure it. This can worsen adherence to the necessary long-term treatment. Doctors should not imply that it is a short-term condition and should emphasise that, even though it cannot be cured, it can be controlled effectively. Appropriate communication from doctors can support self-management. Doctors need to address concerns about treatments and provide clear and consistent information about the condition. Treatment regimens can be confusing, and written action plans may support people in knowing which treatments to use where and when. A website supporting self-management has been shown to improve AD symptoms for parents, children, adolescents and young adults.

Light

Phototherapeutic treatment involves exposure to broad- or narrow-band ultraviolet (UV) light. UV radiation exposure has been found to have a localized immunomodulatory effect on affected tissues and may be used to decrease the severity and frequency of flares. Among the different types of phototherapy, only narrowband (NB) ultraviolet B (UVB) exposure might help with the severity of AD and ease itching. However, UV radiation has also been implicated in various types of skin cancer, and thus UV treatment is not without risk. UV phototherapy is not indicated in young adults and children due to this risk of skin cancer with prolonged use or exposure.

Alternative medicine

While several Chinese herbal medicines are intended for treating atopic eczema, there is no evidence showing that these treatments, taken by mouth or applied topically, reduce the severity of eczema in children or adults.

Impact

Atopic dermatitis significantly impairs the quality of life of affected individuals. The impact of AD extends beyond physical symptoms, encompassing substantial humanistic and psychosocial effects. Its burden is significant, especially given the high indirect costs and psychological impacts on quality of life. According to the Global Burden of Disease Study, AD is the skin disease with the highest disability-adjusted life year burden and ranks in the top 15 of all nonfatal diseases. In comparison with other dermatological conditions like psoriasis and urticaria, AD presents a significantly higher burden. While AD remains incurable, reducing its severity can significantly alleviate its burden. Understanding the extent of the burden of AD can aid in better resource allocation and prioritization of interventions, benefiting both people with atopic dermatitis and healthcare systems.

Humanistic burden

Atopic dermatitis significantly decreases the quality of life by affecting various aspects of people's lives. The psychological impact, often resulting in conditions like depression and anxiety, is a major factor leading to decreased quality of life. Sleep disturbances, commonly reported in people with AD, further contribute to the humanistic burden, affecting daily productivity and concentration.

Clinical and economic burden

Economically, AD imposes a substantial burden on healthcare systems, with the average direct cost per patient estimated at 4,411 USD and the average indirect cost reaching 9,068 USD annually. These figures highlight the considerable financial impact of the disease on healthcare systems and on people with the condition.

Productivity loss

Atopic dermatitis also has a marked impact on productivity.
The total number of days lost annually due to these factors is about 68.8 days for the general AD population, with presenteeism accounting for the majority of these days. The impact on productivity varies significantly with the severity of AD, with more severe cases resulting in higher numbers of days lost.

Burden of disease in the Middle East and Africa

Atopic dermatitis leads to the highest loss in disability-adjusted life years compared to other skin diseases in the Middle East and Africa. Patients with AD in these regions lose approximately 0.19 quality-adjusted life years (QALYs) annually due to the disease. Egypt experiences the highest QALY loss and Kuwait the lowest. The average annual healthcare cost per patient varies widely: it is highest in the United Arab Emirates, estimated at US$3,569, and lowest in Algeria, at US$312. These costs are influenced by the economic status of each country and the cost of healthcare. Advanced treatments like targeted therapies and phototherapy are among the main cost drivers. Indirect costs, primarily due to productivity loss from absenteeism and presenteeism, average about 67% of the total burden in these countries. Indirect costs in Saudi Arabia are the highest in the area, estimated at US$364 million. Factors like mental health impact, side effects of treatments, and other indirect costs such as personal care products are not fully accounted for in these estimates, suggesting that the actual burden might be even higher. To mitigate the burden of AD, experts recommend strategic actions across five key domains: capacity building, guidelines, research, public awareness, and patient support and education. Key measures include increasing the number of dermatologists, establishing evidence-based treatment guidelines, investing in patient education, and enhancing public awareness to reduce stigma. Improving access to effective treatments and conducting further research on AD's impact are also crucial for reducing the disease's clinical, economic, and humanistic burdens in the Middle East and Africa.

Epidemiology

Since the beginning of the 20th century, many inflammatory skin disorders have become more common; AD is a classic example of such a disease. Although AD was previously considered primarily a childhood disease, it is now recognized as highly prevalent in adults, with an estimated adult prevalence of 3–5% globally. It now affects 15–30% of children and 2–10% of adults in developed countries, and in the United States its prevalence has nearly tripled in the past 30–40 years. Over 15 million American adults and children have AD.

Society and culture

Conspiracy theories

A number of false and conspiratorial claims about AD have emerged on the internet and have been amplified by social media. These conspiracy theories include, among others, claims that AD is caused by 5G, formaldehyde in food, vaccines, and topical steroids. Various unproven theories also claim that vegan diets, apple cider vinegar, calendula, and witch hazel can cure AD and that air purifiers reduce the risk of developing AD.

Research

Leukotriene receptor antagonists, such as montelukast, might be useful for the treatment of AD, but their effectiveness has not yet been proven by research.
Biology and health sciences
Specific diseases
Health
3062954
https://en.wikipedia.org/wiki/Wheeler%E2%80%93Feynman%20absorber%20theory
Wheeler–Feynman absorber theory
The Wheeler–Feynman absorber theory (also called the Wheeler–Feynman time-symmetric theory), named after its originators, the physicists Richard Feynman and John Archibald Wheeler, is a theory of electrodynamics based on a relativistically correct extension of action-at-a-distance theories of electron particles. The theory postulates no independent electromagnetic field. Rather, the whole theory is encapsulated by the Lorentz-invariant action $S$ of the particle trajectories, defined as

$S = -\sum_a m_a c \int \sqrt{\dot{x}_{a\mu}\,\dot{x}_a^{\mu}}\, d\alpha \;-\; \sum_{a<b} \frac{e_a e_b}{c} \iint \delta\big(\|x_a(\alpha) - x_b(\beta)\|^2\big)\, \dot{x}_{a\mu}\,\dot{x}_b^{\mu}\, d\alpha\, d\beta,$

where $\|x\|^2 \equiv x_\mu x^\mu$. The absorber theory is invariant under time-reversal transformation, consistent with the lack of any physical basis for microscopic time-reversal symmetry breaking. Another key principle resulting from this interpretation, and somewhat reminiscent of Mach's principle and the work of Hugo Tetrode, is that elementary particles are not self-interacting. This immediately removes the problem of electron self-energy giving an infinity in the energy of an electromagnetic field.

Motivation

Wheeler and Feynman begin by observing that classical electromagnetic field theory was designed before the discovery of electrons: charge is a continuous substance in the theory. An electron particle does not naturally fit into the theory: should a point charge see the effect of its own field? They reconsider the fundamental problem of a collection of point charges, taking up a field-free action-at-a-distance theory developed separately by Karl Schwarzschild, Hugo Tetrode, and Adriaan Fokker. Unlike the instantaneous action-at-a-distance theories of the early 1800s, these "direct interaction" theories are based on interaction propagating at the speed of light. They differ from the classical field theory in three ways: 1) no independent field is postulated; 2) the point charges do not act upon themselves; 3) the equations are time symmetric. Wheeler and Feynman propose to develop these equations into a relativistically correct generalization of electromagnetism based on Newtonian mechanics.

Problems with previous direct-interaction theories

The Tetrode–Fokker work left unsolved two major problems. First, in a non-instantaneous action-at-a-distance theory, the equal action-reaction of Newton's laws of motion conflicts with causality. If an action propagates forward in time, the reaction would necessarily propagate backwards in time. Second, existing explanations of the radiation reaction force, or radiation resistance, depended upon accelerating electrons interacting with their own field; the direct interaction models explicitly omit self-interaction.

Absorber and radiation resistance

Wheeler and Feynman postulate the "universe" of all other electrons as an absorber of radiation to overcome these issues and extend the direct interaction theories. Rather than considering an unphysical isolated point charge, they model all charges in the universe with a uniform absorber in a shell around a charge. As the charge moves relative to the absorber, it radiates into the absorber, which "pushes back", causing the radiation resistance.

Key result

Feynman and Wheeler obtained their result in a very simple and elegant way. They considered all the charged particles (emitters) present in our universe and assumed all of them to generate time-reversal symmetric waves. The resulting field is

$E(\mathbf{x},t) = \sum_n \frac{E_n^{\text{ret}}(\mathbf{x},t) + E_n^{\text{adv}}(\mathbf{x},t)}{2}.$

Then they observed that if the relation

$E_{\text{free}}(\mathbf{x},t) = \sum_n \frac{E_n^{\text{ret}}(\mathbf{x},t) - E_n^{\text{adv}}(\mathbf{x},t)}{2} = 0$

holds, then $E_{\text{free}}$, being a solution of the homogeneous Maxwell equation, can be used to obtain the total field

$E(\mathbf{x},t) = \sum_n \frac{E_n^{\text{ret}} + E_n^{\text{adv}}}{2} + \sum_n \frac{E_n^{\text{ret}} - E_n^{\text{adv}}}{2} = \sum_n E_n^{\text{ret}}(\mathbf{x},t).$

The total field is then the observed pure retarded field.
The assumption that the free field is identically zero is the core of the absorber idea. It means that the radiation emitted by each particle is completely absorbed by all other particles present in the universe. To better understand this point, it may be useful to consider how the absorption mechanism works in common materials. At the microscopic scale, it results from the sum of the incoming electromagnetic wave and the waves generated by the electrons of the material, which react to the external perturbation. If the incoming wave is absorbed, the result is a zero outgoing field. In the absorber theory the same concept is used, however, in the presence of both retarded and advanced waves.

Arrow of time ambiguity

The resulting wave appears to have a preferred time direction, because it respects causality. However, this is only an illusion. Indeed, it is always possible to reverse the time direction by simply exchanging the labels emitter and absorber. Thus, the apparently preferred time direction results from the arbitrary labelling. Wheeler and Feynman claimed that thermodynamics picked the observed direction; cosmological selections have also been proposed. The requirement of time-reversal symmetry is, in general, difficult to reconcile with the principle of causality. Maxwell's equations and the equations for electromagnetic waves have, in general, two possible solutions: a retarded (delayed) solution and an advanced one. Accordingly, any charged particle generates waves, say at time $t_0$ and point $x_0$, which will arrive at point $x$ at the instant $t = t_0 + |x - x_0|/c$ (here $c$ is the speed of light), after the emission (retarded solution), and other waves, which will arrive at the same place at the instant $t = t_0 - |x - x_0|/c$, before the emission (advanced solution). The latter, however, violates the causality principle: advanced waves could be detected before their emission. Thus the advanced solutions are usually discarded in the interpretation of electromagnetic waves. In the absorber theory, instead, charged particles are considered as both emitters and absorbers, and the emission process is connected with the absorption process as follows: both the retarded waves from emitter to absorber and the advanced waves from absorber to emitter are considered. The sum of the two, however, results in causal waves, although the anti-causal (advanced) solutions are not discarded a priori.

Alternatively, the way that Wheeler and Feynman came up with the primary equation is this: they assumed that their Lagrangian only interacted when and where the fields for the individual particles were separated by a proper time of zero. So, since only massless particles propagate from emission to detection with zero proper-time separation, this Lagrangian automatically demands an electromagnetic-like interaction.

New interpretation of radiation damping

One of the major results of the absorber theory is the elegant and clear interpretation of the electromagnetic radiation process. A charged particle that experiences acceleration is known to emit electromagnetic waves, i.e., to lose energy. Thus, the Newtonian equation for the particle must contain a dissipative force (damping term), which takes into account this energy loss. In the causal interpretation of electromagnetism, Hendrik Lorentz and Max Abraham proposed that such a force, later called the Abraham–Lorentz force, is due to the retarded self-interaction of the particle with its own field.
This first interpretation, however, is not completely satisfactory, as it leads to divergences in the theory and needs some assumptions on the structure of the charge distribution of the particle. Paul Dirac generalized the formula to make it relativistically invariant. While doing so, he also suggested a different interpretation. He showed that the damping term can be expressed in terms of a free field acting on the particle at its own position:

$E_{\text{damping}}(x_j, t) = \frac{E_j^{\text{ret}}(x_j,t) - E_j^{\text{adv}}(x_j,t)}{2}.$

However, Dirac did not propose any physical explanation of this interpretation. A clear and simple explanation can instead be obtained in the framework of absorber theory, starting from the simple idea that each particle does not interact with itself. This is actually the opposite of the first Abraham–Lorentz proposal. The field acting on the particle at its own position (the point $x_j$) is then

$E(x_j, t) = \sum_{n \neq j} \frac{E_n^{\text{ret}}(x_j,t) + E_n^{\text{adv}}(x_j,t)}{2}.$

If we add the (vanishing) free-field term to this expression, we obtain

$E(x_j, t) = \sum_{n \neq j} E_n^{\text{ret}}(x_j,t) + \frac{E_j^{\text{ret}}(x_j,t) - E_j^{\text{adv}}(x_j,t)}{2}$

and, thanks to Dirac's result,

$E(x_j, t) = \sum_{n \neq j} E_n^{\text{ret}}(x_j,t) + E_{\text{damping}}(x_j, t).$

Thus, the damping force is obtained without the need for self-interaction, which is known to lead to divergences, while also giving a physical justification to the expression derived by Dirac.
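The bookkeeping behind this derivation is simple enough to verify symbolically. The following sketch (Python with SymPy; the four-particle "universe" and the field symbols are illustrative stand-ins, not part of the original theory) checks that adding the vanishing free field to the time-symmetric field of all other particles yields the full retarded field of the others plus Dirac's damping term:

```python
import sympy as sp

N = 4                                  # a toy "universe" of N charges
ret = sp.symbols(f"R0:{N}")            # retarded field of each particle
adv = sp.symbols(f"A0:{N}")            # advanced field of each particle
j = 0                                  # the particle under consideration

# Time-symmetric field of all *other* particles at particle j
# (no self-interaction):
F_j = sum((ret[n] + adv[n]) / 2 for n in range(N) if n != j)

# The absorber condition says this free field vanishes identically:
free_field = sum((ret[n] - adv[n]) / 2 for n in range(N))

# Adding the (zero) free field to F_j:
total = sp.expand(F_j + free_field)

# Expected: retarded fields of the others plus Dirac's damping term for j.
expected = sum(ret[n] for n in range(N) if n != j) + (ret[j] - adv[j]) / 2
assert sp.simplify(total - expected) == 0
```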
Developments since original formulation

Gravity theory

Inspired by the Machian nature of the Wheeler–Feynman absorber theory for electrodynamics, Fred Hoyle and Jayant Narlikar proposed their own theory of gravity in the context of general relativity. This model still exists in spite of recent astronomical observations that have challenged the theory. Stephen Hawking had criticized the original Hoyle–Narlikar theory, believing that the advanced waves going off to infinity would lead to a divergence, as indeed they would, if the universe were only expanding.

Transactional interpretation of quantum mechanics

Again inspired by the Wheeler–Feynman absorber theory, the transactional interpretation of quantum mechanics (TIQM), first proposed in 1986 by John G. Cramer, describes quantum interactions in terms of a standing wave formed by retarded (forward-in-time) and advanced (backward-in-time) waves. Cramer claims it avoids the philosophical problems with the Copenhagen interpretation and the role of the observer, and resolves various quantum paradoxes, such as quantum nonlocality, quantum entanglement and retrocausality.

Attempted resolution of causality

T. C. Scott and R. A. Moore demonstrated that the apparent acausality suggested by the presence of advanced Liénard–Wiechert potentials could be removed by recasting the theory in terms of retarded potentials only, without the complications of the absorber idea. The Lagrangian describing a particle ($p_1$) under the influence of the time-symmetric potential generated by another particle ($p_2$) is

$L_1 = T_1 - \frac{1}{2}\left[(V_R)_1^2 + (V_A)_1^2\right],$

where $T_i$ is the relativistic kinetic energy functional of particle $i$, and $(V_R)_i^j$ and $(V_A)_i^j$ are respectively the retarded and advanced Liénard–Wiechert potentials acting on particle $i$ and generated by particle $j$. The corresponding Lagrangian for particle $p_2$ is

$L_2 = T_2 - \frac{1}{2}\left[(V_R)_2^1 + (V_A)_2^1\right].$

It was originally demonstrated with computer algebra and then proven analytically that the difference between an advanced potential and the corresponding retarded potential, $(V_A)_i^j - (V_R)_j^i$, is a total time derivative, i.e. a divergence in the calculus of variations, and thus it gives no contribution to the Euler–Lagrange equations. Thanks to this result the advanced potentials can be eliminated; here the total derivative plays the same role as the free field. The Lagrangian for the N-body system is therefore

$L_N = \sum_{i=1}^{N} T_i - \frac{1}{2}\sum_{\substack{i,j=1 \\ j \neq i}}^{N} (V_R)_i^j.$

The resulting Lagrangian is symmetric under the exchange of $i$ with $j$. For $N = 2$ this Lagrangian will generate exactly the same equations of motion as $L_1$ and $L_2$. Therefore, from the point of view of an outside observer, everything is causal. This formulation reflects particle-particle symmetry with the variational principle applied to the N-particle system as a whole, and thus Tetrode's Machian principle. Only if we isolate the forces acting on a particular body do the advanced potentials make their appearance. This recasting of the problem comes at a price: the N-body Lagrangian depends on all the time derivatives of the curves traced by all particles, i.e. the Lagrangian is of infinite order. However, much progress was made in examining the unresolved issue of quantizing the theory. Also, this formulation recovers the Darwin Lagrangian, from which the Breit equation was originally derived, but without the dissipative terms. This ensures agreement with theory and experiment, up to but not including the Lamb shift. Numerical solutions for the classical problem were also found. Furthermore, Moore showed that a model by Feynman and Albert Hibbs is amenable to the methods of higher than first-order Lagrangians and revealed chaotic-like solutions. Moore and Scott showed that the radiation reaction can be alternatively derived using the notion that, on average, the net dipole moment is zero for a collection of charged particles, thereby avoiding the complications of the absorber theory. On this view, the acausality is merely apparent, and the entire problem goes away. An opposing view was held by Einstein.

Alternative Lamb shift calculation

As mentioned previously, a serious criticism against the absorber theory is that its Machian assumption that point particles do not act on themselves does not allow (infinite) self-energies, and consequently offers no explanation for the Lamb shift according to quantum electrodynamics (QED). Ed Jaynes proposed an alternate model where the Lamb-like shift is due instead to the interaction with other particles, very much along the same notions of the Wheeler–Feynman absorber theory itself. One simple model is to calculate the motion of an oscillator coupled directly with many other oscillators. Jaynes has shown that it is easy to get both spontaneous emission and Lamb shift behavior in classical mechanics. Furthermore, Jaynes' alternative provides a solution to the process of "addition and subtraction of infinities" associated with renormalization. This model leads to the same type of Bethe logarithm (an essential part of the Lamb shift calculation), vindicating Jaynes' claim that two different physical models can be mathematically isomorphic to each other and therefore yield the same results, a point also apparently made by Scott and Moore on the issue of causality.
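The coupled-oscillator picture can be illustrated numerically. The sketch below (all parameters are illustrative assumptions, not values from Jaynes' work) integrates one oscillator bilinearly coupled to a dense band of bath oscillators; the central oscillator's energy decays quasi-irreversibly into the bath, mimicking spontaneous-emission-like damping in purely classical mechanics:

```python
import numpy as np
from scipy.integrate import solve_ivp

w0 = 1.0                                # central oscillator frequency
N = 300                                 # number of bath oscillators
wk = np.linspace(0.5, 1.5, N)           # bath band straddling w0
g = np.full(N, 0.002)                   # weak, uniform couplings (assumed)

def rhs(t, y):
    x, v = y[0], y[1]
    xk, vk = y[2:2 + N], y[2 + N:]
    ax = -w0**2 * x + g @ xk            # central oscillator driven by the bath
    ak = -wk**2 * xk + g * x            # each bath mode driven by the oscillator
    return np.concatenate(([v, ax], vk, ak))

y0 = np.zeros(2 + 2 * N)
y0[0] = 1.0                             # displace the oscillator, bath at rest
sol = solve_ivp(rhs, (0.0, 300.0), y0, max_step=0.1)

energy = 0.5 * sol.y[1]**2 + 0.5 * w0**2 * sol.y[0]**2
print(energy[0], energy[-1])            # energy leaks into the bath over time
```

Before any Poincaré recurrence (which sets in only on timescales inversely proportional to the bath's frequency spacing), the decay looks effectively irreversible, which is the qualitative point of the model.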
Relationship to quantum field theory

This universal absorber theory is mentioned in the chapter titled "Monster Minds" in Feynman's autobiographical work Surely You're Joking, Mr. Feynman! and in Vol. II of the Feynman Lectures on Physics. It led to the formulation of a framework of quantum mechanics using a Lagrangian and action as starting points, rather than a Hamiltonian, namely the formulation using Feynman path integrals, which proved useful in Feynman's earliest calculations in quantum electrodynamics and quantum field theory in general. Both retarded and advanced fields appear respectively as retarded and advanced propagators, and also in the Feynman propagator and the Dyson propagator. In hindsight, the relationship between retarded and advanced potentials shown here is not so surprising in view of the fact that, in quantum field theory, the advanced propagator can be obtained from the retarded propagator by exchanging the roles of field source and test particle (usually within the kernel of a Green's function formalism). In quantum field theory, advanced and retarded fields are simply viewed as mathematical solutions of Maxwell's equations whose combinations are decided by the boundary conditions.
Physical sciences
Quantum mechanics
Physics
3064671
https://en.wikipedia.org/wiki/Suicide%20intervention
Suicide intervention
Suicide intervention is a direct effort to prevent a person or persons from attempting to take their own life or lives intentionally. Asking direct questions is a recommended first step in intervention. These questions may include asking whether a person is having thoughts of suicide, whether they have thought about how they would do it, whether they have access to the means to carry out their plan, and whether they have a timeframe in mind. Asking these questions builds connection, a key protective factor in preventing suicide. These questions also enable all parties to establish a better understanding of risk. Research shows that asking direct questions about suicide does not increase suicidal ideation, and may decrease it. Most countries have some form of mental health legislation which allows people expressing suicidal thoughts or intent to be detained involuntarily for psychiatric treatment when their judgment is deemed to be impaired. These laws may grant the courts, police, or a medical doctor the power to order an individual to be apprehended to hospital for treatment. This is sometimes referred to as being committed. The review of ongoing involuntary treatment may be conducted by the hospital, the courts, or a quasi-judicial body, depending on the jurisdiction. Legislation normally requires police or court authorities to bring the individual to a hospital for treatment as soon as possible, and not to hold them in locations such as a police station. Mental health professionals and some other health professionals receive training in the assessment and treatment of suicidality. Suicide hotlines are widely available for people seeking help. However, some people may be reluctant to discuss their suicidal thoughts, due to stigma, previous negative experiences, fear of detainment, or other reasons.

First aid for suicidal ideation

There are a number of myths about suicide, for instance that it is usually unpredictable. In 75–80% of cases, the suicidal person has given some sort of warning sign. A key myth to dispel is that talking to someone about suicide increases the risk of suicide. This is simply not true. Someone expressing suicidal thoughts should be encouraged to seek mental health treatment. Friends and family can provide supportive listening, empathy, and encouragement to develop a safety plan. Serious warning signs of imminent suicidal risk include an expressed intent to commit suicide and a specific plan with access to lethal means. If a person expresses these warning signs, emergency services should be contacted immediately. Another myth is that someone who speaks of committing suicide is merely seeking attention; it is important that the person feel they are taken seriously. Safety plans can include sources of support, self-soothing activities, reasons for living (such as commitment to family or pets), safe people to call and safe places to go. When a person is feeling acutely distressed and overwhelmed by suicidal thoughts, it can be helpful to refer back to the safety plan, or to call a suicide helpline if the safety plan cannot be followed at that moment.

Mental health treatment

Comprehensive approaches to suicidality include stabilization and safety, assessment of risk factors, and ongoing management and problem-solving around minimizing risk factors and bolstering protective factors. During the acute phase, admission to a psychiatric ward or involuntary commitment may be used in an attempt to ensure client safety, but the least restrictive means possible should be used.
Treatment focuses on reducing suffering and enhancing coping skills, and involves treatment of any underlying illness. Axis I disorders (in DSM terms), particularly major depressive disorder, and axis II disorders, particularly borderline personality disorder, increase the risk of suicide. Individuals with co-occurring mental illness and substance use disorders are at increased risk compared to individuals with just one of the two disorders. While antidepressants may not directly decrease suicide risk in adults, they are in many cases effective at treating major depressive disorder, and as such are recommended for patients with depression. There is evidence that long-term lithium therapy reduces suicide in individuals with bipolar disorder or major depressive disorder. Electroconvulsive therapy (ECT), or shock therapy, rapidly decreases suicidal thinking. The choice of treatment approach is based on the patient's presenting symptoms and history. In cases where a patient is actively attempting suicide even while in a hospital ward, a fast-acting treatment such as ECT may be first-line. Ideally, families are involved in the ongoing support of the suicidal individual, and they can help to strengthen protective factors and problem-solve around risk factors. Both families and the suicidal person should be supported by health care providers to cope with the societal stigma surrounding mental illness and suicide. Attention should also be given to the suicidal person's cultural background, as this can aid in understanding protective factors and problem-solving approaches. Risk factors may also arise related to membership in an oppressed minority group. For instance, Aboriginal people may benefit from traditional Aboriginal healing techniques that facilitate a change in thinking, connection with tradition, and emotional expression. Psychotherapy, particularly cognitive behavioural therapy, is an important component in the management of suicide risk. According to a 2005 randomized controlled trial by Gregory Brown, Aaron Beck and others, cognitive therapy can reduce repeat suicide attempts by 50%.

Suicide prevention

Various suicide prevention strategies have been suggested by mental-health professionals:

Promoting mental resilience through optimism and connectedness.
Education about suicide, including risk factors, warning signs, and the availability of help.
Increasing the proficiency of health and welfare services in responding to people in need. This includes better training for health professionals and employing crisis-counseling organizations.
Reducing domestic violence, substance abuse, and divorce, which are long-term strategies to reduce many mental health problems.
Reducing access to convenient means of suicide (e.g. toxic substances, handguns, ropes/shoelaces).
Reducing the quantity of dosages supplied in packages of non-prescription medicines, e.g. aspirin.
Interventions targeted at high-risk groups.

Research

Research into suicide is published across a wide spectrum of journals dedicated to the biological, economic, psychological, medical, and social sciences. In addition to those, a few journals are exclusively devoted to the study of suicide (suicidology), most notably Crisis, Suicide and Life-Threatening Behavior, and the Archives of Suicide Research.
Biology and health sciences
Mental disorders
Health
3066350
https://en.wikipedia.org/wiki/Microstate%20%28statistical%20mechanics%29
Microstate (statistical mechanics)
In statistical mechanics, a microstate is a specific configuration of a system that describes the precise positions and momenta of all the individual particles or components that make up the system. Each microstate has a certain probability of occurring during the course of the system's thermal fluctuations. In contrast, the macrostate of a system refers to its macroscopic properties, such as its temperature, pressure, volume and density. Treatments of statistical mechanics define a macrostate as follows: a particular set of values of energy, number of particles, and volume of an isolated thermodynamic system is said to specify a particular macrostate of it. In this description, microstates appear as different possible ways the system can achieve a particular macrostate. A macrostate is characterized by a probability distribution of possible states across a certain statistical ensemble of all microstates. This distribution describes the probability of finding the system in a certain microstate. In the thermodynamic limit, the microstates visited by a macroscopic system during its fluctuations all have the same macroscopic properties. In a quantum system, the microstate is simply the value of the wave function.

Microscopic definitions of thermodynamic concepts

Statistical mechanics links the empirical thermodynamic properties of a system to the statistical distribution of an ensemble of microstates. All macroscopic thermodynamic properties of a system may be calculated from the partition function, which sums over all its microstates. At any moment a system is distributed across an ensemble of microstates, each labeled by $i$ and having a probability of occupation $p_i$ and an energy $E_i$. If the microstates are quantum-mechanical in nature, then these microstates form a discrete set as defined by quantum statistical mechanics, and $E_i$ is an energy level of the system.

Internal energy

The internal energy of the macrostate is the mean over all microstates of the system's energy:

$U = \langle E \rangle = \sum_i p_i E_i.$

This is a microscopic statement of the notion of energy associated with the first law of thermodynamics.

Entropy

For the more general case of the canonical ensemble, the absolute entropy depends exclusively on the probabilities of the microstates and is defined as

$S = -k_{\mathrm B} \sum_i p_i \ln p_i,$

where $k_{\mathrm B}$ is the Boltzmann constant. For the microcanonical ensemble, consisting of only those microstates with energy equal to the energy of the macrostate, this simplifies to

$S = k_{\mathrm B} \ln \Omega,$

with $\Omega$ the number of microstates. This form for entropy appears on Ludwig Boltzmann's gravestone in Vienna. The second law of thermodynamics describes how the entropy of an isolated system changes in time. The third law of thermodynamics is consistent with this definition, since zero entropy means that the macrostate of the system reduces to a single microstate.
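A minimal numerical sketch of these definitions (Python; the three-level system and its energies are invented purely for illustration) computes the mean energy and the Gibbs entropy from a set of occupation probabilities, and checks the microcanonical special case $S = k_{\mathrm B} \ln \Omega$:

```python
import numpy as np

kB = 1.380649e-23                       # Boltzmann constant, J/K

# Illustrative three-level system (energies in joules) at temperature T.
E = np.array([0.0, 1.0e-21, 2.0e-21])
T = 300.0

p = np.exp(-E / (kB * T))
p /= p.sum()                            # canonical occupation probabilities

U = np.sum(p * E)                       # internal energy: mean microstate energy
S = -kB * np.sum(p * np.log(p))         # Gibbs entropy over microstates
print(U, S)

# Microcanonical case: Omega equally probable microstates give S = kB ln(Omega).
Omega = 3
p_uniform = np.full(Omega, 1.0 / Omega)
S_uniform = -kB * np.sum(p_uniform * np.log(p_uniform))
assert np.isclose(S_uniform, kB * np.log(Omega))
```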
In this case, the internal energy of the system only changes due to a change of the system's energy levels. The microscopic, quantum definitions of heat and work are the following: $\delta Q = \sum_i E_i \, dp_i$ and $\delta W = \sum_i p_i \, dE_i$, so that $dU = d\big(\sum_i p_i E_i\big) = \delta Q + \delta W$. The two above definitions of heat and work are among the few expressions of statistical mechanics where the thermodynamic quantities defined in the quantum case find no analogous definition in the classical limit. The reason is that classical microstates are not defined in relation to a precise associated quantum microstate, which means that when work changes the total energy available for distribution among the classical microstates of the system, the energy levels (so to speak) of the microstates do not follow this change. The microstate in phase space Classical phase space The description of a classical system of F degrees of freedom may be stated in terms of a 2F-dimensional phase space, whose coordinate axes consist of the F generalized coordinates $q_i$ of the system, and its F generalized momenta $p_i$. The microstate of such a system will be specified by a single point in the phase space. But for a system with a huge number of degrees of freedom its exact microstate usually is not important. So the phase space can be divided into cells of the size $h_0 = \Delta q_i \, \Delta p_i$, each treated as a microstate. Now the microstates are discrete and countable and the internal energy U no longer has an exact value but is between U and U+δU, with $\delta U \ll U$. The number of microstates Ω that a closed system can occupy is proportional to its phase space volume: $\Omega(U) = \frac{1}{h_0^F} \int \mathbf{1}\big(U \le H(x) \le U+\delta U\big) \, \prod_{i=1}^{F} dq_i \, dp_i$, where $\mathbf{1}$ is an indicator function. It is 1 if the Hamilton function H(x) at the point x = (q, p) in phase space is between U and U+δU, and 0 if not. The constant $h_0^F$ makes Ω(U) dimensionless. For an ideal gas, $F = 3N$. In this description, the particles are distinguishable. If the position and momentum of two particles are exchanged, the new state will be represented by a different point in phase space. In this case a single point will represent a microstate. If a subset of M particles are indistinguishable from each other, then the M! possible permutations or possible exchanges of these particles will be counted as part of a single microstate. The set of possible microstates are also reflected in the constraints upon the thermodynamic system. For example, in the case of a simple gas of N particles with total energy U contained in a cube of volume V, in which a sample of the gas cannot be distinguished from any other sample by experimental means, a microstate will consist of the above-mentioned N! points in phase space, and the set of microstates will be constrained to have all position coordinates lie inside the box, and the momenta to lie on a hyperspherical surface of radius $\sqrt{2mU}$ in momentum coordinates. If, on the other hand, the system consists of a mixture of two different gases, samples of which can be distinguished from each other, say A and B, then the number of microstates is increased, since two points in which an A and B particle are exchanged in phase space are no longer part of the same microstate. Two particles that are identical may nevertheless be distinguishable based on, for example, their location. (See configurational entropy.) If the box contains identical particles, and is at equilibrium, and a partition is inserted, dividing the volume in half, particles in one box are now distinguishable from those in the second box.
In phase space, the N/2 particles in each box are now restricted to a volume V/2, and their energy restricted to U/2, and the number of points describing a single microstate will change: the phase space description is not the same. This has implications in both the Gibbs paradox and correct Boltzmann counting. With regard to Boltzmann counting, it is the multiplicity of points in phase space which effectively reduces the number of microstates and renders the entropy extensive. With regard to Gibbs paradox, the important result is that the increase in the number of microstates (and thus the increase in entropy) resulting from the insertion of the partition is exactly matched by the decrease in the number of microstates (and thus the decrease in entropy) resulting from the reduction in volume available to each particle, yielding a net entropy change of zero.
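The reduction of the canonical entropy formula to Boltzmann's $S = k_\text{B} \ln \Omega$ when all microstates are equally probable is easy to verify numerically. Below is a minimal Python sketch; the function name and the choice of 10^6 microstates are illustrative assumptions, not from the article:

    import math

    k_B = 1.380649e-23  # Boltzmann constant, J/K

    def gibbs_entropy(probs):
        """Absolute entropy S = -k_B * sum_i p_i * ln(p_i) of an ensemble
        of microstates with occupation probabilities p_i."""
        return -k_B * sum(p * math.log(p) for p in probs if p > 0.0)

    # Microcanonical case: W equally likely microstates, p_i = 1/W,
    # so the sum collapses to Boltzmann's S = k_B * ln(W).
    W = 10**6
    print(gibbs_entropy([1.0 / W] * W))  # ~1.907e-22 J/K
    print(k_B * math.log(W))             # same value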
Physical sciences
Statistical mechanics
Physics
3069677
https://en.wikipedia.org/wiki/Monkey
Monkey
Monkey is a common name that may refer to most mammals of the infraorder Simiiformes, also known as simians. Traditionally, all animals in the group now known as simians are counted as monkeys except the apes. Thus monkeys, in that sense, constitute an incomplete paraphyletic grouping; however, in the broader sense based on cladistics, apes (Hominoidea) are also included, making the terms monkeys and simians synonyms in regard to their scope. In 1812, Étienne Geoffroy grouped the apes and the Cercopithecidae group of monkeys together and established the name Catarrhini, "Old World monkeys" ("singes de l'Ancien Monde" in French). The extant sister of the Catarrhini in the monkey ("singes") group is the Platyrrhini (New World monkeys). Some nine million years before the divergence between the Cercopithecidae and the apes, the Platyrrhini emerged within "monkeys" by migration to South America, likely by ocean. Apes are thus deep in the tree of extant and extinct monkeys, and any of the apes is more closely related to the Cercopithecidae than the Platyrrhini are. Many monkey species are tree-dwelling (arboreal), although there are species that live primarily on the ground, such as baboons. Most species are mainly active during the day (diurnal). Monkeys are generally considered to be intelligent, especially the Old World monkeys. Within suborder Haplorhini, the simians are a sister group to the tarsiers – the two members diverged some 70 million years ago. New World monkeys and catarrhine monkeys emerged within the simians roughly 35 million years ago. Old World monkeys and apes emerged within the catarrhine monkeys about 25 million years ago. Extinct basal simians such as Aegyptopithecus or Parapithecus (35–32 million years ago) are also considered monkeys by primatologists. Lemurs, lorises, and galagos are not monkeys, but strepsirrhine primates (suborder Strepsirrhini). The simians' sister group, the tarsiers, are also haplorhine primates; however, they are also not monkeys. Apes emerged within monkeys as sister of the Cercopithecidae in the Catarrhini, so cladistically they are monkeys as well. However, there has been resistance to directly designating apes (and thus humans) as monkeys, so "Old World monkey" may be taken to mean either the Cercopithecoidea (not including apes) or the Catarrhini (including apes). That apes are monkeys was already realized by Georges-Louis Leclerc, Comte de Buffon in the 18th century. Linnaeus placed this group in 1758 together with the tarsiers, in a single genus "Simia" (sans Homo), an ensemble now recognised as the Haplorhini. Monkeys, including apes, can be distinguished from other primates by having only two pectoral nipples, a pendulous penis, and a lack of sensory whiskers. Historical and modern terminology According to the Online Etymology Dictionary, the word "monkey" may originate in a German version of the Reynard the Fox fable. In this version of the fable, a character named Moneke is the son of Martin the Ape. In English, no clear distinction was originally made between "ape" and "monkey"; thus the 1911 Encyclopædia Britannica entry for "ape" notes that it is either a synonym for "monkey" or is used to mean a tailless humanlike primate. Colloquially, the terms "monkey" and "ape" are widely used interchangeably. Also, a few monkey species have the word "ape" in their common name, such as the Barbary ape.
Later in the first half of the 20th century, the idea developed that there were trends in primate evolution and that the living members of the order could be arranged in a series, leading through "monkeys" and "apes" to humans. Monkeys thus constituted a "grade" on the path to humans and were distinguished from "apes". Scientific classifications are now more often based on monophyletic groups, that is groups consisting of all the descendants of a common ancestor. The New World monkeys and the Old World monkeys are each monophyletic groups, but their combination was not, since it excluded hominoids (apes and humans). Thus, the term "monkey" no longer referred to a recognized scientific taxon. The smallest accepted taxon which contains all the monkeys is the infraorder Simiiformes, or simians. However this also contains the hominoids, so that monkeys are, in terms of currently recognized taxa, non-hominoid simians. Colloquially and pop-culturally, the term is ambiguous and sometimes monkey includes non-human hominoids. In addition, frequent arguments are made for a monophyletic usage of the word "monkey" from the perspective that usage should reflect cladistics. Several science-fiction and fantasy stories have depicted non-human (fantastical or alien) antagonistic characters refer to humans as monkeys, usually in a derogatory manner, as a form of metacommentary. A group of monkeys may be commonly referred to as a tribe or a troop. Two separate groups of primates are referred to as "monkeys": New World monkeys (platyrrhines) from South and Central America and Old World monkeys (catarrhines in the superfamily Cercopithecoidea) from Africa and Asia. Apes (hominoids)—consisting of gibbons, orangutans, gorillas, chimpanzees and bonobos, and humans—are also catarrhines but were classically distinguished from monkeys. Tailless monkeys may be called "apes", incorrectly according to modern usage; thus the tailless Barbary macaque is historically called the "Barbary ape". Description As apes have emerged in the monkey group as sister of the old world monkeys, characteristics that describe monkeys are generally shared by apes as well. Williams et al. outlined evolutionary features, including in stem groupings, contrasted against the other primates such as the tarsiers and the lemuriformes. Monkeys range in size from the pygmy marmoset, which can be as small as with a tail and just over in weight, to the male mandrill, almost long and weighing up to . Some are arboreal (living in trees) while others live on the savanna; diets differ among the various species but may contain any of the following: fruit, leaves, seeds, nuts, flowers, eggs and small animals (including insects and spiders). Some characteristics are shared among the groups; most New World monkeys have long tails, with those in the Atelidae family being prehensile, while Old World monkeys have non-prehensile tails or no visible tail at all. Old World monkeys have trichromatic color vision like that of humans, while New World monkeys may be trichromatic, dichromatic, or—as in the owl monkeys and greater galagos—monochromatic. Although both the New and Old World monkeys, like the apes, have forward-facing eyes, the faces of Old World and New World monkeys look very different, though again, each group shares some features such as the types of noses, cheeks and rumps. Classification The following list shows where the various monkey families (bolded) are placed in the classification of living (extant) primates. 
Order Primates
 Suborder Strepsirrhini: lemurs, lorises, and galagos
 Suborder Haplorhini: tarsiers, monkeys, and apes
  Infraorder Tarsiiformes
   Family Tarsiidae: tarsiers
  Infraorder Simiiformes: simians
   Parvorder Platyrrhini: New World monkeys
    Family Callitrichidae: marmosets and tamarins (42 species)
    Family Cebidae: capuchins and squirrel monkeys (14 species)
    Family Aotidae: night monkeys (11 species)
    Family Pitheciidae: titis, sakis, and uakaris (41 species)
    Family Atelidae: howler, spider, and woolly monkeys (24 species)
   Parvorder Catarrhini
    Superfamily Cercopithecoidea
     Family Cercopithecidae: Old World monkeys (135 species)
    Superfamily Hominoidea: apes
     Family Hylobatidae: gibbons ("lesser apes") (20 species)
     Family Hominidae: great apes (including humans, gorillas, chimpanzees, and orangutans) (8 species)
Cladogram with extinct families Below is a cladogram with some extinct monkey families. Generally, extinct non-hominoid simians, including early catarrhines are discussed as monkeys as well as simians or anthropoids, which cladistically means that Hominoidea are monkeys as well, restoring monkeys as a single grouping. It is indicated approximately how many million years ago (Mya) the clades diverged into newer clades. It is thought the New World monkeys started as a drifted "Old World monkey" group from the Old World (probably Africa) to the New World (South America). Relationship with humans The many species of monkey have varied relationships with humans. Some are kept as pets, others used as model organisms in laboratories or in space missions. They may be killed in monkey drives (when they threaten agriculture) or used as service animals for the disabled. In some areas, some species of monkey are considered agricultural pests, and can cause extensive damage to commercial and subsistence crops. This can have important implications for the conservation of endangered species, which may be subject to persecution. In some instances farmers' perceptions of the damage may exceed the actual damage. Monkeys that have become habituated to human presence in tourist locations may also be considered pests, attacking tourists. Public exhibition Many zoos have maintained a facility in which monkeys and other primates are kept within enclosures for public entertainment. Commonly known as a monkey house (primatarium), sometimes styled Monkey House, notable examples include London Zoo's Monkey Valley; Zoo Basel's Monkey house/exhibit; the Monkey Tropic House at Krefeld Zoo; Bronx Zoo's Monkey House; Monkey Jungle, Florida; Lahore Zoo's Monkey House; Monkey World, Dorset, England; and Edinburgh Zoo's Monkey House. The former cinema The Scala, Kings Cross, spent a short time as a primatarium. As service animals for disabled people Some organizations train capuchin monkeys as service animals to assist quadriplegics and other people with severe spinal cord injuries or mobility impairments. After being socialized in a human home as infants, the monkeys undergo extensive training before being placed with disabled people. Around the house, the monkeys assist with daily tasks such as feeding, fetching, manipulating objects, and personal care. Helper monkeys are usually trained in schools by private organizations, taking seven years to train, and are able to serve 25–30 years (two to three times longer than a guide dog). In 2010, the U.S. federal government revised its definition of service animal under the Americans with Disabilities Act (ADA).
Non-human primates are no longer recognized as service animals under the ADA. The American Veterinary Medical Association does not support the use of non-human primates as assistance animals because of animal welfare concerns, the potential for serious injury to people, and risks that primates may transfer dangerous diseases to humans. In experiments The most common monkey species found in animal research are the grivet, the rhesus macaque, and the crab-eating macaque, which are either wild-caught or purpose-bred. They are used primarily because of their relative ease of handling, their fast reproductive cycle (compared to apes) and their psychological and physical similarity to humans. Worldwide, it is thought that between 100,000 and 200,000 non-human primates are used in research each year, 64.7% of which are Old World monkeys, and 5.5% New World monkeys. This number makes up a very small fraction of all animals used in research. Between 1994 and 2004, the United States used an average of 54,000 non-human primates, while around 10,000 non-human primates were used in the European Union in 2002. In space A number of countries have used monkeys as part of their space exploration programmes, including the United States and France. The first monkey in space was Albert II, who flew on the US-launched V-2 rocket on June 14, 1949. As food Monkey brains are eaten as a delicacy in parts of South Asia, Africa and China. Monkeys are sometimes eaten in parts of Africa, where they can be sold as "bushmeat". In traditional Islamic dietary laws, the eating of monkeys is forbidden. Literature Sun Wukong (the "Monkey King"), a character who figures prominently in Chinese mythology, is the protagonist in the classic Chinese novel Journey to the West. Monkeys are prevalent in numerous books, television programs, and movies. The television series Monkey and the literary characters Monsieur Eek and Curious George are all examples. Informally, "monkey" may refer to apes, particularly chimpanzees, gibbons, and gorillas. Author Terry Pratchett alludes to this difference in usage in his Discworld novels, in which the Librarian of the Unseen University is an orangutan who gets very violent if referred to as a monkey. Another example is the use of simians in Chinese poetry. The winged monkeys are prominent characters in L. Frank Baum's Wizard of Oz books and in the 1939 film based on Baum's 1900 novel The Wonderful Wizard of Oz. Religion and worship The monkey is the symbol of the fourth Tirthankara in Jainism, Abhinandananatha. Hanuman, a prominent deity in Hinduism, is a human-like monkey god who is believed to bestow courage, strength and longevity to the person who thinks about him or Rama. In Buddhism, the monkey is an early incarnation of Buddha but may also represent trickery and ugliness. The Chinese Buddhist "mind monkey" metaphor refers to the unsettled, restless state of human mind. The monkey is also one of the Three Senseless Creatures, symbolizing greed, with the tiger representing anger and the deer lovesickness. The Sanzaru, or three wise monkeys, are revered in Japanese folklore; together they embody the proverbial principle to "see no evil, hear no evil, speak no evil". The Moche people of ancient Peru worshipped nature. They placed emphasis on animals and often depicted monkeys in their art. The Tzeltal people of Mexico worshipped monkeys as incarnations of their dead ancestors.
Zodiac The Monkey (猴) is the ninth in the twelve-year cycle of animals which appear in the Chinese zodiac related to the Chinese calendar.
Biology and health sciences
Primates
null
3070481
https://en.wikipedia.org/wiki/Nat%20%28unit%29
Nat (unit)
The natural unit of information (symbol: nat), sometimes also nit or nepit, is a unit of information or information entropy, based on natural logarithms and powers of e, rather than the powers of 2 and base 2 logarithms, which define the shannon. This unit is also known by its unit symbol, the nat. One nat is the information content of an event when the probability of that event occurring is 1/e. One nat is equal to $\frac{1}{\ln 2}$ shannons ≈ 1.44 Sh or, equivalently, $\frac{1}{\ln 10}$ hartleys ≈ 0.434 Hart. History Boulton and Wallace used the term nit in conjunction with minimum message length, which was subsequently changed by the minimum description length community to nat to avoid confusion with the nit used as a unit of luminance. Alan Turing used the natural ban. Entropy Shannon entropy (information entropy), being the expected value of the information of an event, is inherently a quantity of the same type and with a unit of information. The International System of Units, by assigning the same unit (joule per kelvin) both to heat capacity and to thermodynamic entropy, implicitly treats information entropy as a quantity of dimension one, with $1\ \text{nat} = k_\text{B}$. Systems of natural units that normalize the Boltzmann constant to 1 are effectively measuring thermodynamic entropy with the nat as unit. When the Shannon entropy is written using a natural logarithm, it is implicitly giving a number measured in nats.
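Because the three units differ only in logarithm base, conversions between them are fixed multiplicative factors. A minimal Python sketch (the function names are illustrative assumptions):

    import math

    def info_nats(p):
        """Information content, in nats, of an event of probability p."""
        return -math.log(p)  # natural logarithm

    def nats_to_shannons(x):
        return x / math.log(2)   # 1 nat = 1/ln 2 Sh, about 1.443 Sh

    def nats_to_hartleys(x):
        return x / math.log(10)  # 1 nat = 1/ln 10 Hart, about 0.434 Hart

    print(info_nats(1 / math.e))   # 1.0: an event with p = 1/e carries one nat
    print(nats_to_shannons(1.0))   # ~1.4427
    print(nats_to_hartleys(1.0))   # ~0.4343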
Physical sciences
Information
Basics and measurement
13485805
https://en.wikipedia.org/wiki/Radio-frequency%20engineering
Radio-frequency engineering
Radio-frequency (RF) engineering is a subset of electrical engineering involving the application of transmission line, waveguide, antenna, radar, and electromagnetic field principles to the design and application of devices that produce or use signals within the radio band, the frequency range of about 20 kHz up to 300 GHz. It is incorporated into almost everything that transmits or receives a radio wave, which includes, but is not limited to, mobile phones, radios, Wi-Fi, and two-way radios. RF engineering is a highly specialized field that typically includes the following areas of expertise: Design of antenna systems to provide radiative coverage of a specified geographical area by an electromagnetic field or to provide specified sensitivity to an electromagnetic field impinging on the antenna. Design of coupling and transmission line structures to transport RF energy without radiation. Application of circuit elements and transmission line structures in the design of oscillators, amplifiers, mixers, detectors, combiners, filters, impedance transforming networks and other devices. Verification and measurement of performance of radio frequency devices and systems. To produce quality results, the RF engineer needs to have an in-depth knowledge of mathematics, physics and general electronics theory as well as specialized training in areas such as wave propagation, impedance transformations, filters and microstrip printed circuit board design. Radio electronics Radio electronics is concerned with electronic circuits which receive or transmit radio signals. Typically, such circuits must operate at radio frequency and power levels, which imposes special constraints on their design. These constraints increase in their importance with higher frequencies. At microwave frequencies, the reactance of signal traces becomes a crucial part of the physical layout of the circuit. List of radio electronics topics: RF oscillators: Phase-locked loop, voltage-controlled oscillator Transmitters, transmission lines, transmission line tuners, RF connectors Antennas, antenna theory Receivers, tuners Amplifiers Modulators, demodulators, detectors RF filters RF shielding, ground plane Direct-sequence spread spectrum (DSSS), noise power Digital radio RF power amplifiers Metal–oxide–semiconductor field-effect transistor (MOSFET)s: Power MOSFET, Laterally-diffused metal-oxide semiconductor (LDMOS) Bipolar junction transistors Baseband processors (Complementary metal–oxide–semiconductor (CMOS)) RF CMOS (mixed-signal integrated circuits) Duties Radio-frequency engineers are specialists in their respective field and can take on many different roles, such as design, installation, and maintenance. Radio-frequency engineers require many years of extensive experience in the area of study. This type of engineer has experience with transmission systems, device design, and placement of antennas for optimum performance. The RF engineer job description at a broadcast facility can include maintenance of the station's high-power broadcast transmitters and associated systems. This includes transmitter site emergency power, remote control, main transmission line and antenna adjustments, microwave radio relay STL/TSL links, and more. In addition, a radio-frequency design engineer must be able to understand electronic hardware design, circuit board material, antenna radiation, and the effect of interfering frequencies that prevent optimum performance within the piece of equipment being developed. 
Mathematics There are many applications of electromagnetic theory to radio-frequency engineering, using conceptual tools such as vector calculus and complex analysis. Topics studied in this area include waveguides and transmission lines, the behavior of radio antennas, and the propagation of radio waves through the Earth's atmosphere. Historically, the subject played a significant role in the development of nonlinear dynamics.
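As a small illustration of the kind of calculation that recurs throughout this field, the Python sketch below evaluates the free-space wavelength λ = c/f and the standard free-space path loss FSPL = 20·log10(4πdf/c) dB; the chosen frequency and distance are arbitrary example values, not taken from the article:

    import math

    C = 299_792_458.0  # speed of light in vacuum, m/s

    def wavelength_m(freq_hz):
        """Free-space wavelength: lambda = c / f."""
        return C / freq_hz

    def fspl_db(distance_m, freq_hz):
        """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
        return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / C)

    f = 2.4e9                  # a 2.4 GHz link, e.g. Wi-Fi
    print(wavelength_m(f))     # ~0.125 m
    print(fspl_db(100.0, f))   # ~80 dB over 100 m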
Technology
Disciplines
null
1575813
https://en.wikipedia.org/wiki/Series%20expansion
Series expansion
In mathematics, a series expansion is a technique that expresses a function as an infinite sum, or series, of simpler functions. It is a method for calculating a function that cannot be expressed by just elementary operators (addition, subtraction, multiplication and division). The resulting so-called series often can be limited to a finite number of terms, thus yielding an approximation of the function. The fewer terms of the sequence are used, the simpler this approximation will be. Often, the resulting inaccuracy (i.e., the partial sum of the omitted terms) can be described by an equation involving Big O notation (see also asymptotic expansion). The series expansion on an open interval will also be an approximation for non-analytic functions. Types of series expansions There are several kinds of series expansions, listed below. Taylor series A Taylor series is a power series based on a function's derivatives at a single point. More specifically, if a function $f$ is infinitely differentiable around a point $a$, then the Taylor series of f around this point is given by $\sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x-a)^n$ under the convention $0^0 := 1$. The Maclaurin series of f is its Taylor series about $a = 0$. Laurent series A Laurent series is a generalization of the Taylor series, allowing terms with negative exponents; it takes the form $\sum_{k=-\infty}^{\infty} c_k (z-a)^k$ and converges in an annulus. In particular, a Laurent series can be used to examine the behavior of a complex function near a singularity by considering the series expansion on an annulus centered at the singularity. Dirichlet series A general Dirichlet series is a series of the form $\sum_{n=1}^{\infty} a_n e^{-\lambda_n s}$. One important special case of this is the ordinary Dirichlet series $\sum_{n=1}^{\infty} \frac{a_n}{n^s}$, used in number theory. Fourier series A Fourier series is an expansion of periodic functions as a sum of many sine and cosine functions. More specifically, the Fourier series of a function $f(x)$ of period $2L$ is given by the expression $\frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos \frac{n\pi x}{L} + b_n \sin \frac{n\pi x}{L} \right)$, where the coefficients are given by the formulae $a_n = \frac{1}{L} \int_{-L}^{L} f(x) \cos \frac{n\pi x}{L}\, dx$ and $b_n = \frac{1}{L} \int_{-L}^{L} f(x) \sin \frac{n\pi x}{L}\, dx$. Other series In acoustics, e.g., the fundamental tone and the overtones together form an example of a Fourier series. Newtonian series Legendre polynomials: Used in physics to describe an arbitrary electrical field as a superposition of a dipole field, a quadrupole field, an octupole field, etc. Zernike polynomials: Used in optics to calculate aberrations of optical systems. Each term in the series describes a particular type of aberration. The Stirling series $\ln \Gamma(x) \sim \left(x - \tfrac{1}{2}\right)\ln x - x + \tfrac{1}{2}\ln(2\pi) + \sum_{n=1}^{\infty} \frac{B_{2n}}{2n(2n-1)x^{2n-1}}$ is an approximation of the log-gamma function. Examples The following is the Taylor series of $e^x$: $e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots$ The Dirichlet series of the Riemann zeta function is $\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s} = 1 + \frac{1}{2^s} + \frac{1}{3^s} + \cdots$
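Truncating a series after finitely many terms yields the approximation described above, and the error shrinks as more terms are kept. A minimal Python sketch for the exponential series (the function name is an illustrative assumption):

    import math

    def exp_taylor(x, n_terms):
        """Partial sum of the Maclaurin series e^x = sum_{n>=0} x^n / n!."""
        total, term = 0.0, 1.0       # term starts at x^0 / 0! = 1
        for n in range(n_terms):
            total += term
            term *= x / (n + 1)      # next term: x^(n+1) / (n+1)!
        return total

    for n in (2, 5, 10, 15):
        approx = exp_taylor(1.0, n)
        print(n, approx, abs(approx - math.e))  # error decays rapidly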
Mathematics
Basics_2
null
1575825
https://en.wikipedia.org/wiki/Hyperbolic%20partial%20differential%20equation
Hyperbolic partial differential equation
In mathematics, a hyperbolic partial differential equation of order $n$ is a partial differential equation (PDE) that, roughly speaking, has a well-posed initial value problem for the first $n-1$ derivatives. More precisely, the Cauchy problem can be locally solved for arbitrary initial data along any non-characteristic hypersurface. Many of the equations of mechanics are hyperbolic, and so the study of hyperbolic equations is of substantial contemporary interest. The model hyperbolic equation is the wave equation. In one spatial dimension, this is $\frac{\partial^2 u}{\partial t^2} = c^2 \frac{\partial^2 u}{\partial x^2}$. The equation has the property that, if $u$ and its first time derivative are arbitrarily specified initial data on the line $t = 0$ (with sufficient smoothness properties), then there exists a solution for all time $t$. The solutions of hyperbolic equations are "wave-like". If a disturbance is made in the initial data of a hyperbolic differential equation, then not every point of space feels the disturbance at once. Relative to a fixed time coordinate, disturbances have a finite propagation speed. They travel along the characteristics of the equation. This feature qualitatively distinguishes hyperbolic equations from elliptic partial differential equations and parabolic partial differential equations. A perturbation of the initial (or boundary) data of an elliptic or parabolic equation is felt at once by essentially all points in the domain. Although the definition of hyperbolicity is fundamentally a qualitative one, there are precise criteria that depend on the particular kind of differential equation under consideration. There is a well-developed theory for linear differential operators, due to Lars Gårding, in the context of microlocal analysis. Nonlinear differential equations are hyperbolic if their linearizations are hyperbolic in the sense of Gårding. There is a somewhat different theory for first order systems of equations coming from systems of conservation laws. Definition A partial differential equation is hyperbolic at a point $P$ provided that the Cauchy problem is uniquely solvable in a neighborhood of $P$ for any initial data given on a non-characteristic hypersurface passing through $P$. Here the prescribed initial data consist of all (transverse) derivatives of the function on the surface up to one less than the order of the differential equation. Examples By a linear change of variables, any equation of the form $A u_{xx} + 2B u_{xy} + C u_{yy} + (\text{lower order terms}) = 0$ with $B^2 - AC > 0$ can be transformed to the wave equation, apart from lower order terms which are inessential for the qualitative understanding of the equation. This definition is analogous to the definition of a planar hyperbola. The one-dimensional wave equation $\frac{\partial^2 u}{\partial t^2} - c^2 \frac{\partial^2 u}{\partial x^2} = 0$ is an example of a hyperbolic equation. The two-dimensional and three-dimensional wave equations also fall into the category of hyperbolic PDE. This type of second-order hyperbolic partial differential equation may be transformed to a hyperbolic system of first-order differential equations. Hyperbolic systems of first-order equations The following is a system of $s$ first-order partial differential equations for $s$ unknown functions $\vec{u} = (u_1, \ldots, u_s)$, $\vec{u} = \vec{u}(\vec{x}, t)$, where $\vec{x} \in \mathbb{R}^d$: $$\frac{\partial \vec{u}}{\partial t} + \sum_{j=1}^{d} \frac{\partial}{\partial x_j} \vec{f}^{\,j}(\vec{u}) = 0, \qquad (*)$$ where $\vec{f}^{\,j} \in C^1(\mathbb{R}^s; \mathbb{R}^s)$ are once continuously differentiable functions, nonlinear in general. Next, for each $\vec{f}^{\,j}$ define the Jacobian matrix $A^j := \left( \frac{\partial f^j_i}{\partial u_k} \right)_{i,k = 1, \ldots, s}$. The system $(*)$ is hyperbolic if for all $\alpha_1, \ldots, \alpha_d \in \mathbb{R}$ the matrix $A := \alpha_1 A^1 + \cdots + \alpha_d A^d$ has only real eigenvalues and is diagonalizable. If the matrix $A$ has $s$ distinct real eigenvalues, it follows that it is diagonalizable. In this case the system $(*)$ is called strictly hyperbolic. If the matrix $A$ is symmetric, it follows that it is diagonalizable and the eigenvalues are real.
In this case the system $(*)$ is called symmetric hyperbolic. Hyperbolic system and conservation laws There is a connection between a hyperbolic system and a conservation law. Consider a hyperbolic system of one partial differential equation for one unknown function $u = u(\vec{x}, t)$. Then the system $(*)$ has the form $$\frac{\partial u}{\partial t} + \sum_{j=1}^{d} \frac{\partial}{\partial x_j} f^j(u) = 0.$$ Here, $u$ can be interpreted as a quantity that moves around according to the flux given by $\vec{f} = (f^1, \ldots, f^d)$. To see that the quantity $u$ is conserved, integrate $(*)$ over a domain $\Omega$. If $u$ and $\vec{f}$ are sufficiently smooth functions, we can use the divergence theorem and change the order of the integration and $\partial/\partial t$ to get a conservation law for the quantity $u$ in the general form $$\frac{d}{dt} \int_{\Omega} u \, dV + \int_{\partial \Omega} \vec{f}(u) \cdot \vec{n} \, dS = 0,$$ which means that the time rate of change of $u$ in the domain $\Omega$ is equal to the net flux of $u$ through its boundary $\partial\Omega$. Since this is an equality, it can be concluded that $u$ is conserved within $\Omega$.
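The finite propagation speed along characteristics is exactly what numerical methods for hyperbolic equations must respect (the CFL condition). Below is a minimal first-order upwind sketch in Python for the simplest hyperbolic problem, the linear advection equation u_t + a u_x = 0 with a > 0; the grid parameters and the initial bump are illustrative choices, not from the article:

    import math

    a = 1.0                  # constant wave speed
    nx = 200
    dx = 1.0 / nx
    dt = 0.8 * dx / a        # CFL number 0.8 <= 1 ensures stability

    # smooth initial bump centred at x = 0.3, periodic boundaries
    u = [math.exp(-200.0 * (i * dx - 0.3) ** 2) for i in range(nx)]

    for _ in range(100):
        # upwind difference: with a > 0, information arrives from the left
        u = [u[i] - a * dt / dx * (u[i] - u[i - 1]) for i in range(nx)]

    # the bump has been transported right by a * 100 * dt = 0.4
    peak = max(range(nx), key=lambda i: u[i])
    print(peak * dx)         # ~0.7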
Mathematics
Differential equations
null
1576576
https://en.wikipedia.org/wiki/Coywolf
Coywolf
A coywolf is a canid hybrid descended from coyotes (Canis latrans), eastern wolves (Canis lycaon), gray wolves (Canis lupus), and dogs (Canis familiaris). All of these species are members of the genus Canis with 78 chromosomes; they therefore can interbreed. One genetic study indicates that these species genetically diverged relatively recently (around 55,000–117,000 years ago). Genomic studies indicate that nearly all North American gray wolf populations possess some degree of admixture with coyotes following a geographic cline, with the lowest levels occurring in Alaska, and the highest in Ontario and Quebec, as well as Atlantic Canada. Another term sometimes used for these hybrids is wolfote. Description Hybrids of any combination tend to be larger than coyotes but smaller than wolves; they show behaviors intermediate between those of coyotes and of the other parent species. In one captive hybrid experiment, six F1 hybrid pups from a male northwestern gray wolf and a female coyote were measured shortly after birth, and their average weights, total lengths, head lengths, body lengths, hind foot lengths, shoulder circumferences, and head circumferences were compared with those of pure coyote pups at birth. Despite being delivered by a female coyote, the hybrid pups at birth were much larger and heavier than regular coyote pups born and measured around the same time. At six months of age, these hybrids were closely monitored at the Wildlife Science Center. Executive Director Peggy Callahan at the facility states that the howls of these hybrids start off much like those of regular gray wolves, with a deep, strong vocalization, but change partway into the high-pitched yipping of a coyote. Compared with pure coyotes, eastern wolf × coyote hybrids form more cooperative social groups and are generally less aggressive with each other while playing. Hybrids also reach sexual maturity when they are two years old, which is much later than occurs in pure coyotes. Varieties Eastern coyotes The eastern coyote's range spans New England, New York, New Jersey, Pennsylvania, Ohio, West Virginia, Maryland, Delaware, and Virginia. Their range also occurs in the Canadian provinces of Ontario, Quebec, New Brunswick, Nova Scotia, Prince Edward Island and Newfoundland and Labrador. Coyotes and wolves hybridized in the Great Lakes region, followed by an eastern coyote expansion, creating the largest mammalian hybrid zone known. Extensive hunting of gray wolves over a period of 400 years caused a population decline that reduced the number of suitable mates, thus facilitating the swamping of coyote genes into the eastern wolf population. This has caused concern over the purity of remaining wolves in the area, and the resulting eastern coyotes are too small to substitute for pure wolves as apex predators of moose and deer. The main nucleus of pure eastern wolves is currently concentrated within Algonquin Provincial Park. This susceptibility to hybridization led to the eastern wolf being listed as Special Concern under the Canadian Committee on the Status of Endangered Wildlife and with the Committee on the Status of Species at Risk in Ontario. By 2001, protection was extended to eastern wolves occurring on the outskirts of the park, thus no longer depriving Park eastern wolves of future pure-blooded mates. By 2012, the genetic composition of the park's eastern wolves was roughly restored to what it was in the mid-1960s, rather than in the 1980s–1990s, when the majority of wolves had large amounts of coyote DNA.
Aside from the combinations of coyotes and eastern wolves making up most of the modern day eastern coyote's gene pools, some of the coyotes in the northeastern United States have mild domestic dog (C. lupus familiaris) and western Great Plains gray wolf (C. l. nubilus) influences in their gene pool. This suggests that the eastern coyote is actually a four-in-one hybrid of coyotes, eastern wolves, western gray wolves, and dogs. The hybrids living in areas with higher white-tailed deer density often have higher degrees of wolf genes than those living in urban environments. The addition of domestic dog genes may have played a minor role in facilitating the eastern hybrids' adaptability to survive in human-developed areas. The four-in-one hybrid theory was further explored in 2014, when Monzón and his team reanalyzed the tissue and SNP samples taken from 425 eastern coyotes to determine the degree of wolf and dog introgressions involved in each geographic range. The domestic dog allele averages 10% of the eastern coyote's genepool, while 26% is contributed by a cluster of both eastern wolves and western gray wolves. The remaining 64% matched mostly with coyotes. This analysis suggested that prior to the uniformity of its modern-day genetic makeup, multiple swarms of genetic exchanges between the coyotes, feral dogs, and the two distinct wolf populations present in the Great Lakes region may have occurred. Urban environments often favor coyote genes, while the ones in the rural and deep forest areas maintain higher levels of wolf content. A 2016 meta-analysis of 25 genetics studies from 1995 to 2013 found that the northeastern coywolf is 60% western coyote, 30% eastern wolf, and 10% domestic dog. However, this hybrid canid is only now coming into contact with the southern wave of coyote migration into the southern United States. Red wolves and eastern wolves The taxonomy of the red and eastern wolf of the Southeastern United States and the Great Lakes regions, respectively, has been long debated, with various schools of thought advocating that they represent either unique species or results of varying degrees of gray wolf × coyote admixture. In May 2011, an examination of 48,000 single nucleotide polymorphisms in red wolves, eastern wolves, gray wolves, and dogs indicated that the red and eastern wolves were hybrid species, with the red wolf being 76% coyote and 20% gray wolf, and the eastern wolf being 58% gray wolf and 42% coyote, finding no evidence of being distinct species in either. The study was criticized for having used red wolves with recent coyote ancestry, and a reanalysis in 2012 indicated that it suffered from insufficient sampling. A comprehensive review in 2012 further argued that the study's dog samples were unrepresentative of the species' global diversity, having been limited to boxers and poodles, and that the red wolf samples came from modern rather than historical specimens. The review was itself criticized by a panel of scientists selected for an independent peer review of its findings by the USFWS, which noted that the study's conclusion that the eastern wolf was a full species was based on insufficient evidence — just two unique nonrecombining markers. In 2016, a whole-genome DNA study suggested that all of the North American canids, both wolves and coyotes, diverged from a common ancestor 6,000–117,000 years ago. 
The whole-genome sequence analysis shows that two endemic species of North American wolf, the red wolf and eastern wolf, are admixtures of the coyote and gray wolf. Mexican wolf × coyote hybrids In a study that analyzed the molecular genetics of coyotes, as well as samples of historical red wolves and Mexican wolves from Texas, a few coyote genetic markers have been found in the historical samples of some isolated Mexican wolf individuals. Likewise, gray wolf Y chromosomes have also been found in a few individual male Texan coyotes. This study suggested that although the Mexican wolf is generally less prone to hybridizations with coyotes, exceptional genetic exchanges with the Texan coyotes may have occurred among individual gray wolves from historical remnants before the population was completely extirpated in Texas. The resulting hybrids would later on melt back into the coyote populations as the wolves disappeared. The same study discussed an alternative possibility that the red wolves, which also once overlapped with both species in central Texas, were involved in circuiting the gene flows between the coyotes and gray wolves, much like how the eastern wolf is suspected to have bridged gene flows between gray wolves and coyotes in the Great Lakes region, since direct hybridizations between coyotes and gray wolves is considered rare. In tests performed on a stuffed carcass of what was initially labelled a chupacabra, mitochondrial DNA analysis conducted by Texas State University showed that it was a coyote, though subsequent tests revealed that it was a coyote × gray wolf hybrid sired by a male Mexican wolf. Northwestern wolf × coyote hybrid experiment In 2013, the U.S. Department of Agriculture Wildlife Services conducted a captive-breeding experiment at their National Wildlife Research Center Predator Research Facility in Logan, Utah. Using gray wolves from British Columbia and western coyotes, they produced six hybrids, making this the first hybridization case between pure coyotes and northwestern wolves. The experiment, which used artificial insemination, was intended to determine whether or not the sperm of the larger gray wolves in the west was capable of fertilizing the egg cells of western coyotes. Aside from the historical hybridizations between coyotes and the smaller Mexican wolves in the south, as well as with eastern wolves and red wolves, gray wolves from the northwestern US and western provinces of Canada were not known to interbreed with coyotes in the wild, thus prompting the experiment. The six resulting hybrids included four males and two females. At six months of age, the hybrids were closely monitored and were shown to display both physical and behavioral characteristics from both species, as well as some physical similarities to the eastern wolves, whose status as a distinct wolf species or as a genetically distinct subspecies of the gray wolf is controversial. Regardless, the result of this experiment concluded that northwestern wolves, much like many other canids, are capable of hybridizing with coyotes. In 2015, a research team from the cell and microbiology department of Anoka-Ramsey Community College revealed that an F2 litter of two pups had been produced from two of the original hybrids. At the same time, despite the six F1's successful delivery from the same coyote, they were not all full siblings because multiple sperm from eight different northwestern wolves were used in their production. 
The successful production of the F2 litter, nonetheless, confirmed that hybrids of coyotes and northwestern wolves are just as fertile as hybrids of coyotes to eastern and red wolves. Both the F1 and F2 hybrids were found to be phenotypically intermediate between the western gray wolves and coyotes. Unlike the F1 hybrids, which were produced via artificial insemination, the F2 litter was produced from a natural breeding. The study also discovered through sequencing 16S ribosomal RNA encoding genes that the F1 hybrids all have an intestinal microbiome distinct from both parent species, but which was once reported to be present in some gray wolves. Moreover, analysis of their complementary DNA and ribosomal RNA revealed that the hybrids have very differential gene expressions compared to those in gray wolf controls. Coydogs Hybrids between coyotes and domestic dogs have been bred in captivity, dating to pre-Columbian Mexico. Other specimens were later produced by mammal biologists mostly for research purposes. Domestic dogs are included in the gray wolf species. Hence, coydogs are another biological sub-variation of hybrids between coyotes and gray wolves - the dog being considered a domesticated subspecies of Canis lupus.
Biology and health sciences
Canines
Animals
1576696
https://en.wikipedia.org/wiki/Reaction%20rate%20constant
Reaction rate constant
In chemical kinetics, a reaction rate constant or reaction rate coefficient ($k$) is a proportionality constant which quantifies the rate and direction of a chemical reaction by relating it with the concentration of reactants. For a reaction between reactants A and B to form a product C, $a\,\mathrm{A} + b\,\mathrm{B} \rightarrow c\,\mathrm{C}$, where A and B are reactants, C is a product, and a, b, and c are stoichiometric coefficients, the reaction rate is often found to have the form $r = k[\mathrm{A}]^m[\mathrm{B}]^n$. Here $k$ is the reaction rate constant that depends on temperature, and [A] and [B] are the molar concentrations of substances A and B in moles per unit volume of solution, assuming the reaction is taking place throughout the volume of the solution. (For a reaction taking place at a boundary, one would use moles of A or B per unit area instead.) The exponents m and n are called partial orders of reaction and are not generally equal to the stoichiometric coefficients a and b. Instead they depend on the reaction mechanism and can be determined experimentally. The sum of m and n, that is (m + n), is called the overall order of reaction. Elementary steps For an elementary step, there is a relationship between stoichiometry and rate law, as determined by the law of mass action. Almost all elementary steps are either unimolecular or bimolecular. For a unimolecular step the reaction rate is described by $r = k_1[\mathrm{A}]$, where $k_1$ is a unimolecular rate constant. Since a reaction requires a change in molecular geometry, unimolecular rate constants cannot be larger than the frequency of a molecular vibration. Thus, in general, a unimolecular rate constant has an upper limit of $k_1 \le \sim 10^{13}\ \mathrm{s}^{-1}$. For a bimolecular step the reaction rate is described by $r = k_2[\mathrm{A}][\mathrm{B}]$, where $k_2$ is a bimolecular rate constant. Bimolecular rate constants have an upper limit that is determined by how frequently molecules can collide, and the fastest such processes are limited by diffusion. Thus, in general, a bimolecular rate constant has an upper limit of $k_2 \le \sim 10^{10}\ \mathrm{M}^{-1}\,\mathrm{s}^{-1}$. For a termolecular step the reaction rate is described by $r = k_3[\mathrm{A}][\mathrm{B}][\mathrm{C}]$, where $k_3$ is a termolecular rate constant. There are few examples of elementary steps that are termolecular or higher order, due to the low probability of three or more molecules colliding in their reactive conformations and in the right orientation relative to each other to reach a particular transition state. There are, however, some termolecular examples in the gas phase. Most involve the recombination of two atoms or small radicals or molecules in the presence of an inert third body which carries off excess energy, such as O + O2 + M → O3 + M. One well-established example is the termolecular step 2 I + H2 → 2 HI in the hydrogen-iodine reaction. In cases where a termolecular step might plausibly be proposed, one of the reactants is generally present in high concentration (e.g., as a solvent or diluent gas). Relationship to other parameters For a first-order reaction (including a unimolecular one-step process), there is a direct relationship between the unimolecular rate constant and the half-life of the reaction: $t_{1/2} = \frac{\ln 2}{k_1}$. Transition state theory gives a relationship between the rate constant $k$ and the Gibbs free energy of activation $\Delta G^\ddagger$, a quantity that can be regarded as the free energy change needed to reach the transition state. In particular, this energy barrier incorporates both enthalpic and entropic changes that need to be achieved for the reaction to take place. The result from transition state theory is $k = \frac{k_\mathrm{B}T}{h}\, e^{-\Delta G^\ddagger/RT}$, where h is the Planck constant and R the molar gas constant.
As useful rules of thumb, a first-order reaction with a rate constant of $10^{-4}\ \mathrm{s}^{-1}$ will have a half-life (t1/2) of approximately 2 hours. For a one-step process taking place at room temperature, the corresponding Gibbs free energy of activation (ΔG‡) is approximately 23 kcal/mol. Dependence on temperature The Arrhenius equation is an elementary treatment that gives the quantitative basis of the relationship between the activation energy and the reaction rate at which a reaction proceeds. The rate constant as a function of thermodynamic temperature is then given by $k(T) = A e^{-E_a/RT}$, and the reaction rate by $r = A e^{-E_a/RT}[\mathrm{A}]^m[\mathrm{B}]^n$, where Ea is the activation energy, R is the gas constant, and m and n are experimentally determined partial orders in [A] and [B], respectively. Since at temperature T the molecules have energies according to a Boltzmann distribution, one can expect the proportion of collisions with energy greater than Ea to vary with $e^{-E_a/RT}$. The constant of proportionality A is the pre-exponential factor, or frequency factor (not to be confused here with the reactant A); it takes into consideration the frequency at which reactant molecules are colliding and the likelihood that a collision leads to a successful reaction. Here, A has the same dimensions as an (m + n)-order rate constant (see Units below). Another popular model that is derived using more sophisticated statistical mechanical considerations is the Eyring equation from transition state theory: $k(T) = \kappa \frac{k_\mathrm{B}T}{h} (c^{\ominus})^{1-M} e^{-\Delta G^\ddagger/RT}$, where ΔG‡ is the free energy of activation, a parameter that incorporates both the enthalpy and entropy change needed to reach the transition state. The temperature dependence of ΔG‡ is used to compute these parameters, the enthalpy of activation ΔH‡ and the entropy of activation ΔS‡, based on the defining formula ΔG‡ = ΔH‡ − TΔS‡. In effect, the free energy of activation takes into account both the activation energy and the likelihood of successful collision, while the factor kBT/h gives the frequency of molecular collision. The factor (c⊖)^(1−M) ensures the dimensional correctness of the rate constant when the transition state in question is bimolecular or higher. Here, c⊖ is the standard concentration, generally chosen based on the unit of concentration used (usually c⊖ = 1 mol L⁻¹ = 1 M), and M is the molecularity of the transition state. Lastly, κ, usually set to unity, is known as the transmission coefficient, a parameter which essentially serves as a "fudge factor" for transition state theory. The biggest difference between the two theories is that Arrhenius theory attempts to model the reaction (single- or multi-step) as a whole, while transition state theory models the individual elementary steps involved. Thus, they are not directly comparable, unless the reaction in question involves only a single elementary step. Finally, in the past, collision theory, in which reactants are viewed as hard spheres with a particular cross-section, provided yet another common way to rationalize and model the temperature dependence of the rate constant, although this approach has gradually fallen into disuse. The equation for the rate constant is similar in functional form to both the Arrhenius and Eyring equations: $k(T) = PZ e^{-\Delta E/RT}$, where P is the steric (or probability) factor, Z is the collision frequency, and ΔE is the energy input required to overcome the activation barrier. Of note, $Z \propto T^{1/2}$, making the temperature dependence of k different from both the Arrhenius and Eyring models.
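Both rules of thumb above follow directly from t1/2 = ln 2 / k and from inverting the Eyring equation (taking κ = 1 and a unimolecular transition state, so the (c⊖)^(1−M) factor drops out). A short Python check using standard physical constants:

    import math

    k_B = 1.380649e-23    # Boltzmann constant, J/K
    h   = 6.62607015e-34  # Planck constant, J*s
    R   = 8.314462618     # molar gas constant, J/(mol*K)

    k = 1e-4                            # first-order rate constant, s^-1
    print(math.log(2) / k / 3600.0)     # half-life ~1.9 h ("about 2 hours")

    # solve k = (k_B*T/h) * exp(-dG/(R*T)) for the activation free energy
    T = 298.15
    dG = R * T * math.log(k_B * T / (h * k))  # J/mol
    print(dG / 4184.0)                  # ~22.9 kcal/mol ("~23 kcal/mol")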
Comparison of models All three theories model the temperature dependence of k using an equation of the form $k(T) = C T^{\alpha} e^{-\Delta E/RT}$ for some constant C, where α = 0, 1/2, and 1 give Arrhenius theory, collision theory, and transition state theory, respectively, although the imprecise notion of ΔE, the energy needed to overcome the activation barrier, has a slightly different meaning in each theory. In practice, experimental data does not generally allow a determination to be made as to which is "correct" in terms of best fit. Hence, all three are conceptual frameworks that make numerous assumptions, both realistic and unrealistic, in their derivations. As a result, they are capable of providing different insights into a system. Units The units of the rate constant depend on the overall order of reaction. If concentration is measured in units of mol·L⁻¹ (sometimes abbreviated as M), then:
For order (m + n), the rate constant has units of mol^(1−(m+n))·L^((m+n)−1)·s⁻¹ (or M^(1−(m+n))·s⁻¹)
For order zero, the rate constant has units of mol·L⁻¹·s⁻¹ (or M·s⁻¹)
For order one, the rate constant has units of s⁻¹
For order two, the rate constant has units of L·mol⁻¹·s⁻¹ (or M⁻¹·s⁻¹)
For order three, the rate constant has units of L²·mol⁻²·s⁻¹ (or M⁻²·s⁻¹)
For order four, the rate constant has units of L³·mol⁻³·s⁻¹ (or M⁻³·s⁻¹)
Plasma and gases Calculation of rate constants of the processes of generation and relaxation of electronically and vibrationally excited particles is of significant importance. It is used, for example, in the computer simulation of processes in plasma chemistry or microelectronics. First-principles-based models should be used for such calculations; this can be done with the help of computer simulation software. Rate constant calculations Rate constants can be calculated for elementary reactions by molecular dynamics simulations. One possible approach is to calculate the mean residence time of the molecule in the reactant state. Although this is feasible for small systems with short residence times, this approach is not widely applicable as reactions are often rare events on the molecular scale. One simple approach to overcome this problem is Divided Saddle Theory. Other methods, such as the Bennett–Chandler procedure and Milestoning, have also been developed for rate constant calculations. Divided saddle theory The theory is based on the assumption that the reaction can be described by a reaction coordinate, and that we can apply the Boltzmann distribution at least in the reactant state. A new, especially reactive segment of the reactant, called the saddle domain, is introduced, and the rate constant is factored as $k = \alpha \cdot k_\mathrm{SD}$, where α is the conversion factor between the reactant state and saddle domain, while $k_\mathrm{SD}$ is the rate constant from the saddle domain. The first can be simply calculated from the free energy surface; the latter is easily accessible from short molecular dynamics simulations.
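For the residence-time approach mentioned above, the rate constant is estimated as the reciprocal of the mean time the system spends in the reactant state between reactive events. A toy Python sketch with invented residence times (purely illustrative, not data from the article):

    # residence times in the reactant state (ns), as would be harvested
    # from molecular dynamics trajectories; the values here are made up
    residence_times_ns = [12.1, 8.4, 15.3, 9.9, 11.0, 14.2]

    mean_tau = sum(residence_times_ns) / len(residence_times_ns)
    k_est = 1.0 / mean_tau   # estimated rate constant, ns^-1
    print(k_est)             # ~0.085 ns^-1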
Physical sciences
Kinetics
Chemistry
1576787
https://en.wikipedia.org/wiki/Allyl%20isothiocyanate
Allyl isothiocyanate
Allyl isothiocyanate (AITC) is a naturally occurring unsaturated isothiocyanate. The colorless oil is responsible for the pungent taste of cruciferous vegetables such as mustard, radish, horseradish, and wasabi. This pungency and the lachrymatory effect of AITC are mediated through the TRPA1 and TRPV1 ion channels. It is slightly soluble in water, but more soluble in most organic solvents. Biosynthesis and biological functions Allyl isothiocyanate can be obtained from the seeds of black mustard (Rhamphospermum nigrum) or brown Indian mustard (Brassica juncea). When these mustard seeds are broken, the enzyme myrosinase is released and acts on a glucosinolate known as sinigrin to give allyl isothiocyanate. This serves the plant as a defense against herbivores; since it is harmful to the plant itself, it is stored in the harmless form of the glucosinolate, separate from the myrosinase enzyme. When an animal chews the plant, the allyl isothiocyanate is released, repelling the animal. Human appreciation of the pungency is learned. The compound has been shown to strongly repel fire ants (Solenopsis invicta). AITC vapor is also used as an antimicrobial and shelf life extender in food packaging. Production and applications Allyl isothiocyanate is produced commercially by the reaction of allyl chloride and potassium thiocyanate: CH2=CHCH2Cl + KSCN → CH2=CHCH2NCS + KCl The product obtained in this fashion is sometimes known as synthetic mustard oil. Allyl thiocyanate isomerizes to the isothiocyanate: CH2=CHCH2SCN → CH2=CHCH2NCS Allyl isothiocyanate can also be liberated by dry distillation of the seeds. The product obtained in this fashion is known as volatile oil of mustard. It is used principally as a flavoring agent in foods. Synthetic allyl isothiocyanate is used as an insecticide, as an anti-mold agent, bactericide, and nematicide, and is used in certain cases for crop protection. It is also used in fire alarms for the deaf. Hydrolysis of allyl isothiocyanate gives allylamine. Safety Allyl isothiocyanate has an LD50 of 151 mg/kg and is a lachrymator (similar to tear gas or mace). Oncology Based on in vitro experiments and animal models, allyl isothiocyanate exhibits many of the desirable attributes of a cancer chemopreventive agent.
Physical sciences
Concepts: General
Chemistry
1577330
https://en.wikipedia.org/wiki/Portia%20%28spider%29
Portia (spider)
Portia is a genus of jumping spider that feeds on other spiders (i.e., they are araneophagic or arachnophagic). They are remarkable for their intelligent hunting behaviour, which suggests that they are capable of learning and problem solving, traits normally attributed to much larger animals. Taxonomy and evolution The genus was established in 1878 by German arachnologist Friedrich Karsch. The fringed jumping spider (Portia fimbriata) is the type species. Molecular phylogeny, a technique that compares the DNA of organisms to construct the tree of life, indicates that Portia is a member of a basal clade (i.e. quite similar to the ancestors of all jumping spiders) and that the Spartaeus, Phaeacius, and Holcolaetis genera are its closest relatives. Wanless divided the genus Portia into two species groups: the schultzi group, in which males' palps have a fixed tibial apophysis; and the kenti group, in which the apophysis of each palp in the males has a joint separated by a membrane. The schultzi group includes P. schultzi, P. africana, P. fimbriata, and P. labiata. At least some species of Portia are in the state of reproductive isolation: in a laboratory, male P. africana copulated with female P. labiata, but no eggs were laid; during all cases, the female P. labiata twisted and lunged in an attempt to bite. Some specimens found trapped in Oligocene amber were identified as related to Portia. Distribution and ecology The 17 described species are found in Africa, Australia, China, Madagascar, Malaysia, Myanmar, Nepal, India, the Philippines, Sri Lanka, Taiwan, and Vietnam. Portia are vulnerable to larger predators such as birds and frogs, which a Portia often cannot identify because of the predator's size. Some insects prey on Portia, for example, mantises, the assassin bugs Nagusta sp. indet. and Scipinnia repax (that is, Scipinia rapax ). Appearance Portia are relatively small spiders. For example, adult females of Portia africana are in body length and adult males are long. Intelligence Portia often hunt in ways that seem intelligent. All members of Portia have instinctive hunting tactics for their most common prey, but can improvise by trial and error against unfamiliar prey or in unfamiliar situations, and then remember the new approach. They exhibit spatial memory and object permanence, and are capable of trying out a behavior to obtain feedback regarding success or failure, and they can plan ahead (as it seems from their detouring behavior). Portia species can make detours to find the best attack angle against dangerous prey, even when the best detour takes a Portia out of visual contact with the prey, and sometimes the planned route leads to abseiling down a silk thread and biting the prey from behind. Such detours may take up to an hour, and a Portia usually picks the best route even if it needs to walk past an incorrect route. Nonetheless, they seem to be relatively slow thinkers, as is to be expected since they solve tactical problems by using brains vastly smaller than those of mammalian predators. Portia has a brain significantly smaller than the size of the head of a pin, and it likely has less than 100,000 neurons (for comparison, a mouse brain has about 70 million neurons and a human brain has 86 billion). Portia can distinguish their own draglines from conspecifics', recognizing self from others, and also discriminate between known and unknown spiders. Hunting techniques Their favorite prey appears to be web-building spiders between 10% and 200% of their own size. 
Portia looks like leaf detritus caught in a web, and this is often enough to fool web-building spiders, which have poor eyesight. When stalking web-building spiders, Portia try to make different patterns of vibrations in the web that aggressively mimic the struggle of a trapped insect or the courtship signals of a male spider, repeating any pattern that induces the intended prey to move towards the Portia. Portia fimbriata has been observed performing vibratory behaviour for three days until the victim decided to investigate. They time invasions of webs to coincide with light breezes that blur the vibrations that their approach causes in the target's web, and they back off if the intended victim responds belligerently. Other jumping spiders take detours, but Portia is unusual in its readiness to use long detours that break visual contact. Laboratory studies show that Portia learns very quickly how to overcome web-building spiders that neither it nor its ancestors would have met in the wild.

Portia's accurate visual recognition of potential prey is an important part of its hunting tactics. For example, in one part of the Philippines, local Portia spiders attack from the rear against the very dangerous spitting spiders, which themselves hunt jumping spiders. This appears to be an instinctive behaviour, as laboratory-reared Portia of this species do this the first time they encounter a spitting spider. On the other hand, they will use a head-on approach against spitting spiders that are carrying eggs. However, experiments that pitted Portia against "convincing" artificial spiders with arbitrary but consistent behaviour patterns showed that Portia's instinctive tactics are only starting points for a trial-and-error approach from which these spiders learn very quickly. Against other jumping spiders, which also have excellent vision, Portia may mimic fragments of leaf litter detritus. When close to biting range, Portia use different combat tactics against different prey spiders. When attacking unarmed prey, such as flies, they simply stalk and rush, and they also capture prey by means of sticky webs. Portia can also rely on movement cues to locate prey: when potential prey stands still to avoid detection, a Portia makes undirected leaps in its vicinity; the prey, believing itself to have been seen, then moves, providing the motion that allows Portia to see and attack it. Portia may also scavenge the corpses of dead arthropods they find, and consume nectar.

Social behaviour

Members of the species Portia africana have been observed living together and sharing prey. If a mature Portia male meets a sub-mature female, he will try to cohabit with her. P. labiata females can discriminate between the draglines of familiar and unfamiliar individuals of the same species, and between their own draglines and those of conspecifics. The ability to recognise individuals is a necessary prerequisite for social behaviour.

Vision

Portia species have complex eyes that support exceptional spatial acuity. They have eight eyes. Three pairs of eyes positioned along the sides of the cephalothorax (called the secondary eyes) have a combined field of view of almost 360° and serve primarily as movement detectors. A pair of forward-facing anterior median eyes (called the principal eyes) are adapted for colour vision and high spatial acuity.
The main eyes focus accurately on an object at distances from approximately to infinity, and in practice can see up to about . Like all jumping spiders, Portia can take in only a small visual field at one time with its main eyes, as the most acute part of a main eye can see all of a circle up to wide at away, or up to wide at away. Jumping spiders' main eyes can see from red to ultraviolet. The secondary eyes have low spatial resolving power but a wide field of view. The inter-receptor angles of Portia's eyes may be as small as 2.4 minutes of arc, which is only six times worse than in humans, and six times better than in the most acute insect eye. Its vision is also clearer in daylight than a cat's. P. africana relies on visual features of general morphology and colour (or relative brightness) when identifying prey types. P. schultzi's hunting is stimulated only by vision, and prey close by but hidden causes no response. P. fimbriata use visual cues to distinguish members of the same species from other salticids. Cross and Jackson (2014) suggest that P. africana is capable of mentally rotating visual objects held in its working memory. However, a Portia takes a relatively long time to see objects, possibly because getting a good image out of such small eyes is a complex process that requires a lot of scanning. This makes a Portia vulnerable to much larger predators such as birds, frogs, and mantises, which a Portia often cannot identify because of the predator's size.

Movement

When not hunting for prey or a mate, Portia species adopt a special posture, called the "cryptic rest posture", pulling their legs in close to the body and their palps back beside the chelicerae ("jaws"), which obscures the outlines of these appendages. When walking, most Portia species have a slow, "choppy" gait that preserves their concealment: pausing often and at irregular intervals; waving their legs continuously and their palps jerkily up and down; moving each appendage out of time with the others; and continuously varying the speed and timing. When disturbed, some Portia species are known to leap upwards, often from the cryptic rest pose and often over a wide trajectory. Usually the spider then either freezes, or runs about and then freezes.

Reproduction

Portia exhibits mating behaviour and strategies different from those of other jumping spiders. In most jumping spiders, males mount females to mate. The Portia male shows off his legs, extending them stiffly and shaking them to attract the female. The female then drums on the web. After the male mounts her, the female drops a dragline and they mate in mid-air. Mating with Portia spiders can occur on or off the web. The spider also practises cannibalism before and after copulation. The female usually twists and lunges at the mounted male (P. fimbriata, however, is an exception and does not usually exhibit such behaviour). If the male is killed before completing copulation, his sperm is removed and he is then eaten. If the male finishes mating before being killed, the sperm is kept for fertilisation and the male is eaten. A majority of males are killed during sexual encounters.

Health

Portia species have a life span of about 1.5 years. P. fimbriata can regenerate a lost limb about 7 days after moulting. Portia's palps and legs break off very easily, which may be a defence mechanism, and individuals are often seen with missing legs or palps.
Species

The genus contains 21 species, found in Africa, Asia, and Australia:

Portia africana (Simon, 1886) – West and Central Africa, Ethiopia
Portia albimana (Simon, 1900) – India to Vietnam
Portia assamensis Wanless, 1978 – India to Malaysia
Portia bawang (Xu, Peng & Li, 2021) – China (Hainan)
Portia crassipalpis (Peckham & Peckham, 1907) – Singapore, Indonesia (Borneo)
Portia erlangping (Xu, Peng & Li, 2021) – China
Portia fajing (Xu, Peng & Li, 2021) – China
Portia fimbriata (Doleschall, 1859) (type) – Nepal, India, Sri Lanka, Taiwan to Australia
Portia heteroidea Xie & Yin, 1991 – China
Portia hoggi Zabka, 1985 – Vietnam
Portia jianfeng Song & Zhu, 1998 – China
Portia labiata (Thorell, 1887) – Sri Lanka to China, Vietnam, Philippines, India
Portia orientalis Murphy & Murphy, 1983 – China (Hong Kong)
Portia quei Zabka, 1985 – China, Vietnam
Portia schultzi Karsch, 1878 – Central, East, and Southern Africa, Mayotte, Madagascar
Portia songi Tang & Yang, 1997 – China
Portia strandi Caporiacco, 1941 – Ethiopia
Portia taiwanica Zhang & Li, 2005 – Taiwan
Portia wui Peng & Li, 2002 – China
Portia xishan (Xu, Peng & Li, 2021) – China
Portia zhaoi Peng, Li & Chen, 2003 – China

In popular literature

Portia jumping spiders, as the dominant species evolving on a terraformed planet, feature prominently in the science fiction novel Children of Time by the writer Adrian Tchaikovsky.
Biology and health sciences
Spiders
Animals
1578728
https://en.wikipedia.org/wiki/Fancy%20pigeon
Fancy pigeon
Fancy pigeon refers to any breed of domestic pigeon, which is a domesticated form of the wild rock dove (Columba livia). They are bred by pigeon fanciers for various traits relating to size, shape, colour, and behaviour, and are often exhibited at pigeon shows, fairs, and other livestock exhibits. There are about 800 pigeon breeds; considering all regional varieties all over the world, there may be 1,100 breeds. The European list of fancy pigeons alone names about 500 breeds. No other domestic animal has branched out into such a variety of forms and colours. Charles Darwin is known to have crossbred fancy pigeons, particularly the ice pigeon, to study variation within species, this work coming three years before his groundbreaking publication, On the Origin of Species.

Pigeon showing

Pigeon fanciers from many countries exhibit their birds at local, inter-state or national shows and compete against one another for prizes. One typical country show in Australia in 2008 had hundreds of pigeons on display and prizes for the winners. In England, the Philoperisteron Society conducted annual shows in the mid-1800s; there was also a London Columbarian Society. The extensive variation among the breeds attracted the attention of Charles Darwin and played a major role in developing his ideas on evolution. Some fanciers organise exhibitions exclusively for pigeons; one held in Blackpool, run by the Royal Pigeon Racing Association, is annually attended by about 25,000 people and generates around £80,000 profit, which is donated to charity. The largest pigeon show is held in Nuremberg: the German National Pigeon Show, which had over 33,500 pigeons at the 2006 show. In the United States, there are hundreds of local, state and national pigeon clubs that sponsor shows. The largest shows are the National Young Bird Show, held in Louisville, Kentucky in October, and the National Pigeon Association's Grand National, held in a different city each year, usually in January.

Major breed families

This grouping system is adapted from the Australian Fancy Pigeons National Book of Standards. Consideration was given to the new UK standards book, which followed the German and European grouping. This version differs slightly from the European grouping; the following system is arbitrary and used solely for organising breed articles until a grouping can be accepted worldwide.

Asian feather and voice pigeons

This group includes breeds developed for extensive feathering that originated in the Asian region, as well as breeds cultivated for their trumpeting, or laughing, voice.

Fantail
Frillback
Jacobin
Lahore
Trumpeter
English Trumpeter

Colour pigeons

Most of these pigeons originate in Germany, and are sometimes listed as German Toys. There are many varieties, with a wide selection of colours and markings.

Archangel pigeon
Danish Suabian
Saxon Field Pigeon
Starling
Swallow
Thuringen Field Pigeon
Ice Pigeon

Frills and Owls

The word "frill" here relates to the reversed feathering on the chest of these varieties. This group is also noted for having short beaks.

Aachen Lacquer Shield Owl
African Owl
Chinese Owl
Italian Owl
Old German Owl
Oriental Frill
Turbit

Homer and Hen Pigeons

Homing pigeons

This group includes breeds originally developed for their homing ability, and includes show-type racing pigeons.

American Show Racer
Dragoon
English Carrier
German Beauty Homer
Homing pigeon

Pouters and Croppers

This group includes breeds developed for the ability to inflate their crops.
English Pouter
Brunner Pouter
Gaditano Pouter
Holle Cropper
Horseman Pouter
Norwich Cropper
Pigmy Pouter
Pouter
Voorburg Shield Cropper
Old German Cropper
Magpie Pouter

Exhibition Tumblers

This group originally consisted of flying/tumbler breeds, but has now been refined to include only purely ornamental/exhibition breeds.

English Long-faced Tumbler
English Short-faced Tumbler
Helmet
English Magpie
Nun
Bucharest Short-faced Tumbler

Flying Tumblers and Highfliers

This group is dual-purpose in that its members can be shown, but also retain acrobatic or sporting ability and can therefore be used in flying competitions. Flying tumbler varieties belong in this group. Although many varieties in this grouping have become primarily show varieties, they are still expected to display the characteristics of performing birds.

Armenian Tumbler
Australian Performing Tumbler
Danzig Highflyer
Donek
Roller
Tippler

Utility pigeons

This group includes breeds originally developed as sources of meat.

Carneau
French Mondain
King pigeon
American Giant Runt
Strasser pigeon
Biology and health sciences
Pigeons
Animals
1579423
https://en.wikipedia.org/wiki/Reaction%20quotient
Reaction quotient
In chemical thermodynamics, the reaction quotient (Qr or just Q) is a dimensionless quantity that provides a measure of the relative amounts of products and reactants present in a reaction mixture for a reaction with well-defined overall stoichiometry at a particular point in time. Mathematically, it is defined as the ratio of the activities (or molar concentrations) of the product species over those of the reactant species involved in the chemical reaction, taking the stoichiometric coefficients of the reaction into account as exponents of the concentrations. At equilibrium, the reaction quotient is constant over time and is equal to the equilibrium constant.

A general chemical reaction in which α moles of a reactant A and β moles of a reactant B react to give ρ moles of a product R and σ moles of a product S can be written as

\alpha\,\mathrm{A} + \beta\,\mathrm{B} \rightleftharpoons \rho\,\mathrm{R} + \sigma\,\mathrm{S}.

The reaction is written as an equilibrium even though, in many cases, it may appear that all of the reactants on one side have been converted to the other side. When any initial mixture of A, B, R, and S is made, and the reaction is allowed to proceed (either in the forward or reverse direction), the reaction quotient Qr, as a function of time t, is defined as

Q_r(t) = \frac{\{\mathrm{R}\}_t^{\rho}\,\{\mathrm{S}\}_t^{\sigma}}{\{\mathrm{A}\}_t^{\alpha}\,\{\mathrm{B}\}_t^{\beta}},

where {X}t denotes the instantaneous activity of a species X at time t. A compact general definition is

Q_r(t) = \prod_j a_j(t)^{\nu_j},

where ∏j denotes the product across all j-indexed variables, aj(t) is the activity of species j at time t, and νj is the stoichiometric number (the stoichiometric coefficient multiplied by +1 for products and −1 for starting materials).

Relationship to K (the equilibrium constant)

As the reaction proceeds with the passage of time, the species' activities, and hence the reaction quotient, change in a way that reduces the free energy of the chemical system. The direction of the change is governed by the Gibbs free energy of reaction by the relation

\Delta_r G = RT \ln\frac{Q_r}{K},

where K is a constant independent of initial composition, known as the equilibrium constant. The reaction proceeds in the forward direction (towards larger values of Qr) when ΔrG < 0, or in the reverse direction (towards smaller values of Qr) when ΔrG > 0. Eventually, as the reaction mixture reaches chemical equilibrium, the activities of the components (and thus the reaction quotient) approach constant values. The equilibrium constant is defined to be the asymptotic value approached by the reaction quotient, K = Qr(t → ∞), at which point ΔrG = 0. The timescale of this process depends on the rate constants of the forward and reverse reactions. In principle, equilibrium is approached asymptotically at t → ∞; in practice, equilibrium is considered to be reached when concentrations of the equilibrating species no longer change perceptibly with respect to the analytical instruments and methods used.

If a reaction mixture is initialized with all components having an activity of unity, that is, in their standard states, then Qr = 1 and ΔrG = ΔrG° = −RT ln K. This quantity, ΔrG°, is called the standard Gibbs free energy of reaction.

All reactions, regardless of how favorable, are equilibrium processes, though practically speaking, if no starting material is detected after a certain point by the particular analytical technique in question, the reaction is said to go to completion.

In biochemistry

In biochemistry, the reaction quotient is often referred to as the mass-action ratio, with the symbol Γ.
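To make the definitions above concrete, here is a minimal sketch in Python (illustrative only; the function names, the example activities, and the K value are assumptions, not from the article) that computes Qr as ∏j aj^νj and uses ΔrG = RT ln(Qr/K) to predict the direction in which a mixture will react.

```python
import math

R_GAS = 8.314  # gas constant, J/(mol*K)

def reaction_quotient(activities, stoich_numbers):
    """Q_r = prod_j a_j**nu_j, with nu_j > 0 for products and
    nu_j < 0 for starting materials."""
    q = 1.0
    for a, nu in zip(activities, stoich_numbers):
        q *= a ** nu
    return q

def reaction_direction(q, k, temperature=298.15):
    """Return (Delta_r G in J/mol, predicted direction)."""
    delta_g = R_GAS * temperature * math.log(q / k)
    if delta_g < 0:
        return delta_g, "forward"
    if delta_g > 0:
        return delta_g, "reverse"
    return delta_g, "at equilibrium"

# Hypothetical example: A + 2 B <=> R + S with assumed activities and K.
activities = [0.5, 0.8, 0.1, 0.2]   # a_A, a_B, a_R, a_S
nu = [-1, -2, 1, 1]                 # stoichiometric numbers
q = reaction_quotient(activities, nu)   # = 0.0625
print(q, reaction_direction(q, k=10.0)) # Q < K, so the reaction runs forward
```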
Example

The burning of octane, C8H18 + 25/2 O2 → 8 CO2 + 9 H2O, has ΔrG° ≈ −240 kcal/mol, corresponding to an equilibrium constant of 10^175, a number so large that it is of no practical significance, since there are only about 5 × 10^24 molecules in a kilogram of octane.

Significance and applications

The reaction quotient plays a crucial role in understanding the direction and extent of a chemical reaction's progress towards equilibrium:

Equilibrium condition: At equilibrium, the reaction quotient (Q) is equal to the equilibrium constant (K) for the reaction. This condition is represented as Q = K, indicating that the forward and reverse reaction rates are equal.
Predicting reaction direction: If Q < K, the reaction will proceed in the forward direction to establish equilibrium. If Q > K, the reaction will proceed in the reverse direction to reach equilibrium.
Extent of reaction: The difference between Q and K provides information about how far the reaction is from equilibrium. A larger difference indicates a greater driving force for the reaction to proceed towards equilibrium.
Reaction kinetics: The reaction quotient can be used to study the kinetics of reversible reactions and determine rate laws, as it is related to the concentrations of reactants and products at any given time.
Equilibrium constant determination: By measuring the concentrations of reactants and products at equilibrium, the equilibrium constant (K) can be calculated from the reaction quotient (Q = K at equilibrium).

The reaction quotient is a powerful concept in chemical kinetics and thermodynamics, enabling the prediction of reaction directions, the extent of reaction progress, and the determination of equilibrium constants. It finds applications in various fields, including chemical engineering, biochemistry, and environmental chemistry, where understanding the behavior of reversible reactions is crucial.
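The octane example above can be sanity-checked with the relation K = exp(−ΔrG°/RT) from the previous section. The following sketch (illustrative; T = 298.15 K is assumed here, not stated in the article) reproduces the quoted order of magnitude.

```python
import math

# Convert the standard Gibbs free energy of reaction to an equilibrium
# constant via ln K = -Delta_r_G0 / (R*T), then express it as a power of 10.

R = 8.314               # J/(mol*K)
T = 298.15              # K (assumed standard temperature)
dG0 = -240e3 * 4.184    # -240 kcal/mol converted to J/mol

log10_K = -dG0 / (R * T) / math.log(10)
print(f"log10(K) ≈ {log10_K:.0f}")  # ≈ 176, i.e. K on the order of 10^175–10^176
```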
Physical sciences
Thermodynamics
Chemistry
1580554
https://en.wikipedia.org/wiki/Tiltmeter
Tiltmeter
A tiltmeter is a sensitive inclinometer designed to measure very small changes from the vertical, either of the ground or of structures. Tiltmeters are used extensively for monitoring volcanoes, the response of dams to filling, the small movements of potential landslides, the orientation and volume of hydraulic fractures, and the response of structures to various influences such as loading and foundation settlement. Tiltmeters may be purely mechanical or incorporate vibrating-wire or electrolytic sensors for electronic measurement. A sensitive instrument can detect changes of as little as one arc second.

Tiltmeters have a long, diverse history, somewhat parallel to the history of the seismometer. The very first tiltmeter was a long-length stationary pendulum. These were used in the very first large concrete dams, and are still in use today, augmented with newer technology such as laser reflectors. Although they had been used for other applications such as volcano monitoring, they have distinct disadvantages, such as their great length and their sensitivity to air currents. Even in dams, they are slowly being replaced by the modern electronic tiltmeter.

Volcano and Earth-movement monitoring then used the water-tube, long-baseline tiltmeter. In 1919, the physicist Albert A. Michelson noted that the most favorable arrangement to obtain high sensitivity and immunity from temperature perturbations is to use the equipotential surface defined by water in a buried, half-filled water pipe. This was a simple arrangement of two water pots connected by a long water-filled tube; any change in tilt would be registered as a difference in the fill level of one pot compared to the other. Although extensively used throughout the world for Earth-science research, these instruments have proven quite difficult to operate: because of their high sensitivity to temperature differentials, for example, they always have to be read in the middle of the night.

The modern electronic tiltmeter, which is slowly replacing all other forms of tiltmeter, uses a simple bubble-level principle, as used in the common carpenter's level. As shown in the figure, an arrangement of electrodes senses the exact position of the bubble in the electrolytic solution, to a high degree of precision. Any small changes in the level are recorded using a standard datalogger. This arrangement is quite insensitive to temperature, and can be fully compensated using built-in thermal electronics.

A newer technology using microelectromechanical systems (MEMS) sensors enables tilt-angle measuring tasks to be performed conveniently in both single- and dual-axis mode. Ultra-high-precision two-axis MEMS-driven digital inclinometer/tiltmeter instruments are available for rapid angle-measurement applications and surface profiling requiring very high resolution and an accuracy of one arc second. Two-axis MEMS-driven inclinometers/tiltmeters can be digitally compensated and precisely calibrated for non-linearity and operating-temperature variation, resulting in higher angular accuracy and stability over a wider angular measurement range and a broader operating-temperature range. Further, a digital display of readings can effectively prevent the parallax error experienced when viewing traditional "bubble" vials located at a distance.

The most dramatic application of tiltmeters is in the area of volcanic eruption prediction.
As shown in a figure from the USGS, the main volcano in Hawaii (Kilauea) has a pattern of filling the main chamber with magma and then discharging to a side vent. The graph shows this pattern of swelling of the main chamber (recorded by the tiltmeter), draining of that chamber, and then an eruption of the adjoining vent; each numbered peak of tilt on the graph is a recorded eruption.
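As a rough illustration of the trigonometry behind the MEMS inclinometers described above (a generic sketch, not any particular device's method; the function and the example values are hypothetical), a dual-axis tilt reading follows from the components of gravity measured along the sensor axes.

```python
import math

def tilt_angles(ax, ay, az):
    """Return (pitch, roll) in degrees from a 3-axis accelerometer reading
    of a stationary sensor, in units of g, where (0, 0, 1) means level."""
    pitch = math.degrees(math.atan2(ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, math.hypot(ax, az)))
    return pitch, roll

# Hypothetical reading of a sensor very slightly off level:
print(tilt_angles(0.01, -0.005, 0.99994))  # ≈ (0.573°, -0.286°)
```

In a real instrument the raw readings would also pass through the calibration and temperature-compensation steps mentioned above before the angles are logged.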
Technology
Surveying tools
null
5595163
https://en.wikipedia.org/wiki/Ceres%20%28dwarf%20planet%29
Ceres (dwarf planet)
Ceres (minor-planet designation: 1 Ceres) is a dwarf planet in the middle main asteroid belt, between the orbits of Mars and Jupiter. It was the first known asteroid, discovered on 1 January 1801 by Giuseppe Piazzi at Palermo Astronomical Observatory in Sicily and announced as a new planet. Ceres was later classified as an asteroid and then a dwarf planet, the only one not beyond Neptune's orbit.

Ceres's diameter is about a quarter that of the Moon. Its small size means that even at its brightest it is too dim to be seen by the naked eye, except under extremely dark skies. Its apparent magnitude ranges from 6.7 to 9.3, peaking at opposition (when it is closest to Earth) once every 15- to 16-month synodic period. As a result, its surface features are barely visible even with the most powerful telescopes, and little was known about it until the robotic NASA spacecraft Dawn approached Ceres for its orbital mission in 2015. Dawn found Ceres's surface to be a mixture of water ice and hydrated minerals such as carbonates and clay. Gravity data suggest Ceres to be partially differentiated into a muddy (ice-rock) mantle/core and a less dense but stronger crust that is at most thirty per cent ice by volume. Although Ceres likely lacks an internal ocean of liquid water, brines still flow through the outer mantle and reach the surface, allowing cryovolcanoes such as Ahuna Mons to form roughly every fifty million years. This makes Ceres the closest known cryovolcanically active body to the Sun. Ceres has an extremely tenuous and transient atmosphere of water vapour, vented from localised sources on its surface.

History

Discovery

In the years between the acceptance of heliocentrism in the 18th century and the discovery of Neptune in 1846, several astronomers argued that mathematical laws predicted the existence of a hidden or missing planet between the orbits of Mars and Jupiter. In 1596, theoretical astronomer Johannes Kepler believed that the ratios between planetary orbits would conform to "God's design" only with the addition of two planets: one between Jupiter and Mars and one between Venus and Mercury. Other theoreticians, such as Immanuel Kant, pondered whether the gap had been created by the gravity of Jupiter; in 1761, astronomer and mathematician Johann Heinrich Lambert asked: "And who knows whether already planets are missing which have departed from the vast space between Mars and Jupiter? Does it then hold of celestial bodies as well as of the Earth, that the stronger chafe the weaker, and are Jupiter and Saturn destined to plunder forever?"

In 1772, German astronomer Johann Elert Bode, citing Johann Daniel Titius, published a formula later known as the Titius–Bode law that appeared to predict the orbits of the known planets but for an unexplained gap between Mars and Jupiter. This formula predicted that there ought to be another planet with an orbital radius near 2.8 astronomical units (AU), or 420 million km, from the Sun. The Titius–Bode law gained more credence with William Herschel's 1781 discovery of Uranus near the predicted distance for a planet beyond Saturn. In 1800, a group headed by Franz Xaver von Zach, editor of the German astronomical journal Monatliche Correspondenz (Monthly Correspondence), sent requests to twenty-four experienced astronomers, whom he dubbed the "celestial police", asking that they combine their efforts and begin a methodical search for the expected planet. Although they did not discover Ceres, they later found the asteroids Pallas, Juno, and Vesta.
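For illustration, the Titius–Bode formula mentioned above is commonly written a = 0.4 + 0.3 × 2^n AU. The following sketch uses that standard textbook form (not taken from this article) to show the 2.8 AU slot, between Mars and Jupiter, in which Ceres was later found.

```python
# Titius-Bode rule, textbook form: a = 0.4 + 0.3 * 2**n AU, with Mercury
# taken as the n -> -infinity case (a = 0.4). The n = 3 entry is the
# "gap" at 2.8 AU that motivated the search for a missing planet.

predictions = [0.4] + [0.4 + 0.3 * 2 ** n for n in range(7)]
bodies = ["Mercury", "Venus", "Earth", "Mars", "(gap -> Ceres)",
          "Jupiter", "Saturn", "Uranus"]
for body, a in zip(bodies, predictions):
    print(f"{body:>15}: {a:5.1f} AU")
```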
One of the astronomers selected for the search was Giuseppe Piazzi, a Catholic priest at the academy of Palermo, Sicily. Before receiving his invitation to join the group, Piazzi discovered Ceres on 1 January 1801. He was searching for "the 87th [star] of the Catalogue of the Zodiacal stars of Mr la Caille", but found that "it was preceded by another". Instead of a star, Piazzi had found a moving star-like object, which he first thought was a comet. Piazzi observed Ceres twenty-four times, the final sighting occurring on 11 February 1801, when illness interrupted his work. He announced his discovery on 24 January 1801 in letters to two fellow astronomers, his compatriot Barnaba Oriani of Milan and Bode in Berlin. He reported it as a comet, but "since its movement is so slow and rather uniform, it has occurred to me several times that it might be something better than a comet". In April, Piazzi sent his complete observations to Oriani, Bode, and French astronomer Jérôme Lalande. The information was published in the September 1801 issue of the Monatliche Correspondenz.

By this time, the apparent position of Ceres had changed (primarily due to Earth's motion around the Sun) and was too close to the Sun's glare for other astronomers to confirm Piazzi's observations. Towards the end of the year, Ceres should have been visible again, but after such a long time it was difficult to predict its exact position. To recover Ceres, mathematician Carl Friedrich Gauss, then twenty-four years old, developed an efficient method of orbit determination. He predicted the path of Ceres within a few weeks and sent his results to von Zach. On 31 December 1801, von Zach and fellow celestial policeman Heinrich W. M. Olbers found Ceres near the predicted position and continued to record its position. At 2.8 AU from the Sun, Ceres appeared to fit the Titius–Bode law almost perfectly; when Neptune was discovered in 1846, eight AU closer than predicted, most astronomers concluded that the law was a coincidence.

The early observers were able to calculate the size of Ceres only to within an order of magnitude. Herschel underestimated its diameter at in 1802; in 1811, German astronomer Johann Hieronymus Schröter overestimated it as . In the 1970s, infrared photometry enabled more accurate measurements of its albedo, and Ceres's diameter was determined to within ten per cent of its true value of .

Name and symbol

Piazzi's proposed name for his discovery was Ceres Ferdinandea: Ceres after the Roman goddess of agriculture, whose earthly home, and oldest temple, lay in Sicily; and Ferdinandea in honour of Piazzi's monarch and patron, King Ferdinand III of Sicily. The latter was not acceptable to other nations and was dropped. Before von Zach's recovery of Ceres in December 1801, von Zach referred to the planet as Hera, and Bode referred to it as Juno. Despite Piazzi's objections, those names gained currency in Germany before the object's existence was confirmed. Once it was, astronomers settled on Piazzi's name. The adjectival forms of Ceres are Cererian and Cererean. Cerium, a rare-earth element discovered in 1803, was named after Ceres.

The old astronomical symbol of Ceres, still used in astrology, is a sickle, ⚳. The sickle was one of the classical symbols of the goddess Ceres and was suggested, apparently independently, by von Zach and Bode in 1802. It is similar in form to the symbol of the planet Venus (a circle with a small cross beneath), but with a break in the circle.
It had various minor graphic variants, including a reversed form typeset as a 'C' (the initial letter of the name Ceres) with a plus sign. The generic asteroid symbol of a numbered disk, ①, was introduced in 1867 and quickly became the norm.

Classification

The categorisation of Ceres has changed more than once and has been the subject of some disagreement. Bode believed Ceres to be the "missing planet" he had proposed to exist between Mars and Jupiter. Ceres was assigned a planetary symbol and remained listed as a planet in astronomy books and tables (along with Pallas, Juno, and Vesta) for over half a century. As other objects were discovered in the neighbourhood of Ceres, astronomers began to suspect that it represented the first of a new class of objects. When Pallas was discovered in 1802, Herschel coined the term asteroid ("star-like") for these bodies, writing that "they resemble small stars so much as hardly to be distinguished from them, even by very good telescopes". In 1852, Johann Franz Encke, in the Berliner Astronomisches Jahrbuch, declared the traditional system of granting planetary symbols too cumbersome for these new objects and introduced a new method of placing numbers before their names in order of discovery. The numbering system initially began with the fifth asteroid, 5 Astraea, as number 1, but in 1867 Ceres was adopted into the new system under the name 1 Ceres. By the 1860s, astronomers widely accepted that a fundamental difference existed between the major planets and asteroids such as Ceres, though the word "planet" had yet to be precisely defined. In the 1950s, scientists generally stopped considering most asteroids as planets, but Ceres sometimes retained its status after that because of its planet-like geophysical complexity.

Then, in 2006, the debate surrounding Pluto led to calls for a definition of "planet", and the possible reclassification of Ceres, perhaps even its general reinstatement as a planet. A proposal before the International Astronomical Union (IAU), the global body responsible for astronomical nomenclature and classification, defined a planet as "a celestial body that (a) has sufficient mass for its self-gravity to overcome rigid-body forces so that it assumes a hydrostatic equilibrium (nearly round) shape, and (b) is in orbit around a star, and is neither a star nor a satellite of a planet". Had this resolution been adopted, it would have made Ceres the fifth planet in order from the Sun, but on 24 August 2006 the assembly adopted the additional requirement that a planet must have "cleared the neighbourhood around its orbit". By this definition Ceres is not a planet, because it does not dominate its orbit, sharing it as it does with the thousands of other asteroids in the asteroid belt and constituting only about forty per cent of the belt's total mass. Bodies that met the first proposed definition but not the second, such as Ceres, were instead classified as dwarf planets. Planetary geologists, however, still often ignore this definition and consider Ceres to be a planet.

Ceres is a dwarf planet, but there is some confusion about whether it is also an asteroid. A NASA webpage states that Vesta, the belt's second-largest object, is the largest asteroid. The IAU has been equivocal on the subject, though its Minor Planet Center, the organisation charged with cataloguing such objects, notes that dwarf planets may have dual designations, and the joint IAU/USGS/NASA Gazetteer categorises Ceres as both an asteroid and a dwarf planet.
Orbit

Ceres follows an orbit between Mars and Jupiter, near the middle of the asteroid belt, with an orbital period (year) of 4.6 Earth years. Compared to other planets and dwarf planets, Ceres's orbit is moderately tilted relative to that of Earth; its inclination (i) is 10.6°, compared to 7° for Mercury and 17° for Pluto. It is also slightly elongated, with an eccentricity (e) of 0.08, compared to 0.09 for Mars.

Ceres is not part of an asteroid family, probably due to its large proportion of ice, as smaller bodies with the same composition would have sublimated to nothing over the age of the Solar System. It was once thought to be a member of the Gefion family, the members of which share similar proper orbital elements, suggesting a common origin through an asteroid collision in the past. Ceres was later found to have a different composition from the Gefion family and appears to be an interloper, having similar orbital elements but not a common origin.

Resonances

Due to their small masses and large separations, objects within the asteroid belt rarely fall into gravitational resonances with each other. Nevertheless, Ceres is able to capture other asteroids into temporary 1:1 resonances (making them temporary trojans) for periods from a few hundred thousand to more than two million years; fifty such objects have been identified. Ceres is close to a 1:1 mean-motion orbital resonance with Pallas (their proper orbital periods differ by 0.2%), but not close enough to be significant over astronomical timescales.

Rotation and axial tilt

The rotation period of Ceres (the Cererian day) is 9 hours and 4 minutes; the small equatorial crater Kait is selected as its prime meridian. Ceres has an axial tilt of 4°, small enough for its polar regions to contain permanently shadowed craters that are expected to act as cold traps and accumulate water ice over time, similar to what occurs on the Moon and Mercury. About 0.14% of water molecules released from the surface are expected to end up in the traps, hopping an average of three times before escaping or being trapped. Dawn, the first spacecraft to orbit Ceres, determined that the north polar axis points at right ascension 19h 25m 40.3s (291.418°), declination +66° 45′ 50″ (about 1.5 degrees from Delta Draconis), confirming the 4° axial tilt. This means that Ceres currently sees little to no seasonal variation in sunlight by latitude. Gravitational influence from Jupiter and Saturn over the course of the last three million years has triggered cyclical shifts in Ceres's axial tilt, ranging from two to twenty degrees, meaning that seasonal variation in sun exposure has occurred in the past, with the last period of seasonal activity estimated at 14,000 years ago. Those craters that remain in shadow during periods of maximum axial tilt are the most likely to retain water ice from eruptions or cometary impacts over the age of the Solar System.

Geology

Ceres is the largest asteroid in the main asteroid belt. It has been classified as a C-type or carbonaceous asteroid and, due to the presence of clay minerals, as a G-type asteroid. It has a similar, but not identical, composition to that of carbonaceous chondrite meteorites. It is an oblate spheroid, with an equatorial diameter 8% larger than its polar diameter. Measurements from the Dawn spacecraft found a mean diameter of and a mass of . This gives Ceres a density of , suggesting that a quarter of its mass is water ice.
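The diameter, mass, and density figures in the preceding sentence did not survive extraction. As a hedged illustration of the calculation, the commonly quoted Dawn-era values (assumptions here, not restored from this text: mean diameter about 940 km, mass about 9.39 × 10^20 kg) give a mean density near 2.2 g/cm3.

```python
import math

# Mean density from mass and mean diameter, rho = m / (4/3 * pi * r^3).
# The input values are commonly quoted Dawn-era figures, assumed for this
# sketch because the article's own numbers were lost in extraction.

mass_kg = 9.39e20        # assumed mass of Ceres
diameter_km = 940        # assumed mean diameter

radius_m = diameter_km / 2 * 1e3
volume_m3 = 4 / 3 * math.pi * radius_m ** 3
density = mass_kg / volume_m3  # kg/m^3
print(f"{density:.0f} kg/m^3 ≈ {density / 1000:.2f} g/cm^3")  # ≈ 2.16 g/cm^3
```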
Ceres makes up 40% of the estimated mass of the asteroid belt, and it has times the mass of the next asteroid, Vesta, but it is only 1.3% the mass of the Moon. It is close to being in hydrostatic equilibrium, but some deviations from an equilibrium shape have yet to be explained. Ceres is the only widely accepted dwarf planet with an orbital period less than that of Neptune. Modelling has suggested that Ceres's rocky material is partially differentiated, and that it may possess a small core, but the data are also consistent with a mantle of hydrated silicates and no core. Because Dawn lacked a magnetometer, it is not known whether Ceres has a magnetic field, but it is believed not to. Ceres's internal differentiation may be related to its lack of a natural satellite, as satellites of main-belt asteroids are mostly believed to form from collisional disruption, creating an undifferentiated, rubble-pile structure.

Surface

Composition

The surface composition of Ceres is homogeneous on a global scale, and it is rich in carbonates and ammoniated phyllosilicates that have been altered by water, though water ice in the regolith varies from approximately 10% in polar latitudes to much drier, even ice-free, in the equatorial regions. Studies using the Hubble Space Telescope show graphite, sulfur, and sulfur dioxide on Ceres's surface. The graphite is evidently the result of space weathering on Ceres's older surfaces; the latter two are volatile under Cererian conditions and would be expected either to escape quickly or to settle in cold traps, and so are evidently associated with relatively recent geological activity. Organic compounds were detected in Ernutet Crater, and most of the planet's near surface is rich in carbon, at approximately 20% by mass. The carbon content is more than five times higher than in carbonaceous chondrite meteorites analysed on Earth. The surface carbon shows evidence of being mixed with the products of rock–water interactions, such as clays. This chemistry suggests Ceres formed in a cold environment, perhaps outside the orbit of Jupiter, and that it accreted from ultra-carbon-rich materials in the presence of water, which could provide conditions favourable to organic chemistry.

Craters

Dawn revealed that Ceres has a heavily cratered surface, though with fewer large craters than expected. Models based on the formation of the current asteroid belt had predicted that Ceres should have ten to fifteen craters larger than in diameter. The largest confirmed crater on Ceres, Kerwan Basin, is across. The most likely reason for this is viscous relaxation of the crust slowly flattening out larger impacts. Ceres's north polar region shows far more cratering than the equatorial region, with the eastern equatorial region in particular comparatively lightly cratered. The overall size frequency of craters between twenty and a hundred kilometres (10–60 mi) across is consistent with their having originated in the Late Heavy Bombardment, with craters outside the ancient polar regions likely erased by early cryovolcanism. Three large, shallow basins (planitiae) with degraded rims are likely to be eroded craters. The largest, Vendimia Planitia, at across, is also the largest single geographical feature on Ceres. Two of the three have higher-than-average ammonium concentrations. Dawn observed 4,423 boulders larger than in diameter on the surface of Ceres. These boulders likely formed through impacts and are found within or near craters, though not all craters contain boulders.
Large boulders are more numerous at higher latitudes. Boulders on Ceres are brittle and degrade rapidly due to thermal stress (at dawn and dusk, the surface temperature changes rapidly) and meteoritic impacts. Their maximum age is estimated to be 150 million years, much shorter than the lifetime of boulders on Vesta.

Tectonic features

Although Ceres lacks plate tectonics, with the vast majority of its surface features linked either to impacts or to cryovolcanic activity, several potentially tectonic features have been tentatively identified on its surface, particularly in its eastern hemisphere. The Samhain Catenae, kilometre-scale linear fractures on Ceres's surface, lack any apparent link to impacts and bear a stronger resemblance to pit-crater chains, which are indicative of buried normal faults. Also, several craters on Ceres have shallow, fractured floors consistent with cryomagmatic intrusion.

Cryovolcanism

Ceres has one prominent mountain, Ahuna Mons; this appears to be a cryovolcano and has few craters, suggesting a maximum age of 240 million years. Its relatively high gravitational field suggests it is dense, and thus composed more of rock than ice, and that its placement is likely due to diapirism of a slurry of brine and silicate particles from the top of the mantle. It is roughly antipodal to Kerwan Basin. Seismic energy from the Kerwan-forming impact may have focused on the opposite side of Ceres, fracturing the outer layers of the crust and triggering the movement of high-viscosity cryomagma (muddy water ice softened by its content of salts) onto the surface. Kerwan too shows evidence of the effects of liquid water due to impact melting of subsurface ice.

A 2018 computer simulation suggests that cryovolcanoes on Ceres, once formed, recede due to viscous relaxation over several hundred million years. The team identified 22 features as strong candidates for relaxed cryovolcanoes on Ceres's surface. Yamor Mons, an ancient, impact-cratered peak, resembles Ahuna Mons despite being much older, because it lies in Ceres's northern polar region, where lower temperatures prevent viscous relaxation of the crust. Models suggest that, over the past billion years, one cryovolcano has formed on Ceres on average every fifty million years. The eruptions may be linked to ancient impact basins but are not uniformly distributed over Ceres. The model suggests that, contrary to findings at Ahuna Mons, Cererian cryovolcanoes must be composed of far less dense material than average for Ceres's crust, or the observed viscous relaxation could not occur. An unexpectedly large number of Cererian craters have central pits, perhaps due to cryovolcanic processes; others have central peaks.

Hundreds of bright spots (faculae) have been observed by Dawn, the brightest in the middle of Occator Crater. The bright spot in the centre of Occator is named Cerealia Facula, and the group of bright spots to its east, Vinalia Faculae. Occator possesses a pit 9–10 km wide, partially filled by a central dome. The dome post-dates the faculae and is likely due to the freezing of a subterranean reservoir, comparable to pingos in Earth's Arctic region. A haze periodically appears above Cerealia, supporting the hypothesis that some sort of outgassing or sublimating ice formed the bright spots. In March 2016, Dawn found definitive evidence of water ice on the surface of Ceres at Oxo Crater.
On 9 December 2015, NASA scientists reported that the bright spots on Ceres may be due to a type of salt from evaporated brine containing magnesium sulfate hexahydrate (MgSO4·6H2O); the spots were also found to be associated with ammonia-rich clays. Near-infrared spectra of these bright areas were reported in 2017 to be consistent with a large amount of sodium carbonate (Na2CO3) and smaller amounts of ammonium chloride (NH4Cl) or ammonium bicarbonate (NH4HCO3). These materials have been suggested to originate from the crystallisation of brines that reached the surface. In August 2020, NASA confirmed that Ceres is a water-rich body with a deep reservoir of brine that percolated to the surface in hundreds of locations, causing "bright spots", including those in Occator Crater.

Internal structure

The active geology of Ceres is driven by ice and brines. Water leached from rock is estimated to possess a salinity of around 5%. Altogether, Ceres is approximately 50% water by volume (compared to 0.1% for Earth) and 73% rock by mass. Ceres's largest craters are several kilometres deep, inconsistent with an ice-rich shallow subsurface. The fact that the surface has preserved craters almost in diameter indicates that the outermost layer of Ceres is roughly 1,000 times stronger than water ice. This is consistent with a mixture of silicates, hydrated salts, and methane clathrates, with no more than 30% water ice by volume.

Gravity measurements from Dawn have generated three competing models for Ceres's interior. In the three-layer model, Ceres is thought to consist of an outer, thick crust of ice, salts, and hydrated minerals and an inner muddy "mantle" of hydrated rock, such as clays, separated by a layer of a muddy mixture of brine and rock. It is not possible to tell whether Ceres's deep interior contains liquid or a core of dense material rich in metal, but the low central density suggests it may retain about 10% porosity. One study estimated the densities of the core and mantle/crust to be 2.46–2.90 and 1.68–1.95 g/cm3 respectively, with the mantle and crust together being thick. Only partial dehydration (expulsion of ice) from the core is expected, though the high density of the mantle relative to water ice reflects its enrichment in silicates and salts. That is, the core (if it exists), the mantle, and the crust all consist of rock and ice, though in different ratios.

Ceres's mineral composition can be determined (indirectly) only for its outer . The solid outer crust, thick, is a mixture of ice, salts, and hydrated minerals. Under that is a layer that may contain a small amount of brine; this extends to a depth of at least the limit of detection. Under that is thought to be a mantle dominated by hydrated rocks such as clays.

In one two-layer model, Ceres consists of a core of chondrules and a mantle of mixed ice and micron-sized solid particulates ("mud"). Sublimation of ice at the surface would leave a deposit of hydrated particulates perhaps twenty metres thick. The range of the extent of differentiation is consistent with the data, from a large core of 75% chondrules and 25% particulates with a mantle of 75% ice and 25% particulates, to a small core consisting nearly entirely of particulates with a mantle of 30% ice and 70% particulates. With a large core, the core–mantle boundary should be warm enough for pockets of brine. With a small core, the mantle should remain liquid below . In the latter case, a 2% freezing of the liquid reservoir would compress the liquid enough to force some to the surface, producing cryovolcanism.
A second two-layer model suggests a partial differentiation of Ceres into a volatile-rich crust and a denser mantle of hydrated silicates. A range of densities for the crust and mantle can be calculated from the types of meteorite thought to have impacted Ceres. With CI-class meteorites (density 2.46 g/cm3), the crust would be approximately thick and have a density of 1.68 g/cm3; with CM-class meteorites (density 2.9 g/cm3), the crust would be approximately thick and have a density of 1.9 g/cm3. Best-fit modelling yields a crust approximately thick with a density of approximately 1.25 g/cm3, and a mantle/core density of approximately 2.4 g/cm3.

Atmosphere

In 2017, Dawn confirmed that Ceres has a transient atmosphere of water vapour. Hints of an atmosphere had appeared in early 2014, when the Herschel Space Observatory detected localised mid-latitude sources of water vapour on Ceres, no more than in diameter, which each give off approximately molecules (3 kg) of water per second. Two potential source regions, designated Piazzi (123°E, 21°N) and Region A (231°E, 23°N), were visualised in the near infrared as dark areas (Region A also has a bright centre) by the Keck Observatory. Possible mechanisms for the vapour release are sublimation from approximately of exposed surface ice, cryovolcanic eruptions resulting from radiogenic internal heat, or pressurisation of a subsurface ocean due to thickening of an overlying layer of ice. In 2015, David Jewitt included Ceres in his list of active asteroids.

Surface water ice is unstable at distances less than 5 AU from the Sun, so it is expected to sublime if exposed directly to solar radiation. Proton emission from solar flares and coronal mass ejections can sputter exposed ice patches on the surface, leading to a positive correlation between detections of water vapour and solar activity. Water ice can migrate from the deep layers of Ceres to the surface, but it escapes in a short time. Surface sublimation would be expected to be lower when Ceres is farther from the Sun in its orbit, whereas internally powered emissions should not be affected by its orbital position. The limited data previously available suggested cometary-style sublimation, but evidence from Dawn suggests geologic activity could be at least partially responsible.

Studies using Dawn's gamma-ray and neutron detector (GRaND) reveal that Ceres accelerates electrons from the solar wind; the most accepted hypothesis is that these electrons are being accelerated by collisions between the solar wind and a tenuous water-vapour exosphere. Bow shocks like these could also be explained by a transient magnetic field, but this is considered less likely, as the interior of Ceres is not thought to be sufficiently electrically conductive.

Ceres's thin exosphere is continuously replenished through the exposure of water-ice patches by impacts, the diffusion of water ice through the porous ice crust, and proton sputtering during solar activity. The rate of this vapour diffusion scales with grain size and is heavily affected by a global dust mantle consisting of an aggregate of approximately 1-micron particles. Exospheric replenishment through sublimation alone is very small, with the current outgassing rate being only 0.003 kg/s. Various models of an extant exosphere have been attempted, including ballistic-trajectory, DSMC, and polar-cap numerical models.
Results showed a water-exosphere half-life of 7 hours from the ballistic-trajectory model; an outgassing rate of 6 kg/s, with an optically thin atmosphere sustained for tens of days, from a DSMC model; and seasonal polar caps formed from exospheric water delivery in the polar-cap model. The mobility of water molecules within the exosphere is dominated by ballistic hops coupled with interaction with the surface; however, less is known about direct interactions with planetary regoliths.

Origin and evolution

Ceres is a surviving protoplanet that formed 4.56 billion years ago and is, alongside Pallas and Vesta, one of only three remaining in the inner Solar System, the rest having either merged to form terrestrial planets, been shattered in collisions, or been ejected by Jupiter. Despite Ceres's current location, its composition is not consistent with having formed within the asteroid belt. It seems rather that it formed between the orbits of Jupiter and Saturn, and was deflected into the asteroid belt as Jupiter migrated outward. The discovery of ammonium salts in Occator Crater supports an origin in the outer Solar System, as ammonia is far more abundant in that region.

The early geological evolution of Ceres depended on the heat sources available during and after its formation: impact energy from planetesimal accretion and the decay of radionuclides (possibly including short-lived extinct radionuclides such as aluminium-26). These may have been sufficient to allow Ceres to differentiate into a rocky core and icy mantle, or even a liquid-water ocean, soon after its formation. This ocean should have left an icy layer under the surface as it froze. The fact that Dawn found no evidence of such a layer suggests that Ceres's original crust was at least partially destroyed by later impacts that thoroughly mixed the ice with the salts and silicate-rich material of the ancient seafloor and the material beneath. Ceres possesses surprisingly few large craters, suggesting that viscous relaxation and cryovolcanism have erased older geological features. The presence of clays and carbonates requires chemical reactions at temperatures above 50°C, consistent with hydrothermal activity. Ceres has become considerably less geologically active over time, with a surface now dominated by impact craters; nevertheless, evidence from Dawn reveals that internal processes have continued to sculpt Ceres's surface to a significant extent, contrary to predictions that its small size would have ended internal geological activity early in its history.

Habitability

Although Ceres is not as actively discussed as a potential home for microbial extraterrestrial life as Mars, Europa, Enceladus, or Titan are, it has the most water of any body in the inner Solar System after Earth, and the likely brine pockets under its surface could provide habitats for life. Unlike Europa or Enceladus, it does not experience tidal heating, but it is close enough to the Sun, and contains enough long-lived radioactive isotopes, to preserve liquid water in its subsurface for extended periods. The remote detection of organic compounds and the presence of water mixed with 20% carbon by mass in its near surface could provide conditions favourable to organic chemistry. Of the biochemical elements, Ceres is rich in carbon, hydrogen, oxygen, and nitrogen, but phosphorus has yet to be detected, and sulfur, despite being suggested by Hubble UV observations, was not detected by Dawn.
Observation and exploration

Observation

When in opposition near its perihelion, Ceres can reach an apparent magnitude of +6.7. This is too dim to be visible to the average naked eye, but under ideal viewing conditions, keen eyes may be able to see it. Vesta is the only other asteroid that can regularly reach a similarly bright magnitude, while Pallas and 7 Iris do so only when both in opposition and near perihelion. When in conjunction, Ceres has a magnitude of around +9.3, which corresponds to the faintest objects visible with 10×50 binoculars; thus, it can be seen with such binoculars in a naturally dark and clear night sky around new moon. An occultation of the star BD+8°471 by Ceres was observed on 13 November 1984 in Mexico, Florida, and across the Caribbean, allowing better measurements of its size, shape, and albedo. On 25 June 1995, Hubble obtained ultraviolet images of Ceres with resolution. In 2002, the Keck Observatory obtained infrared images with resolution using adaptive optics.

Before the Dawn mission, only a few surface features had been unambiguously detected on Ceres. High-resolution ultraviolet Hubble images taken in 1995 showed a dark spot on its surface, which was nicknamed "Piazzi" in honour of the discoverer of Ceres. It was thought to be a crater. Visible-light images of a full rotation taken by Hubble in 2003 and 2004 showed eleven recognisable surface features, the natures of which were undetermined. One of them corresponded to the Piazzi feature. Near-infrared images over a whole rotation, taken with adaptive optics by the Keck Observatory in 2012, showed bright and dark features moving with Ceres's rotation. Two dark features were circular and were presumed to be craters; one was observed to have a bright central region, and the other was identified as the Piazzi feature. Dawn eventually revealed Piazzi to be a dark region in the middle of Vendimia Planitia, close to the crater Dantu, and the other dark feature to be within Hanami Planitia, close to Occator Crater.

Dawn mission

In the early 1990s, NASA initiated the Discovery Program, which was intended to be a series of low-cost scientific missions. In 1996, the program's study team proposed a high-priority mission to explore the asteroid belt using a spacecraft with an ion engine. Funding remained problematic for nearly a decade, but by 2004 the Dawn vehicle had passed its critical design review.

Dawn, the first space mission to visit either Vesta or Ceres, was launched on 27 September 2007. On 3 May 2011, Dawn acquired its first targeting image from Vesta. After orbiting Vesta for thirteen months, Dawn used its ion engine to depart for Ceres, with gravitational capture occurring on 6 March 2015 at a separation of , four months before the New Horizons flyby of Pluto. The spacecraft's instrumentation included a framing camera, a visual and infrared spectrometer, and a gamma-ray and neutron detector. These instruments examined Ceres's shape and elemental composition. On 13 January 2015, as Dawn approached Ceres, the spacecraft took its first images at near-Hubble resolution, revealing impact craters and a small high-albedo spot on the surface. Additional imaging sessions, at increasingly better resolution, took place from February to April.

Dawn's mission profile called for it to study Ceres from a series of circular polar orbits at successively lower altitudes. It entered its first observational orbit ("RC3") around Ceres at an altitude of on 23 April 2015, staying for only one orbit (15 days).
The spacecraft then reduced its orbital distance to for its second observational orbit ("survey") for three weeks, then down to ("HAMO", high-altitude mapping orbit) for two months, and then down to its final orbit at ("LAMO", low-altitude mapping orbit) for at least three months. In October 2015, NASA released a true-colour portrait of Ceres made by Dawn. In 2017, Dawn's mission was extended to perform a series of closer orbits around Ceres until the hydrazine used to maintain its orbit ran out. Dawn soon discovered evidence of cryovolcanism. Two distinct bright spots (or high-albedo features) inside a crater (different from the bright spots observed in earlier Hubble images) were seen in a 19 February 2015 image, leading to speculation about a possible cryovolcanic origin or outgassing. On 2 September 2016, scientists from the Dawn team argued in a Science paper that Ahuna Mons was the strongest evidence yet for cryovolcanic features on Ceres. On 11 May 2015, NASA released a higher-resolution image showing that the spots were composed of multiple smaller spots. On 9 December 2015, NASA scientists reported that the bright spots on Ceres may be related to a type of salt, particularly a form of brine containing magnesium sulfate hexahydrate (MgSO4·6H2O); the spots were also found to be associated with ammonia-rich clays. In June 2016, near-infrared spectra of these bright areas were found to be consistent with a large amount of sodium carbonate (Na2CO3), implying that recent geologic activity was probably involved in the creation of the bright spots. From June to October 2018, Dawn orbited Ceres from as close as to as far away as . The Dawn mission ended on 1 November 2018 after the spacecraft ran out of fuel. Future missions In 2020, an ESA team proposed the Calathus Mission concept, a follow-up mission to Occator Crater, to return a sample of the bright carbonate faculae and dark organics to Earth. The Chinese Space Agency is designing a sample-return mission from Ceres that would take place during the 2020s.
Physical sciences
Solar System
null
5595777
https://en.wikipedia.org/wiki/Luminous%20infrared%20galaxy
Luminous infrared galaxy
Luminous infrared galaxies or LIRGs are galaxies with luminosities, the measurement of brightness, above . They are also referred to as submillimeter galaxies (SMGs), after their usual method of detection. LIRGs are more abundant than starburst galaxies, Seyfert galaxies and quasi-stellar objects at comparable luminosity. Infrared galaxies emit more energy in the infrared than at all other wavelengths combined. A LIRG's luminosity is 100 billion times that of the Sun. Galaxies with luminosities above are ultraluminous infrared galaxies (ULIRGs). Galaxies exceeding are characterised as hyperluminous infrared galaxies (HyLIRGs). Those exceeding are extremely luminous infrared galaxies (ELIRGs). Many LIRGs and ULIRGs show interactions and disruptions. Many of these types of galaxies spawn about 100 new stars a year, compared to the Milky Way, which spawns about one a year; this helps create their high luminosity. Discovery and characteristics Infrared galaxies appear to be single, gas-rich spirals whose infrared luminosity is created largely by the formation of stars within them. These types of galaxies were discovered in 1983 with IRAS. A LIRG's excess infrared luminosity may also come from the presence of an active galactic nucleus (AGN) residing at the center. These galaxies emit more energy in the infrared portion of the spectrum, not visible to the naked eye. The energy given off by LIRGs is comparable to that of a quasar (a type of AGN), formerly known as the most energetic type of object in the universe. LIRGs are brighter in the infrared than in the optical spectrum because the visible light is absorbed by the high amounts of gas and dust, and the dust re-emits thermal energy in the infrared spectrum. LIRGs are known to exist in denser parts of the universe than non-LIRGs. ULIRG LIRGs are also capable of becoming ultraluminous infrared galaxies (ULIRGs), but there is no fixed timetable, because not all LIRGs turn into ULIRGs. Studies have shown that ULIRGs are more likely to contain an AGN than LIRGs. According to one study, a ULIRG is just one part of an evolutionary galaxy merger scenario. In essence, two or more spiral galaxies (galaxies that consist of a flat, rotating disk containing stars, gas and dust, and a central concentration of stars known as the bulge) merge to form an early-stage merger. An early-stage merger in this case can also be identified as a LIRG. After that, it becomes a late-stage merger, which is a ULIRG. It then becomes a quasar, and in the final stage of the evolution it becomes an elliptical galaxy. This can be evidenced by the fact that stars are much older in elliptical galaxies than those found in the earlier stages of the evolution. HyLIRG Hyperluminous infrared galaxies (HyLIRGs), also referred to as HiLIRGs and HLIRGs, are considered to be some of the most luminous persistent objects in the Universe, exhibiting extremely high star formation rates, and most are known to harbour active galactic nuclei (AGN). They are defined as galaxies with luminosities above 10¹³ L⊙, as distinct from the less luminous population of ULIRGs (L = 10¹²–10¹³ L⊙). HLIRGs were first identified through follow-up observations of the IRAS mission. IRAS F10214+4724, a HyLIRG gravitationally lensed by a foreground elliptical galaxy, was considered to be one of the most luminous objects in the Universe, having an intrinsic luminosity of ~2 × 10¹³ L⊙.
It is believed that the bolometric luminosity of this HLIRG is likely amplified by a factor of ~30 as a result of the gravitational lensing. The majority (~80%) of the mid-infrared spectrum of these objects is found to be dominated by AGN emission. However, starburst (SB) activity is known to be significant in all known sources, with a mean SB contribution of ~30%. Star formation rates in HLIRGs have been shown to reach ~3×10² – 3×10³ M⊙ yr⁻¹. ELIRG The extremely luminous infrared galaxy WISE J224607.57-052635.0, with a luminosity of 300 trillion Suns, was discovered by NASA's Wide-field Infrared Survey Explorer (WISE), and as of May 2015 is the most luminous galaxy found. The galaxy belongs to a new class of objects discovered by WISE, extremely luminous infrared galaxies, or ELIRGs. Light from the WISE J224607.57-052635.0 galaxy has traveled 12.5 billion years. The black hole at its center was billions of times the mass of the Sun when the universe was a tenth (1.3 billion years) of its present age of 13.8 billion years. There are three reasons the black holes in the ELIRGs could be massive. First, the embryonic black holes might be bigger than thought possible. Second, the Eddington limit was exceeded. When a black hole feeds, gas falls in and heats, emitting light. The pressure of the emitted light forces the gas outward, creating a limit to how fast the black hole can continuously absorb matter. If a black hole broke this limit, it could theoretically increase in size at a fast rate. Black holes have previously been observed breaking this limit; the black hole in the study would have had to repeatedly break the limit to grow this large. Third, the black holes might just be bending this limit, absorbing gas faster than thought possible, if the black hole is not spinning fast. If a black hole spins slowly, it will not repel its gas absorption as much. A slow-spinning black hole can absorb more matter than a fast-spinning black hole. The massive black holes in ELIRGs could be absorbing matter for a longer time. Twenty new ELIRGs, including the most luminous galaxy found to date, have been discovered. These galaxies were not found earlier because of their distance, and because dust converts their visible light into infrared light. One has been observed to have three star-forming areas. Observations IRAS The Infrared Astronomical Satellite (IRAS), launched in 1983, carried out the first all-sky survey at far-infrared wavelengths. In that survey, tens of thousands of galaxies were detected, many of which would not have been recorded in previous surveys. It is now clear that the number of detections rose because the majority of LIRGs in the universe emit the bulk of their energy in the far infrared. Using IRAS, scientists were able to determine the luminosity of the galactic objects discovered. The telescope was a joint project of the United States (NASA), the Netherlands (NIVR), and the United Kingdom (SERC). Over 250,000 infrared sources were observed during its 10-month mission. GOALS The Great Observatories All-sky LIRG Survey (GOALS) is a multi-wavelength study of luminous infrared galaxies, incorporating observations with NASA's Great Observatories and other ground- and space-based telescopes. Using data from NASA's Spitzer, Hubble, Chandra and GALEX observatories, the survey studies over 200 of the most luminous infrared-selected galaxies in the local universe. Approximately 180 LIRGs were identified, along with over 20 ULIRGs.
The LIRGs and ULIRGs targeted in GOALS span the full range of nuclear spectral types (type-1 and type-2 active galactic nuclei, LINERs, and starbursts) and interaction stages (major mergers, minor mergers, and isolated galaxies). List Some examples of notable LIRGs, ULIRGs, HyLIRGs, and ELIRGs. Image gallery
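The luminosity classes defined above form a simple threshold scheme: LIRG above 10¹¹ L⊙ (the "100 billion Suns" stated earlier), ULIRG above 10¹², HyLIRG above 10¹³. A minimal sketch of a classifier for these classes follows; the ELIRG bound of 10¹⁴ L⊙ is an assumed, commonly used value rather than one given in the text, the function name is invented, and the lensing correction mirrors the ~30× amplification quoted for IRAS F10214+4724.

```python
# Threshold classifier for infrared-galaxy luminosity classes.
# The LIRG/ULIRG/HyLIRG bounds follow the article; the ELIRG bound
# of 1e14 L_sun is an assumed, commonly used value.
CLASSES = [
    (1e14, "ELIRG"),
    (1e13, "HyLIRG"),
    (1e12, "ULIRG"),
    (1e11, "LIRG"),
]

def classify_ir_galaxy(luminosity_lsun: float, lensing_amplification: float = 1.0) -> str:
    """Classify a galaxy by intrinsic infrared luminosity in solar units.

    If the source is gravitationally lensed, the observed luminosity is
    divided by the amplification factor before classification.
    """
    intrinsic = luminosity_lsun / lensing_amplification
    for threshold, name in CLASSES:
        if intrinsic >= threshold:
            return name
    return "sub-LIRG"

# WISE J224607.57-052635.0: ~300 trillion L_sun -> "ELIRG"
print(classify_ir_galaxy(3e14))
# Lensed source with an illustrative observed luminosity, de-amplified
# by the ~30x factor quoted for IRAS F10214+4724 -> "HyLIRG"
print(classify_ir_galaxy(6e14, lensing_amplification=30.0))
```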
Physical sciences
Active galactic nucleus
Astronomy
7300733
https://en.wikipedia.org/wiki/Hatzegopteryx
Hatzegopteryx
Hatzegopteryx ("Hațeg basin wing") is a genus of azhdarchid pterosaur found in the late Maastrichtian deposits of the Densuş Ciula Formation, an outcropping in Transylvania, Romania. It is known only from the type species, Hatzegopteryx thambema, named by paleontologists Eric Buffetaut, Dan Grigorescu, and Zoltan Csiki in 2002 based on parts of the skull and humerus. Additional specimens, including a neck vertebra, were later placed in the genus, representing a range of sizes. The largest of these remains indicate it was among the biggest pterosaurs, with an estimated wingspan of . Unusually among giant azhdarchids, Hatzegopteryx had a very wide skull bearing large muscular attachments, bones with a spongy internal texture instead of being hollow, and a short, robust, and heavily muscled neck measuring long, which was about half the length of other azhdarchids with comparable wingspans and was capable of withstanding strong bending forces. Hatzegopteryx inhabited Hațeg Island, an island situated in the Cretaceous subtropics within the prehistoric Tethys Sea. In the absence of large theropods, Hatzegopteryx was likely the apex predator of Hațeg Island, tackling proportionally larger prey (including dwarf titanosaurs and iguanodontians) than other azhdarchids. Discovery and naming The first pterosaur remains from Romania were identified by Franz Nopcsa in 1899, and the first remains of Hatzegopteryx were found during a student dig in the late 1970s from the upper part of the Middle Densuş Ciula Formation of Vălioara, northwestern Hațeg Basin, Transylvania, western Romania, which has been dated to the late Maastrichtian stage of the Late Cretaceous Period, around 66 million years ago. The holotype of Hatzegopteryx, FGGUB R 1083A, consists of two fragments from the back of the skull and the damaged proximal part of a left humerus. One of these fragments, namely the occipital region, was initially referred to a theropod dinosaur when it was first announced in 1991. A long midsection of a femur found nearby, FGGUB R1625, may also belong to Hatzegopteryx. FGGUB R1625 would have belonged to a smaller individual of Hatzegopteryx (assuming it pertains to the genus), with a wingspan. Additional reported specimens from the locality include an unpublished mandible, also from a large individual. Hatzegopteryx was named in 2002 by French paleontologist Eric Buffetaut and Romanian paleontologists Dan Grigorescu and Zoltan Csiki. The generic name is derived from the Hatzeg (or Hațeg) basin of Transylvania, where the bones were found, and from the Greek word pteryx (ἡ πτέρυξ, -υγος (also ἡ πτερύξ, -ῦγος), or “wing”. The specific name thambema is derived from the Greek word for “terror” or “monster” (τό θάμβημα, -ήματος), in reference to its huge size. New specimens of Hatzegopteryx have since been recovered from other localities. In the Sânpetru Formation from the locality of Vadu, Sântămăria-Orlea, a medium-sized scapulocoracoid was found, which probably pertained to an individual with a wingspan of . From the Râpa Roșie locality of the Sebeș Formation, which is contemporary and adjacent to the Densuș Ciula Formation, a single large neck vertebra, the "RR specimen" or EME 215, was found. Although the lack of overlapping elements prevents this specimen from being definitely referred to Hatzegopteryx thambema, its distinctive internal bone structure, as well as the lack of evidence for a second giant azhdarchid in the area, warrant its referral to at least H. sp. 
Description Size The size of Hatzegopteryx was initially estimated by comparing the humerus fragment with that of Quetzalcoatlus northropi, which has a -long humerus. Observing that the Hatzegopteryx fragment presented less than half of the original bone, Buffetaut and colleagues established that it could possibly have been "slightly longer" than that of Quetzalcoatlus. The wingspan of the latter had been estimated at in 1981. Earlier estimates had strongly exceeded this at . They concluded that an estimate of a wingspan for Hatzegopteryx was conservative, "provided that its humerus was longer than that of Q. northropi". In 2010, Mark Witton and Michael Habib concluded that Hatzegopteryx was probably no larger than Q. northropi in wingspan, noting that the initial estimate did not account for distortion of the bone. The wingspan of Q. northropi is generally estimated at . It has been suggested (on the basis of the wide and robust neck vertebra referred to Hatzegopteryx) that the entire vertebral column of the animal was similarly expanded, increasing its overall size. However, this is likely not true, since the neck vertebrae of large pterodactyloids generally tend to be wider and larger than the rest of the vertebrae. Although estimates of pterosaur size based on vertebrae alone are not particularly reliable, the size of this vertebra is consistent with an animal that measured in wingspan. Skull The skull of Hatzegopteryx was gigantic, with an estimated length of based on comparisons with Nyctosaurus and Anhanguera, making it one of the largest skulls among non-marine animals. The skull was broadened in the rear, being wide across the quadrate bones. While most pterosaur skulls are composed of gracile plates and struts, in Hatzegopteryx the skull bones are stout and robust, with large ridges indicating strong muscular attachments. In 2018, Mátyás Vremir concluded that Hatzegopteryx likely had a shorter and broader skull, the length of which he estimated at , and he also estimated its wingspan to be smaller than other estimates, at . The massive jaw bore a distinctive groove at its point of articulation (also seen in some other pterosaurs, including Pteranodon) that would have allowed the mouth to achieve a very wide gape. Unpublished remains attributed to Hatzegopteryx suggest that it had a proportionally short, deep beak, grouping it with the "blunt-beaked" azhdarchids rather than the "slender-beaked" azhdarchids, the latter containing Quetzalcoatlus sp. (now known as the species Q. lawsoni). Cervical vertebrae A large neck vertebra attributed to Hatzegopteryx is short and unusually robust. The preserved portion measures long, with the entire vertebra likely measuring long in life. Pterosaurs had nine neck vertebrae. Regression indicates that the third to seventh cervical vertebrae would have collectively measured in length, with the longest vertebra, the fifth, only measuring approximately long. Meanwhile, the same vertebrae in the similarly giant Arambourgiania measured . This indicates that the neck of Hatzegopteryx was about 50–60% the length of what would be expected for a giant azhdarchid of its size. The bottom surface of the neck vertebra was also unusually thick, at . For most other giant azhdarchids, including Arambourgiania, this surface is less than thick. Although the neural spine of the vertebra is not completely preserved, the width of the preserved portion suggests that it was relatively tall and robust relative to those of other pterosaurs.
Other aspects of the vertebra converge most closely upon the seventh neck vertebra of the smaller Azhdarcho: the articulating sockets (cotyles) are much shallower than the neural arches and are four times as wide as they are tall; a process on the bottom of the vertebra, known as a hypapophysis, is present; the processes at the front of the vertebra, the prezygapophyses, are splayed; and the vertebra has a tapered "waist" in the middle of the centrum. Although the vertebra was initially identified as a third neck vertebra, these traits support its identification as coming from the rear of the neck, more specifically as the seventh vertebra. Classification Similarities between the humerus of Hatzegopteryx and Quetzalcoatlus northropi have been noted, as both of them have a long, smooth deltopectoral crest and a thickened humeral head. These were initially the basis of the taxon's referral to the clade Azhdarchidae, but they are also similar enough to be a basis for the synonymy of Hatzegopteryx and Quetzalcoatlus. However, this is likely due to the relatively non-diagnostic nature of the humerus in giant azhdarchid taxonomy and the lack of a detailed description of the elements of Q. northropi at the time of the assignment. The neck and jaw anatomy of Hatzegopteryx is, moreover, quite clearly distinct from that of the smaller Q. lawsoni, which warrants the retention of Hatzegopteryx as a taxon separate from Quetzalcoatlus. The neck vertebra referred to Hatzegopteryx sp. contains a number of traits that allow it to be definitely identified as that of an azhdarchid. The centrum is relatively low, the zygapophyses are large and flattened, and the preserved portions of the neural spine indicate that it is bifid, or split in half. A phylogenetic analysis conducted by paleontologist Nicholas Longrich and colleagues in 2018 recovered Hatzegopteryx in a derived (advanced) position within Azhdarchidae. This placement is corroborated by subsequent phylogenetic analyses by Brian Andres in 2021 and by Rodrigo Pêgas and colleagues in 2023. Both found Hatzegopteryx within the subfamily Quetzalcoatlinae, albeit in different positions. Andres found it in a clade with Arambourgiania and Quetzalcoatlus, while Pêgas and colleagues recovered it as the sister taxon to Albadraco, another pterosaur found in the Hațeg Basin. Their cladograms are shown below: Topology 1: Andres (2021). Topology 2: Pêgas and colleagues (2023). Paleobiology Bone structure While the skull of Hatzegopteryx was unusually large and robust, its wing bones are comparable to those of other flying pterosaurs, indicating that it was capable of flight. Buffetaut and colleagues suggested that, in order to fly, the skull weight of Hatzegopteryx must have been reduced in some way. The necessary weight reduction may have been accomplished by the internal structure of the skull bones, which were full of small pits and hollows (alveoli) up to long, separated by a matrix of thin bony struts (trabeculae). The wing bones also bear a similar internal structure. This unusual construction differs from that of other pterosaurs, and more closely resembles the structure of expanded polystyrene (the material used to manufacture Styrofoam). This would have made the skull sturdy and stress-resistant, but also lightweight, enabling the animal to fly. A similar internal structure is also seen in the cervical vertebra referred to Hatzegopteryx.
Neck biomechanics As a consequence of its robust, thick-walled vertebrae, the neck of Hatzegopteryx was much stronger than that of Arambourgiania. This can be quantified using relative failure force, the bone failure force of a vertebra divided by the body weight of the pterosaur it belongs to, estimated at for Arambourgiania and Hatzegopteryx. While Arambourgiania's neck vertebrae fail at about half of its body weight, the posterior neck vertebrae of Hatzegopteryx can withstand anywhere between five and ten body weights, depending on the loading of the bone. Even the hypothetically longer anterior neck vertebrae of Hatzegopteryx would be able to withstand four to seven body weights. Although the centrum of Hatzegopteryx is much more robust than that of Arambourgiania, their ratios of bone radius to bone thickness (R/t) are roughly the same (9.45 for Hatzegopteryx and 9.9 for Arambourgiania). This may represent a compromise between bending strength and buckling strength: higher R/t ratios lead to improved bending strength, but weaker buckling strength. To compensate, Hatzegopteryx shows a number of other adaptations that improve buckling strength, namely the distinctive internal structure of its bones and the large articular joints of the vertebrae, the latter of which help to distribute stress. In order to support the robust head, the neck of Hatzegopteryx was likely strongly muscled. On the occipital bones, the nuchal lines, which serve as muscular attachments, are very well-developed and bear prominent scarring. These conceivably supported the transversospinalis muscles, which aid in extension and flexion of the head and neck. Likewise, the opisthotic process, neural spines, and zygapophyses all appear to have been large and robust (with the latter bearing many pits and edges that likely represent muscle scars), and the basioccipital tuberosities were long. These all serve as points of attachment for various muscles of the head and neck. Although not entirely unmuscled, the neck of Arambourgiania probably would not have been as extensively muscled as that of Hatzegopteryx. Paleoecology Like all azhdarchid pterosaurs, Hatzegopteryx was probably a terrestrially foraging generalist predator. It is significantly larger than any other terrestrial predator known from Maastrichtian Europe. Because of its large size in an environment otherwise dominated by island-dwarf dinosaurs, with no large hypercarnivorous theropods in the region, it has been suggested that Hatzegopteryx played the role of an apex predator in the Hațeg Island ecosystem. The robust anatomy of Hatzegopteryx suggests that it may have tackled larger prey than other azhdarchids, including animals too large to swallow whole. Meanwhile, other giant azhdarchids, like Arambourgiania, would probably have instead fed on small prey (up to the size of a human), including hatchling or small dinosaurs and eggs. Another pterosaur, Thalassodromeus, has similarly been suggested to be raptorial. Apart from Hatzegopteryx, there are various other unusual denizens of the Hațeg Island ecosystem. Co-occurring pterosaurs included the small azhdarchid Eurazhdarcho, with a wingspan of , an unnamed, small-sized short-necked azhdarchid with a wingspan of , and a somewhat larger and likewise unnamed azhdarchid, with a wingspan of ; apparently small pteranodontids have been found as well.
The robust, flightless, and possibly herbivorous avialan or dromaeosaurid Balaur, which had two enlarged claws on each foot, represents another highly specialized component of the fauna. The ecosystem contained a number of insular dwarfs, namely the titanosaurs Magyarosaurus and Paludititan, the hadrosaurid Telmatosaurus, and the iguanodontian Zalmoxes. Along with the nodosaurid Struthiosaurus, various small, fragmentary maniraptorans were present, including Bradycneme, Elopteryx, and Heptasteornis. Crocodilian remains, belonging to the genera Allodaposuchus, Doratodon, and Acynodon, have also been found. Non-archosaurian components include the kogaionid multituberculate mammals Kogaionon, Barbatodon, Litovoi tholocephalos, and Hainina, lizards such as the teiid Bicuspidon and the paramacellodid Becklesius, an unnamed madtsoiid snake, and the lissamphibians Albanerpeton, Eodiscoglossus, and Paradiscoglossus. The importance of this fauna is a major geological justification for the designation of the area from 2004 to 2005 as Hațeg Country Dinosaurs Geopark, one of the earliest members of the European Geoparks Network, and (when the designation of UNESCO Global Geoparks was ratified in 2015) as Haţeg UNESCO Global Geopark. During the Maastrichtian, southern Europe was an archipelago. The members of the Hațeg Island ecosystem lived on a landmass known as the Tisia–Dacia Block, of which the Hațeg Basin was a small part. This landmass was about in area, and was separated from other terrestrial terrains in all directions by stretches of deep ocean. Located at 27°N, the island lay farther south than the region's present-day latitude of 45°N. As such, the climate was likely subtropical, with distinct dry and wet seasons, and had an average temperature of about . The environment consisted of various alluvial plains, wetlands, and rivers, surrounded by woodlands dominated by ferns and angiosperms. Paleosols indicate a relatively dry Cretaceous climate, with an annual precipitation of less than .
Biology and health sciences
Pterosaurs
Animals
8805625
https://en.wikipedia.org/wiki/Galaxy%20merger
Galaxy merger
Galaxy mergers can occur when two (or more) galaxies collide. They are the most violent type of galaxy interaction. The gravitational interactions between galaxies and the friction between the gas and dust have major effects on the galaxies involved, but the exact effects of such mergers depend on a wide variety of parameters such as collision angles, speeds, and relative size/composition, and are currently an extremely active area of research. Galaxy mergers are important because the merger rate is a fundamental measurement of galaxy evolution and also provides astronomers with clues about how galaxies grew into their current forms over long stretches of time. Description During the merger, stars and dark matter in each galaxy become affected by the approaching galaxy. Toward the late stages of the merger, the gravitational potential begins changing so quickly that star orbits are greatly altered, and lose any trace of their prior orbit. This process is called “violent relaxation”. For example, when two disk galaxies collide they begin with their stars in an orderly rotation in the planes of the two separate disks. During the merger, that ordered motion is transformed into random energy (“thermalized”). The resultant galaxy is dominated by stars that orbit the galaxy in a complicated and random interacting network of orbits, which is what is observed in elliptical galaxies. Mergers are also locations of extreme amounts of star formation. The star formation rate (SFR) during a major merger can reach thousands of solar masses worth of new stars each year, depending on the gas content of each galaxy and its redshift. Typical merger SFRs are less than 100 new solar masses per year. This is large compared to our Galaxy, which makes only a few new stars each year (~2 new stars). Though stars almost never get close enough to actually collide in galaxy mergers, giant molecular clouds rapidly fall to the center of the galaxy where they collide with other molecular clouds. These collisions then induce condensations of these clouds into new stars. We can see this phenomenon in merging galaxies in the nearby universe. Yet, this process was more pronounced during the mergers that formed most elliptical galaxies we see today, which likely occurred 1–10 billion years ago, when there was much more gas (and thus more molecular clouds) in galaxies. Also, away from the center of the galaxy, gas clouds will run into each other, producing shocks which stimulate the formation of new stars in gas clouds. The result of all this violence is that galaxies tend to have little gas available to form new stars after they merge. Thus if a galaxy is involved in a major merger, and then a few billion years pass, the galaxy will have very few young stars (see Stellar evolution) left. This is what we see in today's elliptical galaxies, very little molecular gas and very few young stars. It is thought that this is because elliptical galaxies are the end products of major mergers which use up the majority of gas during the merger, and thus further star formation after the merger is quenched. Galaxy mergers can be simulated in computers, to learn more about galaxy formation. Galaxy pairs initially of any morphological type can be followed, taking into account all gravitational forces, and also the hydrodynamics and dissipation of the interstellar gas, the star formation out of the gas, and the energy and mass released back in the interstellar medium by supernovae. 
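The paragraph above notes that mergers can be simulated numerically. As a toy illustration, nowhere near the full hydrodynamic treatments described, the sketch below integrates massless "star" particles around two point-mass "galaxies" with a leapfrog scheme, in the spirit of the restricted three-body experiments of Toomre & Toomre (1972). All units and parameters are arbitrary and invented for the example.

```python
import numpy as np

# Toy restricted-N-body merger: two point-mass "galaxies" plus massless
# "star" particles. G = 1; every parameter here is illustrative.
G, M1, M2 = 1.0, 1.0, 1.0
DT, STEPS, SOFT = 0.01, 2000, 0.05
rng = np.random.default_rng(0)

def disc(centre, vel, mass, n=200, rmax=1.0):
    """Stars on circular orbits in a flat disc around one galaxy."""
    r = rmax * np.sqrt(rng.uniform(0.1, 1.0, n))
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    pos = centre + np.column_stack([r * np.cos(phi), r * np.sin(phi)])
    v = np.sqrt(G * mass / r)  # circular speed about the host mass
    return pos, vel + np.column_stack([-v * np.sin(phi), v * np.cos(phi)])

def accel(pos, centres, masses):
    """Softened gravity from the two galaxy masses (self-force is zero)."""
    a = np.zeros_like(pos)
    for c, m in zip(centres, masses):
        d = c - pos
        a += G * m * d / (np.sum(d**2, axis=1) + SOFT**2)[:, None] ** 1.5
    return a

gal_pos = np.array([[-3.0, 0.5], [3.0, -0.5]])  # approaching pair
gal_vel = np.array([[0.4, 0.0], [-0.4, 0.0]])
p1, v1 = disc(gal_pos[0], gal_vel[0], M1)
p2, v2 = disc(gal_pos[1], gal_vel[1], M2)
stars_pos, stars_vel = np.vstack([p1, p2]), np.vstack([v1, v2])

for _ in range(STEPS):  # leapfrog (kick-drift-kick) integration
    stars_vel += 0.5 * DT * accel(stars_pos, gal_pos, [M1, M2])
    gal_vel += 0.5 * DT * accel(gal_pos, gal_pos, [M1, M2])
    stars_pos += DT * stars_vel
    gal_pos += DT * gal_vel
    stars_vel += 0.5 * DT * accel(stars_pos, gal_pos, [M1, M2])
    gal_vel += 0.5 * DT * accel(gal_pos, gal_pos, [M1, M2])
```

Plotting stars_pos at successive steps shows the discs being tidally stretched into the bridges and tails characteristic of observed interacting pairs, though a sketch like this omits the gas dynamics, star formation, and supernova feedback that the full simulations include.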
Such a library of galaxy merger simulations can be found on the GALMER website. A study led by Jennifer Lotz of the Space Telescope Science Institute in Baltimore, Maryland created computer simulations in order to better understand images taken by the Hubble Space Telescope. Lotz's team tried to account for a broad range of merger possibilities, from a pair of galaxies with equal masses joining to an interaction between a giant galaxy and a tiny one. The team also analyzed different orbits for the galaxies, possible collision impacts, and how galaxies were oriented to each other. In all, the group came up with 57 different merger scenarios and studied the mergers from 10 different viewing angles. One of the largest galaxy mergers ever observed consisted of four elliptical galaxies in the cluster CL0958+4702. It may form one of the largest galaxies in the Universe. Categories Galaxy mergers can be classified into distinct groups due to the properties of the merging galaxies, such as their number, their comparative size and their gas richness. By number Mergers can be categorized by the number of galaxies engaged in the process: Binary merger Two interacting galaxies merge. Multiple merger Three or more galaxies merge. By size Mergers can be categorized by the extent to which the largest involved galaxy is changed in size or form by the merger: Minor merger A merger is minor if one of the galaxies is significantly larger than the other(s). The larger galaxy will often "eat" the smaller - a phenomenon aptly named “galactic cannibalism” - absorbing most of its gas and stars with little other significant effect on the larger galaxy. Our home galaxy, the Milky Way, is thought to be currently absorbing several smaller galaxies in this fashion, such as the Canis Major Dwarf Galaxy, and possibly the Magellanic Clouds. The Virgo Stellar Stream is thought to be the remains of a dwarf galaxy that has been mostly merged with the Milky Way. Major merger A merger of two spiral galaxies that are approximately the same size is major; if they collide at appropriate angles and speeds, they will likely merge in a fashion that drives away much of the dust and gas through a variety of feedback mechanisms that often include a stage in which there are active galactic nuclei. This is thought to be the driving force behind many quasars. The result is an elliptical galaxy, and many astronomers hypothesize that this is the primary mechanism that creates ellipticals. One study found that large galaxies merged with each other on average once over the past 9 billion years. Small galaxies coalesced with large galaxies more frequently. Note that the Milky Way and the Andromeda Galaxy are predicted to collide in about 4.5 billion years. The expected result of these galaxies merging would be major as they have similar sizes, and will change from two "grand design" spiral galaxies to (probably) a giant elliptical galaxy. By gas richness Mergers can be categorized by the degree to which the gas (if any) carried within and around the merging galaxies interacts: Wet merger A wet merger is between gas-rich galaxies ("blue" galaxies). Wet mergers typically produce a large amount of star formation, transform disc galaxies into elliptical galaxies and trigger quasar activity. Dry merger A merger between gas-poor galaxies ("red" galaxies) is called dry. Dry mergers typically do not greatly change the galaxies' star formation rates, but can play an important role in increasing stellar mass. 
Damp merger A damp merger occurs between the same two galaxy types mentioned above ("blue" and "red" galaxies), if there is enough gas to fuel significant star formation but not enough to form globular clusters. Mixed merger A mixed merger occurs when gas-rich and gas-poor galaxies ("blue" and "red" galaxies) merge. Merger history trees In the standard cosmological model, any single galaxy is expected to have formed from a few or many successive mergers of dark matter haloes, in which gas cools and forms stars at the centres of the haloes, becoming the optically visible objects historically identified as galaxies during the twentieth century. Modelling the mathematical graph of the mergers of these dark matter haloes, and in turn, the corresponding star formation, was initially treated either by analysing purely gravitational N-body simulations or by using numerical realisations of statistical ("semi-analytical") formulae. At a 1992 observational cosmology conference in Milan, Roukema, Quinn and Peterson showed the first merger history trees of dark matter haloes extracted from cosmological N-body simulations. These merger history trees were combined with formulae for star formation rates and evolutionary population synthesis, yielding synthetic luminosity functions of galaxies (statistics of how many galaxies are intrinsically bright or faint) at different cosmological epochs. Given the complex dynamics of dark matter halo mergers, a fundamental problem in modelling merger history trees is to define when a halo at one time step is a descendant of a halo at the previous time step. Roukema's group chose to define this relation by requiring the halo at the later time step to contain strictly more than 50 percent of the particles in the halo at the earlier time step; this guaranteed that, between two time steps, any halo could have at most a single descendant (see the sketch below). This galaxy formation modelling method yields rapidly calculated models of galaxy populations with synthetic spectra and corresponding statistical properties comparable with observations. Independently, Lacey and Cole showed at the same 1992 conference how they used the Press–Schechter formalism combined with dynamical friction to statistically generate Monte Carlo realisations of dark matter halo merger history trees and the corresponding formation of the stellar cores (galaxies) of the haloes. Kauffmann, White and Guiderdoni extended this approach in 1993 to include semi-analytical formulae for gas cooling, star formation, gas reheating from supernovae, and the hypothesised conversion of disc galaxies into elliptical galaxies. Both the Kauffmann group and Okamoto and Nagashima later took up the N-body-simulation-derived merger history tree approach. Examples Some of the galaxies that are in the process of merging or are believed to have formed by merging are: Antennae Galaxies Mice Galaxies Centaurus A NGC 7318 Arp 273 Gallery
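The majority-particle descendant rule described above is easy to state in code. The following is a minimal sketch of that criterion, assuming haloes are represented as sets of particle IDs; the function name and data layout are invented for the example and do not come from any particular merger-tree code.

```python
def find_descendant(halo_particles: set, later_haloes: dict):
    """Return the ID of the descendant halo at the later time step, or None.

    A later halo is the descendant if it contains strictly more than 50%
    of the earlier halo's particles. Because two haloes cannot each hold
    a majority, this guarantees at most one descendant per halo.
    """
    threshold = len(halo_particles) / 2
    for halo_id, particles in later_haloes.items():
        if len(halo_particles & particles) > threshold:
            return halo_id
    return None  # halo dispersed, or no single later halo kept a majority

# Toy example: an earlier halo's particles split between two later haloes.
earlier = {1, 2, 3, 4, 5, 6, 7, 8}
later = {"X": {1, 2, 3, 4, 5, 9}, "Y": {6, 7, 8, 10}}
print(find_descendant(earlier, later))  # -> "X" (5 of 8 particles > 50%)
```

Applying this test between every pair of consecutive snapshots, and recording the links, yields the tree of halo mergers that the semi-analytical star-formation recipes are then hung on.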
Physical sciences
Basics_2
Astronomy
17429873
https://en.wikipedia.org/wiki/Focal%20mechanism
Focal mechanism
The focal mechanism of an earthquake describes the deformation in the source region that generates the seismic waves. In the case of a fault-related event, it refers to the orientation of the fault plane that slipped and the slip vector, and is also known as a fault-plane solution. Focal mechanisms are derived from a solution of the moment tensor for the earthquake, which itself is estimated by an analysis of observed seismic waveforms. The focal mechanism can be derived from observing the pattern of "first motions", that is, whether the first-arriving P waves break up or down. This method was used before waveforms were recorded and analysed digitally, and it is still used for earthquakes too small for easy moment tensor solution. Focal mechanisms are now mainly derived using semi-automatic analysis of the recorded waveforms. Moment tensor solutions The moment tensor solution is displayed graphically using a so-called beachball diagram. The pattern of energy radiated during an earthquake with a single direction of motion on a single fault plane may be modelled as a double couple, which is described mathematically as a special case of a second-order tensor (similar to those for stress and strain) known as the moment tensor. Earthquakes not caused by fault movement have quite different patterns of energy radiation. In the case of an underground nuclear explosion, for instance, the seismic moment tensor is isotropic, and this difference allows such explosions to be easily discriminated from earthquakes on the basis of their seismic response. This is an essential part of monitoring to distinguish between earthquakes and explosions under the Comprehensive Test Ban Treaty. Graphical representation ("beachball plot") The data for an earthquake is plotted using a lower-hemisphere stereographic projection. The azimuth and take-off angle are used to plot the position of an individual seismic record. The take-off angle is the angle from the vertical of a seismic ray as it emerges from the earthquake focus. These angles are calculated from a standard set of tables that describe the relationship between the take-off angle and the distance between the focus and the observing station. By convention, filled symbols plot data from stations where the P wave first motion recorded was up (a compressive wave), hollow symbols for down (a tensional wave), and crosses for stations with arrivals too weak to get a sense of motion. If there are sufficient observations, one may draw two well-constrained orthogonal great circles that divide the compressive from the tensional observations, and these are the nodal planes. Observations from stations with no clear first motion normally lie close to these planes. By convention, the compressional quadrants are colour-filled and the tensional quadrants are left white. The two nodal planes intersect at the N (neutral) axis. The P and T axes are also often plotted; with the N axis, these three directions respectively match the directions of the maximum, minimum, and intermediate principal compressive stresses associated with the earthquake. The P-axis is plotted in the centre of the white segment, and the T-axis in the centre of the colour-filled segment. The fault plane responsible for the earthquake will parallel one of the nodal planes; the other is called the auxiliary plane. It is impossible to determine solely from a focal mechanism which of the nodal planes is the fault plane. Other geological or geophysical evidence is needed to remove the ambiguity.
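The plotting convention just described, each station placed by its azimuth and take-off angle on a lower-hemisphere projection, reduces to a small coordinate transform. The sketch below is a minimal illustration using the equal-area (Schmidt) variant often chosen for beachball plots; an equal-angle (Wulff) projection would use tan(i/2) instead of sin(i/2), and the sample first-motion picks are invented.

```python
import math

def station_xy(azimuth_deg: float, takeoff_deg: float):
    """Map (azimuth, take-off angle) to lower-hemisphere equal-area coords.

    Azimuth is measured clockwise from north; the take-off angle is
    measured from the vertical at the focus. Rays leaving upward
    (take-off > 90 degrees) are reflected to the opposite azimuth so
    that everything plots on the lower hemisphere.
    """
    if takeoff_deg > 90.0:
        takeoff_deg = 180.0 - takeoff_deg
        azimuth_deg += 180.0
    r = math.sqrt(2.0) * math.sin(math.radians(takeoff_deg) / 2.0)  # 0..1
    az = math.radians(azimuth_deg)
    return r * math.sin(az), r * math.cos(az)  # x = east, y = north

# Invented first-motion picks: (azimuth, take-off angle, 'U'p or 'D'own).
picks = [(30, 60, "U"), (120, 45, "D"), (210, 70, "U"), (300, 50, "D")]
for az, toa, motion in picks:
    x, y = station_xy(az, toa)
    print(f"az={az:3d} toa={toa:2d} {motion}: x={x:+.2f}, y={y:+.2f}")
```

With enough picks plotted this way, the two great circles separating up from down first motions (the nodal planes) can be fitted, exactly as the text describes.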
The slip vector, the direction of motion of one side of the fault relative to the other, lies within the fault plane, 90 degrees from the N-axis. For example, in the 2004 Indian Ocean earthquake, the moment tensor solution gives two nodal planes, one dipping northeast at 6 degrees and one dipping southwest at 84 degrees. In this case, the earthquake can be confidently associated with the plane dipping shallowly to the northeast, as this is the orientation of the subducting slab as defined by historical earthquake locations and plate tectonic models. Fault plane solutions are useful for defining the style of faulting in seismogenic volumes at depth for which no surface expression of the fault plane exists, or where an ocean covers the fault trace. A simple example of a successful test of the hypothesis of sea floor spreading was the demonstration that the sense of motion along oceanic transform faults is opposite to what would be expected in a classical geologic interpretation of the offset oceanic ridges. This was done by constructing fault plane solutions of earthquakes in oceanic faults, which showed beachball plots of strike-slip nature (see figures), with one nodal plane parallel to the fault and the slip in the direction required by the idea of seafloor spreading from the ridges. Fault plane solutions also played a crucial role in discovering that the deep earthquake zones in some subducting slabs are under compression while others are under tension. Beach ball calculator There are several programs available for preparing focal mechanism solutions (FMS). BBC, a MATLAB-based toolbox, is available for preparing the beachball diagrams. This software plots the first-motion polarity data as recorded at different stations. Compressional and dilatational arrivals are separated manually with the mouse, and a final diagram is prepared automatically.
Physical sciences
Seismology
Earth science
53879
https://en.wikipedia.org/wiki/Orogeny
Orogeny
Orogeny () is a mountain-building process that takes place at a convergent plate margin when plate motion compresses the margin. An orogenic belt, or orogen, develops as the compressed plate crumples and is uplifted to form one or more mountain ranges. This involves a series of geological processes collectively called orogenesis. These include both structural deformation of existing continental crust and the creation of new continental crust through volcanism. Magma rising in the orogen carries less dense material upwards while leaving more dense material behind, resulting in compositional differentiation of Earth's lithosphere (crust and uppermost mantle). A synorogenic (or synkinematic) process or event is one that occurs during an orogeny. The word orogeny comes from Ancient Greek óros ('mountain') and génesis ('creation, origin'). Although it was used before him, the American geologist G. K. Gilbert used the term in 1890 to mean the process of mountain-building, as distinguished from epeirogeny. Tectonics Orogeny takes place on the convergent margins of continents. The convergence may take the form of subduction (where a continent rides forcefully over an oceanic plate to form a noncollisional orogeny) or continental collision (convergence of two or more continents to form a collisional orogeny). Orogeny typically produces orogenic belts or orogens, which are elongated regions of deformation bordering continental cratons (the stable interiors of continents). Young orogenic belts, in which subduction is still taking place, are characterized by frequent volcanic activity and earthquakes. Older orogenic belts are typically deeply eroded to expose displaced and deformed strata. These are often highly metamorphosed and include vast bodies of intrusive igneous rock called batholiths. Subduction zones consume oceanic crust, thicken lithosphere, and produce earthquakes and volcanoes. Not all subduction zones produce orogenic belts; mountain building takes place only when the subduction produces compression in the overriding plate. Whether subduction produces compression depends on such factors as the rate of plate convergence and the degree of coupling between the two plates, while the degree of coupling may in turn rely on such factors as the angle of subduction and the rate of sedimentation in the oceanic trench associated with the subduction zone. The Andes Mountains are an example of a noncollisional orogenic belt, and such belts are sometimes called Andean-type orogens. As subduction continues, island arcs, continental fragments, and oceanic material may gradually accrete onto the continental margin. This is one of the main mechanisms by which continents have grown. An orogen built of crustal fragments (terranes) accreted over a long period of time, without any indication of a major continent-continent collision, is called an accretionary orogen. The North American Cordillera and the Lachlan Orogen of southeast Australia are examples of accretionary orogens. The orogeny may culminate with continental crust from the opposite side of the subducting oceanic plate arriving at the subduction zone. This ends subduction and transforms the accretionary orogen into a Himalayan-type collisional orogen. The collisional orogeny may produce extremely high mountains, as has been taking place in the Himalayas for the last 65 million years. The processes of orogeny can take tens of millions of years and build mountains from what were once sedimentary basins. Activity along an orogenic belt can be extremely long-lived.
For example, much of the basement underlying the United States belongs to the Transcontinental Proterozoic Provinces, which accreted to Laurentia (the ancient heart of North America) over the course of 200 million years in the Paleoproterozoic. The Yavapai and Mazatzal orogenies were peaks of orogenic activity during this time. These were part of an extended period of orogenic activity that included the Picuris orogeny and culminated in the Grenville orogeny, lasting at least 600 million years. A similar sequence of orogenies has taken place on the west coast of North America, beginning in the late Devonian (about 380 million years ago) with the Antler orogeny and continuing with the Sonoma orogeny and Sevier orogeny and culminating with the Laramide orogeny. The Laramide orogeny alone lasted 40 million years, from 75 million to 35 million years ago. Intraplate orogeny Stresses transmitted from plate boundaries can also lead to episodes of intracontinental transpressional orogeny. Examples in Australia include the Neoproterozoic Petermann Orogeny (630-520 Ma), and the Sprigg Orogeny (Miocene – present). Orogens Orogens show a great range of characteristics, but they may be broadly divided into collisional orogens and noncollisional orogens (Andean-type orogens). Collisional orogens can be further divided by whether the collision is with a second continent or a continental fragment or island arc. Repeated collisions of the later type, with no evidence of collision with a major continent or closure of an ocean basin, result in an accretionary orogen. Examples of orogens arising from collision of an island arc with a continent include Taiwan and the collision of Australia with the Banda arc. Orogens arising from continent-continent collisions can be divided into those involving ocean closure (Himalayan-type orogens) and those involving glancing collisions with no ocean basin closure (as is taking place today in the Southern Alps of New Zealand). Orogens have a characteristic structure, though this shows considerable variation. A foreland basin forms ahead of the orogen due mainly to loading and resulting flexure of the lithosphere by the developing mountain belt. A typical foreland basin is subdivided into a wedge-top basin above the active orogenic wedge, the foredeep immediately beyond the active front, a forebulge high of flexural origin and a back-bulge area beyond, although not all of these are present in all foreland-basin systems. The basin migrates with the orogenic front and early deposited foreland basin sediments become progressively involved in folding and thrusting. Sediments deposited in the foreland basin are mainly derived from the erosion of the actively uplifting rocks of the mountain range, although some sediments derive from the foreland. The fill of many such basins shows a change in time from deepwater marine (flysch-style) through shallow water to continental (molasse-style) sediments. While active orogens are found on the margins of present-day continents, older inactive orogenies, such as the Algoman, Penokean and Antler, are represented by deformed and metamorphosed rocks with sedimentary basins further inland. Orogenic cycle Long before the acceptance of plate tectonics, geologists had found evidence within many orogens of repeated cycles of deposition, deformation, crustal thickening and mountain building, and crustal thinning to form new depositional basins. These were named orogenic cycles, and various theories were proposed to explain them. 
Canadian geologist Tuzo Wilson first put forward a plate tectonic interpretation of orogenic cycles, now known as Wilson cycles. Wilson proposed that orogenic cycles represented the periodic opening and closing of an ocean basin, with each stage of the process leaving its characteristic record on the rocks of the orogen. Continental rifting The Wilson cycle begins when previously stable continental crust comes under tension from a shift in mantle convection. Continental rifting takes place, which thins the crust and creates basins in which sediments accumulate. As the basins deepen, the ocean invades the rift zone, and as the continental crust rifts completely apart, shallow marine sedimentation gives way to deep marine sedimentation on the thinned marginal crust of the two continents. Seafloor spreading As the two continents rift apart, seafloor spreading commences along the axis of a new ocean basin. Deep marine sediments continue to accumulate along the thinned continental margins, which are now passive margins. Subduction At some point, subduction is initiated along one or both of the continental margins of the ocean basin, producing a volcanic arc and possibly an Andean-type orogen along that continental margin. This produces deformation of the continental margins and possibly crustal thickening and mountain building. Mountain building Mountain formation in orogens is largely a result of crustal thickening. The compressive forces produced by plate convergence result in pervasive deformation of the crust of the continental margin (thrust tectonics). This takes the form of folding of the ductile deeper crust and thrust faulting in the upper brittle crust. Crustal thickening raises mountains through the principle of isostasy (see the worked example below). Isostasy is the balance of the downward gravitational force upon an upthrust mountain range (composed of light, continental crust material) and the buoyant upward forces exerted by the dense underlying mantle. Portions of orogens can also experience uplift as a result of delamination of the orogenic lithosphere, in which an unstable portion of cold lithospheric root drips down into the asthenospheric mantle, decreasing the density of the lithosphere and causing buoyant uplift. An example is the Sierra Nevada in California. This range of fault-block mountains experienced renewed uplift and abundant magmatism after a delamination of the orogenic root beneath them. Mount Rundle on the Trans-Canada Highway between Banff and Canmore provides a classic example of a mountain cut in dipping-layered rocks. Millions of years ago a collision caused an orogeny, forcing horizontal layers of an ancient ocean crust to be thrust up at an angle of 50–60°. That left Rundle with one sweeping, tree-lined smooth face, and one sharp, steep face where the edges of the uplifted layers are exposed. Although mountain building mostly takes place in orogens, a number of secondary mechanisms are capable of producing substantial mountain ranges. Areas that are rifting apart, such as mid-ocean ridges and the East African Rift, have mountains due to thermal buoyancy related to the hot mantle underneath them; this thermal buoyancy is known as dynamic topography. In strike-slip orogens, such as the San Andreas Fault, restraining bends result in regions of localized crustal shortening and mountain building without a plate-margin-wide orogeny.
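The isostasy principle invoked above can be made concrete with the classic Airy root calculation: a mountain range of height h built of crust of density ρc, floating on mantle of density ρm, requires a compensating crustal root of thickness r = h·ρc/(ρm − ρc). The sketch below uses commonly quoted illustrative densities, which are assumptions rather than values from the text.

```python
def airy_root(height_km: float, rho_crust: float = 2800.0,
              rho_mantle: float = 3300.0) -> float:
    """Thickness of the compensating crustal root under Airy isostasy.

    Balances the weight of the topographic load against the buoyancy of
    light crust displacing denser mantle: r = h * rho_c / (rho_m - rho_c).
    Densities are illustrative values in kg/m^3.
    """
    return height_km * rho_crust / (rho_mantle - rho_crust)

# A 5 km high range needs roughly a 28 km deep root: mountains are
# largely "icebergs" of thickened crust, as the text describes.
print(f"root = {airy_root(5.0):.0f} km")
```

The same balance run in reverse explains the unroofing described later in this section: as erosion removes the load, the buoyant root rebounds and deeply buried rocks rise toward the surface.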
Hotspot volcanism results in the formation of isolated mountains and mountain chains that are not necessarily located on present tectonic-plate boundaries, though they are still essentially a product of plate tectonism. Likewise, uplift and erosion related to epeirogenesis (large-scale vertical motions of portions of continents without much associated folding, metamorphism, or deformation) can create local topographic highs. Closure of the ocean basin Eventually, seafloor spreading in the ocean basin comes to a halt, and continued subduction begins to close the ocean basin. Continental collision and orogeny The closure of the ocean basin ends with a continental collision and the associated Himalayan-type orogen. Erosion Erosion represents the final phase of the orogenic cycle. Erosion of overlying strata in orogenic belts, and isostatic adjustment to the removal of this overlying mass of rock, can bring deeply buried strata to the surface. The erosional process is called unroofing. Erosion inevitably removes much of the mountains, exposing the core or mountain roots (metamorphic rocks brought to the surface from a depth of several kilometres). Isostatic movements may help such unroofing by balancing out the buoyancy of the evolving orogen. Scholars debate about the extent to which erosion modifies the patterns of tectonic deformation (see erosion and tectonics). Thus, the final form of the majority of old orogenic belts is a long arcuate strip of crystalline metamorphic rocks sequentially below younger sediments which are thrust atop them and which dip away from the orogenic core. An orogen may be almost completely eroded away, and only recognizable by studying (old) rocks that bear traces of orogenesis. Orogens are usually long, thin, arcuate tracts of rock that have a pronounced linear structure resulting in terranes or blocks of deformed rocks, separated generally by suture zones or dipping thrust faults. These thrust faults carry relatively thin slices of rock (which are called nappes or thrust sheets, and differ from tectonic plates) from the core of the shortening orogen out toward the margins, and are intimately associated with folds and the development of metamorphism. History of the concept Before the development of geologic concepts during the 19th century, the presence of marine fossils in mountains was explained in Christian contexts as a result of the Biblical Deluge. This was an extension of Neoplatonic thought, which influenced early Christian writers. The 13th-century Dominican scholar Albert the Great posited that, as erosion was known to occur, there must be some process whereby new mountains and other land-forms were thrust up, or else there would eventually be no land; he suggested that marine fossils in mountainsides must once have been at the sea-floor. The term orogeny was used by Amanz Gressly (1840) and Jules Thurmann (1854) as orogenic in terms of the creation of mountain elevations, as the term mountain building was still used to describe the processes. Elie de Beaumont (1852) used the evocative "Jaws of a Vise" theory to explain orogeny, but was more concerned with the height rather than the implicit structures created by and contained in orogenic belts. His theory essentially held that mountains were created by the squeezing of certain rocks. Eduard Suess (1875) recognised the importance of horizontal movement of rocks.
The concept of a precursor geosyncline, or initial downward warping of the solid earth (Hall, 1859), prompted James Dwight Dana (1873) to include the concept of compression in the theories surrounding mountain-building. With hindsight, we can discount Dana's conjecture that this contraction was due to the cooling of the Earth (the so-called cooling Earth theory). The cooling Earth theory was the chief paradigm for most geologists until the 1960s. It was, in the context of orogeny, fiercely contested by proponents of vertical movements in the crust, or convection within the asthenosphere or mantle. Gustav Steinmann (1906) recognised different classes of orogenic belts, including the Alpine-type orogenic belt, typified by a flysch and molasse geometry of the sediments, ophiolite sequences, tholeiitic basalts, and a nappe-style fold structure. In terms of recognising orogeny as an event, Leopold von Buch (1855) recognised that orogenies could be placed in time by bracketing between the youngest deformed rock and the oldest undeformed rock, a principle which is still in use today, though commonly investigated by geochronology using radiometric dating. Based on available observations from the metamorphic differences in orogenic belts of Europe and North America, H. J. Zwart (1967) proposed three types of orogens in relationship to tectonic setting and style: Cordillerotype, Alpinotype, and Hercynotype. His proposal was revised by W. S. Pitcher in 1979 in terms of the relationship to granite occurrences. Cawood et al. (2009) categorized orogenic belts into three types: accretionary, collisional, and intracratonic. Both accretionary and collisional orogens develop at convergent plate margins. In contrast, Hercynotype orogens generally show similar features to intracratonic, intracontinental, extensional, and ultrahot orogens, all of which developed in continental detachment systems at convergent plate margins. Accretionary orogens are produced by subduction of an oceanic plate beneath a continental plate, with associated arc volcanism. They are dominated by calc-alkaline igneous rocks and high-T/low-P metamorphic facies series at high thermal gradients of >30 °C/km. There is a general lack of ophiolites, migmatites and abyssal sediments. Typical examples are all circum-Pacific orogens containing continental arcs. Collisional orogens are produced by subduction of one continental block beneath another, in the absence of arc volcanism. They are typified by the occurrence of blueschist- to eclogite-facies metamorphic zones, indicating high-P/low-T metamorphism at low thermal gradients of <10 °C/km. Orogenic peridotites are present but volumetrically minor, and syn-collisional granites and migmatites are also rare or of only minor extent. Typical examples are the Alps–Himalaya orogens on the southern margin of the Eurasian continent and the Dabie–Sulu orogens in east-central China.
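The metamorphic thermal gradients quoted above (above ~30 °C/km for accretionary orogens, below ~10 °C/km for collisional ones) amount to a rough diagnostic. A minimal sketch follows; labelling the intermediate band as indeterminate is a convention adopted here for the example, not a classification from the source.

```python
def orogen_type_from_gradient(gradient_c_per_km: float) -> str:
    """Rough orogen diagnosis from a metamorphic thermal gradient.

    Thresholds follow the text: high-T/low-P facies series above
    ~30 C/km suggest an accretionary orogen; gradients below ~10 C/km
    (blueschist-eclogite facies) suggest a collisional orogen.
    """
    if gradient_c_per_km > 30.0:
        return "accretionary (high-T/low-P, arc-related)"
    if gradient_c_per_km < 10.0:
        return "collisional (high-P/low-T, blueschist-eclogite)"
    return "indeterminate from gradient alone"

print(orogen_type_from_gradient(35.0))  # circum-Pacific-style arc orogen
print(orogen_type_from_gradient(8.0))   # Alps-Himalaya-style collision
```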
Physical sciences
Tectonics
Earth science
53916
https://en.wikipedia.org/wiki/Herbicide
Herbicide
Herbicides, also commonly known as weed killers, are substances used to control undesired plants, also known as weeds. Selective herbicides control specific weed species while leaving the desired crop relatively unharmed, whereas non-selective herbicides (sometimes called "total weed killers") kill plants indiscriminately. The combined effects of herbicides, nitrogen fertilizer, and improved cultivars have increased yields (per acre) of major crops by three to six times from 1900 to 2000. In the United States in 2012, about 91% of all herbicide usage, determined by weight applied, was in agriculture. In 2012, world pesticide expenditures totaled nearly $24.7 billion; herbicides were about 44% of those sales and constituted the biggest portion, followed by insecticides, fungicides, and fumigants. Herbicides are also used in forestry, where certain formulations have been found to suppress hardwood varieties in favor of conifers after clearcutting, and in pasture systems.
History
Prior to the widespread use of herbicides, cultural controls, such as altering soil pH, salinity, or fertility levels, were used to control weeds. Mechanical controls, including tillage and flooding, were also used. In the late 19th and early 20th centuries, inorganic chemicals such as sulfuric acid, arsenic, copper salts, kerosene, and sodium chlorate were used against weeds, but these chemicals were toxic, flammable, or corrosive, and they proved expensive and ineffective at controlling weeds.
First herbicides
The major breakthroughs occurred during the Second World War as the result of research conducted independently in the United Kingdom and the United States into the potential use of herbicides in war. The compound 2,4-D was first synthesized by W. G. Templeman at Imperial Chemical Industries. In 1940, his work with indoleacetic acid and naphthaleneacetic acid indicated that "growth substances applied appropriately would kill certain broad-leaved weeds in cereals without harming the crops," though these substances were too expensive and too short-lived in soil, owing to degradation by microorganisms, to be of practical agricultural use. By 1941, his team had succeeded in synthesizing a wide range of chemicals that achieved the same effect at lower cost and with better efficacy, including 2,4-D. In the same year, R. Pokorny in the US achieved this as well. Independently, a team under Juda Hirsch Quastel, working at the Rothamsted Experimental Station, made the same discovery. Quastel was tasked by the Agricultural Research Council (ARC) with discovering methods for improving crop yield. By analyzing soil as a dynamic system, rather than an inert substance, he was able to apply techniques such as perfusion. Quastel was able to quantify the influence of various plant hormones, inhibitors, and other chemicals on the activity of microorganisms in the soil and assess their direct impact on plant growth. While the full work of the unit remained secret, certain discoveries were developed for commercial use after the war, including the 2,4-D compound. When 2,4-D was commercially released in 1946, it became the first successful selective herbicide, triggering a worldwide revolution in agricultural output. It allowed for greatly enhanced weed control in wheat, maize (corn), rice, and similar cereal grass crops, because it kills dicots (broadleaf plants) but not most monocots (grasses). The low cost of 2,4-D has led to continued usage today, and it remains one of the most commonly used herbicides in the world.
Like other acid herbicides, current formulations use either an amine salt (often trimethylamine) or one of many esters of the parent compound.
Further discoveries
The triazine family of herbicides, which includes atrazine, was introduced in the 1950s; it has the current distinction of being the herbicide family of greatest concern regarding groundwater contamination. Atrazine does not break down readily (within a few weeks) after being applied to soils of above-neutral pH. Under alkaline soil conditions, atrazine may be carried into the soil profile as far as the water table by soil water following rainfall, causing the aforementioned contamination. Atrazine is thus said to have "carryover", a generally undesirable property for herbicides. Glyphosate was first prepared in the 1950s, but its herbicidal activity was only recognized in the 1960s; it was marketed as Roundup in 1971. Following the development of glyphosate-resistant crop plants, it is now used very extensively for selective weed control in growing crops. The pairing of the herbicide with the resistant seed contributed to the consolidation of the seed and chemistry industry in the late 1990s. Many modern herbicides used in agriculture and gardening are specifically formulated to degrade within a short period after application.
Terminology
Herbicides can be classified or grouped in various ways; for example, according to their activity, timing of application, method of application, mechanism of action, and chemical structure.
Selectivity
The chemical structure of the herbicide is a primary factor affecting efficacy. 2,4-D, mecoprop, and dicamba control many broadleaf weeds but remain largely ineffective against turf grasses. Chemical additives influence selectivity. Surfactants alter the physical properties of the spray solution and the overall phytotoxicity of the herbicide, increasing translocation. Herbicide safeners enhance selectivity by boosting the crop's tolerance of the herbicide while still allowing the herbicide to damage the weed. Selectivity is also determined by the circumstances and technique of application. Climatic factors affecting absorption include humidity, light, precipitation, and temperature. Foliage-applied herbicides enter the leaf more readily at high humidity, which lengthens the drying time of the spray droplet and increases cuticle hydration. Light of high intensity may break down some herbicides and cause the leaf cuticle to thicken, which can interfere with absorption. Precipitation may wash away or remove some foliage-applied herbicides, but it will increase root absorption of soil-applied herbicides. Drought-stressed plants are less likely to translocate herbicides. As temperature increases, herbicide performance may decrease, and absorption and translocation may be reduced in very cold weather.
Non-selective herbicides
Non-selective herbicides, generally known as defoliants, are used to clear industrial sites, waste grounds, railways, and railway embankments. Paraquat, glufosinate, and glyphosate are non-selective herbicides.
Timing of application
Preplant: Preplant herbicides are nonselective herbicides applied to the soil before planting. Some preplant herbicides may be mechanically incorporated into the soil. The objective of incorporation is to prevent dissipation through photodecomposition and/or volatility. The herbicides kill weeds as they grow through the herbicide-treated zone. Volatile herbicides have to be incorporated into the soil before planting the pasture.
Crops grown in soil treated with a preplant herbicide include tomatoes, corn, soybeans, and strawberries. Soil fumigants like metam-sodium and dazomet are in use as preplant herbicides.
Preemergence: Preemergence herbicides are applied before the weed seedlings emerge through the soil surface. These herbicides do not prevent weeds from germinating, but they kill weeds as they grow through the herbicide-treated zone by affecting cell division in the emerging seedling. Dithiopyr and pendimethalin are preemergence herbicides. Weeds that have already emerged before application or activation are not affected by preemergence herbicides, as their primary growing point escapes the treatment.
Postemergence: These herbicides are applied after weed seedlings have emerged through the soil surface. They can be foliar- or root-absorbed, selective or nonselective, and contact or systemic. Application of these herbicides is avoided during rain, since being washed off the foliage makes them ineffective. 2,4-D is a selective, systemic, foliar-absorbed postemergence herbicide.
Method of application
Soil applied: Herbicides applied to the soil are usually taken up by the root or shoot of the emerging seedlings and are used as preplant or preemergence treatments. Several factors influence the effectiveness of soil-applied herbicides. Weeds absorb herbicides by both passive and active mechanisms. Herbicide adsorption to soil colloids or organic matter often reduces the amount available for weed absorption. Positioning of the herbicide in the correct layer of soil is very important; this can be achieved mechanically or by rainfall. Herbicides on the soil surface are subjected to several processes that reduce their availability; volatility and photolysis are two common ones. Many soil-applied herbicides are absorbed through plant shoots while they are still underground, leading to their death or injury. EPTC and trifluralin are soil-applied herbicides.
Foliar applied: These are applied to the portion of the plant above the ground and are absorbed by exposed tissues. They are generally postemergence herbicides and can either be translocated (systemic) throughout the plant or remain at a specific site (contact). External barriers of plants, such as the cuticle, waxes, and cell walls, affect herbicide absorption and action. Glyphosate, 2,4-D, and dicamba are foliar-applied herbicides.
Persistence
A herbicide is described as having low residual activity if it is neutralized within a short time of application (within a few weeks or months), typically due to rainfall or reactions in the soil. A herbicide described as having high residual activity will remain potent in the soil for the long term. For some compounds, the residual activity can leave the ground almost permanently barren.
Mechanism of action
Herbicides interfere with the biochemical machinery that supports plant growth, often by mimicking natural plant hormones, enzyme substrates, or cofactors, thereby disrupting the metabolism of the target plants. Herbicides are often classified according to their site of action because, as a general rule, herbicides within the same site-of-action class produce similar symptoms on susceptible plants. Classification based on the site of action of the herbicide is preferable, as herbicide resistance management can then be handled more effectively.
Classification by mechanism of action (MOA) indicates the first enzyme, protein, or biochemical step affected in the plant following application:
ACCase inhibitors: Acetyl coenzyme A carboxylase (ACCase) is part of the first step of lipid synthesis, so ACCase inhibitors affect cell membrane production in the meristems of the grass plant. The ACCases of grasses are sensitive to these herbicides, whereas the ACCases of dicot plants are not.
ALS inhibitors: Acetolactate synthase (ALS; also known as acetohydroxyacid synthase, or AHAS) is part of the first step in the synthesis of the branched-chain amino acids (valine, leucine, and isoleucine). These herbicides slowly starve affected plants of these amino acids, which eventually leads to the inhibition of DNA synthesis. They affect grasses and dicots alike. The ALS inhibitor family includes various sulfonylureas (SUs) (such as flazasulfuron and metsulfuron-methyl), imidazolinones (IMIs), triazolopyrimidines (TPs), pyrimidinyl oxybenzoates (POBs), and sulfonylamino carbonyl triazolinones (SCTs). The ALS biological pathway exists only in plants and microorganisms (not in animals), making the ALS inhibitors among the safest herbicides.
EPSPS inhibitors: Enolpyruvylshikimate 3-phosphate synthase (EPSPS) is an enzyme used in the synthesis of the amino acids tryptophan, phenylalanine, and tyrosine. These herbicides affect grasses and dicots alike. Glyphosate (Roundup) is a systemic EPSPS inhibitor that is inactivated on contact with soil.
Auxin-like herbicides: The discovery of synthetic auxins inaugurated the era of organic herbicides. They were discovered in the 1940s after a long study of the plant growth regulator auxin, which these compounds mimic. They have several points of action on the cell membrane and are effective in the control of dicot plants. 2,4-D, 2,4,5-T, and aminopyralid are examples of synthetic auxin herbicides.
Photosystem II inhibitors: These reduce electron flow from water to NADP+ at the photochemical step in photosynthesis. They bind to the Qb site on the D1 protein and prevent quinone from binding to this site. This group of compounds therefore causes electrons to accumulate on chlorophyll molecules. As a consequence, oxidation reactions in excess of those normally tolerated by the cell occur, killing the plant. The triazine herbicides (including simazine, cyanazine, and atrazine) and urea derivatives (such as diuron) are photosystem II inhibitors. Other members of this class are chlorbromuron, pyrazon, isoproturon, bromacil, and terbacil.
Photosystem I inhibitors: These steal electrons from the normal pathway through FeS to ferredoxin (Fdx) to NADP+, leading to direct discharge of electrons onto oxygen. As a result, reactive oxygen species are produced and oxidation reactions in excess of those normally tolerated by the cell occur, leading to plant death. Bipyridinium herbicides (such as diquat and paraquat) inhibit the FeS-to-Fdx step of that chain, while diphenyl ether herbicides (such as nitrofen, nitrofluorfen, and acifluorfen) inhibit the Fdx-to-NADP+ step.
HPPD inhibitors: These inhibit 4-hydroxyphenylpyruvate dioxygenase, an enzyme involved in tyrosine breakdown. Tyrosine breakdown products are used by plants to make carotenoids, which protect chlorophyll from being destroyed by sunlight. When carotenoid synthesis fails, the plants turn white due to complete loss of chlorophyll, and they die.
Mesotrione and sulcotrione are herbicides in this class; a drug, nitisinone, was discovered in the course of developing this class of herbicides.
Complementary to mechanism-based classifications, herbicides are often classified according to their chemical structures or motifs, since similar structural types tend to work in similar ways. For example, the aryloxyphenoxypropionate herbicides (diclofop, chlorazifop, fluazifop) appear to all act as ACCase inhibitors. The so-called cyclohexanedione herbicides, which are used against grasses, include the commercial products cycloxydim, clethodim, tralkoxydim, butroxydim, sethoxydim, and profoxydim. Knowledge of a herbicide's chemical family grouping serves as a short-term strategy for managing resistance to a site of action. The phenoxyacetic acids mimic the natural auxin indoleacetic acid (IAA); this family includes MCPA, 2,4-D, and 2,4,5-T, and related synthetic auxins include picloram, dicamba, clopyralid, and triclopyr.
WSSA and HRAC classification
Using the Weed Science Society of America (WSSA) and Herbicide Resistance Action Committee (HRAC) systems, herbicides are classified by mode of action; the two organizations eventually developed a joint classification system. Groups in the WSSA and HRAC systems are designated by numbers and letters; they make users aware of a herbicide's mode of action and support more accurate recommendations for resistance management.
Use and application
Most herbicides are applied as water-based sprays using ground equipment. Ground equipment varies in design, but large areas can be sprayed using self-propelled sprayers equipped with long booms carrying spray nozzles spaced at regular intervals. Towed, handheld, and even horse-drawn sprayers are also used. On large areas, herbicides may also at times be applied aerially using helicopters or airplanes, or through irrigation systems (known as chemigation). Weed-wiping may also be used, where a wick wetted with herbicide is suspended from a boom and dragged or rolled across the tops of the taller weed plants. This allows treatment of taller grassland weeds by direct contact, without affecting related but desirable shorter plants in the grassland sward beneath. The method has the benefit of avoiding spray drift. In Wales, a scheme offering free weed-wiper hire was launched in 2015 in an effort to reduce the levels of MCPA in water courses. In forestry, the situation differs little in the early growth stages, when the similar heights of growing trees and annual crops yield a similar problem of weed competition. Unlike with annuals, however, application is mostly unnecessary thereafter and is thus mainly used to shorten the delay between productive economic cycles of lumber crops.
Misuse and misapplication
Herbicide volatilisation or spray drift may result in herbicide affecting neighboring fields or plants, particularly in windy conditions. Sometimes, the wrong field or plants may be sprayed due to error.
Use politically, militarily, and in conflict
Although herbicidal warfare uses chemical substances, its main purpose is to disrupt agricultural food production or to destroy plants which provide cover or concealment to the enemy. During the Malayan Emergency, British Commonwealth forces deployed herbicides and defoliants in the Malaysian countryside in order to deprive Malayan National Liberation Army (MNLA) insurgents of cover and potential sources of food, and to flush them out of the jungle.
Deployment of herbicides and defoliants served the dual purpose of thinning jungle trails to prevent ambushes and destroying crop fields in regions where the MNLA was active, to deprive them of potential sources of food. As part of this process, herbicides and defoliants were also sprayed from Royal Air Force aircraft. The use of herbicides as a chemical weapon by the U.S. military during the Vietnam War has left tangible, long-term impacts upon the Vietnamese people and the U.S. soldiers who handled the chemicals. More than 20% of South Vietnam's forests and 3.2% of its cultivated land were sprayed at least once during the war. The government of Vietnam says that up to four million people in Vietnam were exposed to the defoliant, and as many as three million people have suffered illness because of Agent Orange, while the Viet Nam Red Cross Society estimates that up to one million people were disabled or have health problems as a result of exposure to Agent Orange. The United States government has described these figures as unreliable.
Health and environmental effects
Human health
Many questions exist about herbicides' health and environmental effects, because of the many kinds of herbicide and the myriad potential targets, mostly unintended. For example, a 1995 panel of 13 scientists reviewing studies on the carcinogenicity of 2,4-D had divided opinions on the likelihood that 2,4-D causes cancer in humans. At the time, studies on phenoxy herbicides were too few to accurately assess the risk of many types of cancer from these herbicides, even though evidence was stronger that exposure to these herbicides is associated with increased risk of soft tissue sarcoma and non-Hodgkin lymphoma.
Toxicity
Herbicides have widely variable toxicity, both acute toxicity from short-term exposure and chronic toxicity from long-term environmental or occupational exposure. Much public suspicion of herbicides confuses valid statements of acute toxicity with equally valid statements of lack of chronic toxicity at the recommended levels of usage. For instance, while glyphosate formulations with tallowamine adjuvants are acutely toxic, their use was found to be uncorrelated with any health issues such as cancer in a large US Department of Health study that followed 90,000 members of farmer families over a period of 23 years. That is, the study shows a lack of chronic toxicity, but does not speak to the herbicide's acute toxicity.
Health effects
Some herbicides cause a range of health effects ranging from skin rashes to death. The pathway of attack can arise from intentional or unintentional direct consumption, improper application resulting in the herbicide coming into direct contact with people or wildlife, inhalation of aerial sprays, or food consumption prior to the labelled preharvest interval. Under some conditions, certain herbicides can be transported via leaching or surface runoff to contaminate groundwater or distant surface water sources. Generally, the conditions that promote herbicide transport include intense storm events (particularly shortly after application) and soils with limited capacity to adsorb or retain the herbicides. Herbicide properties that increase the likelihood of transport include persistence (resistance to degradation) and high water solubility.
Contamination
Cases have been reported in which phenoxy herbicides were contaminated with dioxins such as TCDD; research has suggested such contamination results in a small rise in cancer risk after occupational exposure to these herbicides.
Triazine exposure has been implicated in a likely relationship to increased risk of breast cancer, although a causal relationship remains unclear.
False claims
Herbicide manufacturers have at times made false or misleading claims about the safety of their products. Chemical manufacturer Monsanto Company agreed to change its advertising after pressure from New York attorney general Dennis Vacco; Vacco complained about misleading claims that its spray-on glyphosate-based herbicides, including Roundup, were safer than table salt and "practically non-toxic" to mammals, birds, and fish (though proof that this was ever said is hard to find). Roundup is toxic and has resulted in death after being ingested in quantities ranging from 85 to 200 ml, although it has also been ingested in quantities as large as 500 ml with only mild or moderate symptoms. The manufacturer of Tordon 101 (Dow AgroSciences, owned by the Dow Chemical Company) has claimed Tordon 101 has no effects on animals and insects, in spite of evidence of strong carcinogenic activity of the active ingredient, picloram, in studies on rats.
Ecological effects
Herbicide use generally has negative impacts on many aspects of the environment. Insects, non-targeted plants, animals, and aquatic systems are all subject to serious damage from herbicides, though the impacts are highly variable.
Aquatic life
Atrazine has often been blamed for affecting the reproductive behavior of aquatic life, but the data do not support this assertion.
Bird populations
Bird populations are one of many indicators of herbicide damage. Most observed effects are due not to toxicity but to habitat changes and decreases in the abundance of species on which birds rely for food or shelter. Herbicide use in silviculture, employed to favor certain types of growth following clearcutting, can cause significant drops in bird populations. Even when herbicides with low toxicity to birds are used, they decrease the abundance of many types of vegetation on which the birds rely. Herbicide use in agriculture in the UK has been linked to a decline in seed-eating bird species which rely on the weeds killed by the herbicides. Heavy use of herbicides in neotropical agricultural areas has been one of many factors implicated in limiting the usefulness of such agricultural land for wintering migratory birds.
Resistance
One major complication to the use of herbicides for weed control is the ability of plants to evolve herbicide resistance, rendering the herbicides ineffective against target plants. Out of 31 known herbicide modes of action, weeds have evolved resistance to 21, and 268 plant species are known to have evolved herbicide resistance at least once. Herbicide resistance was first observed in 1957 and has since evolved repeatedly in weed species from 30 families across the globe. Weed resistance to herbicides has become a major concern in crop production worldwide. Resistance to herbicides is often attributed to overuse as well as to the strong evolutionary pressure placed on the affected weeds. Three agricultural practices account for the evolutionary pressure upon weeds to evolve resistance: monoculture, neglect of non-herbicide weed control practices, and reliance on a single herbicide for weed control. To minimize resistance, rotational programs of herbicide application, in which herbicides with multiple modes of action are used, have been widely promoted. In particular, glyphosate resistance evolved rapidly in part because, when glyphosate use first began, it was continuously and heavily relied upon for weed control.
This created exceptionally strong selective pressure on weeds, encouraging mutations conferring glyphosate resistance to persist and spread. However, in 2015, an expansive study showed an increase in herbicide resistance as a result of rotation, and instead recommended mixing multiple herbicides for simultaneous application. As of 2023, the effectiveness of combining herbicides is also questioned, particularly in light of the rise of non-target-site resistance. Plants developed resistance to atrazine and to ALS inhibitors relatively early; more recently, glyphosate resistance has risen dramatically. Marestail is one weed that has developed glyphosate resistance. Glyphosate-resistant weeds are present in the vast majority of soybean, cotton, and corn farms in some U.S. states, and weeds able to resist multiple other herbicides are spreading. Few new herbicides are near commercialization, and none with a molecular mode of action for which there is no resistance. Because most herbicides cannot kill all weeds, farmers rotate crops and herbicides to slow the development of resistant weeds. A 2008–2009 survey of 144 populations of waterhemp in 41 Missouri counties revealed glyphosate resistance in 69% of them. Weeds collected from some 500 sites throughout Iowa in 2011 and 2012 revealed glyphosate resistance in approximately 64% of waterhemp samples. As of 2023, 58 weed species have developed glyphosate resistance. Weeds resistant to multiple herbicides with completely different biological modes of action are on the rise. In Missouri, 43% of waterhemp samples were resistant to two different herbicides, 6% resisted three, and 0.5% resisted four. In Iowa, 89% of waterhemp samples resist two or more herbicides, 25% resist three, and 10% resist five. As of 2023, Palmer amaranth with resistance to six different herbicide modes of action has emerged. Annual bluegrass collected from a golf course in the U.S. state of Tennessee was found in 2020 to be resistant to seven herbicides at once. Rigid ryegrass and annual bluegrass share the distinction of being the species with confirmed resistance to the largest number of herbicide modes of action, both with confirmed resistance to 12 different modes of action; however, this number counts how many forms of herbicide resistance are known to have emerged in the species at some point, not how many have been found simultaneously in a single plant. In 2015, Monsanto released crop seed varieties resistant to both dicamba and glyphosate, allowing the use of a greater variety of herbicides on fields without harming the crops. By 2020, five years after the release of dicamba-resistant seed, the first example of dicamba-resistant Palmer amaranth had been found in one location.
Evolutionary insights
When mutations occur in the genes responsible for the biological mechanisms that herbicides interfere with, these mutations may cause the herbicide's mode of action to work less effectively. This is called target-site resistance. Specific mutations with the most beneficial effect for the plant have been shown to occur in separate instances and to dominate throughout resistant weed populations, an example of convergent evolution. Some mutations conferring herbicide resistance may carry fitness costs, reducing the plant's ability to survive in other ways, but over time the least costly mutations tend to dominate in weed populations.
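The population dynamics just described can be made concrete with a toy model. The following Python sketch tracks a single resistance allele under repeated herbicide applications; the survival rates and fitness cost are illustrative assumptions of ours, not empirical values, but they show how even a very rare resistant variant comes to dominate under strong selection.

def resistant_fraction(generations: int,
                       p0: float = 1e-6,             # assumed initial resistant fraction
                       surv_resistant: float = 0.90,  # assumed survival of resistant plants under spraying
                       surv_susceptible: float = 0.05,
                       fitness_cost: float = 0.03) -> float:
    """Toy model: fraction of resistant plants after repeated herbicide use.

    Each generation, the herbicide kills susceptible plants at a much
    higher rate than resistant ones; the resistance allele also carries
    a small reproductive fitness cost. All rates are illustrative.
    """
    p = p0
    for _ in range(generations):
        r = p * surv_resistant * (1.0 - fitness_cost)  # surviving resistant share
        s = (1.0 - p) * surv_susceptible               # surviving susceptible share
        p = r / (r + s)                                # renormalise the population
    return p

for gen in (2, 4, 6, 8):
    print(f"after {gen} generations: {resistant_fraction(gen):.4f}")
# Under this strength of selection, the resistant fraction climbs from
# one in a million toward near fixation within about eight generations.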
Recently, instances of non-target-site resistance have increasingly emerged, such as cases in which plants produce enzymes that neutralize herbicides before they can enter the plant's cells (metabolic resistance). This form of resistance is particularly challenging, since plants can develop non-target-site resistance to herbicides their ancestors were never directly exposed to.
Biochemistry of resistance
Resistance to herbicides can be based on one of the following biochemical mechanisms:
Target-site resistance: The genetic change that causes the resistance directly alters the chemical mechanism the herbicide targets. The mutation may relate to an enzyme with a crucial function in a metabolic pathway, or to a component of an electron-transport system. For example, ALS-resistant weeds developed through genetic mutations leading to an altered enzyme. Such changes render the herbicide impotent. Target-site resistance may also be caused by over-expression of the target enzyme (via gene amplification or changes in a gene promoter). A related mechanism is that an adaptable enzyme such as cytochrome P450 is redesigned to neutralize the pesticide itself.
Non-target-site resistance: The genetic change giving resistance is not directly related to the target site, but causes the plant to be less susceptible by some other means. Such mechanisms include metabolic detoxification of the herbicide in the weed, reduced uptake and translocation, sequestration of the herbicide, and reduced penetration of the herbicide into the leaf surface. These mechanisms all cause less of the herbicide's active ingredient to reach the target site in the first place.
The following terms are also used to describe cases where plants are resistant to multiple herbicides at once:
Cross-resistance: A single resistance mechanism causes resistance to several herbicides. The term target-site cross-resistance is used when the herbicides bind to the same target site, whereas non-target-site cross-resistance is due to a single non-target-site mechanism (e.g., enhanced metabolic detoxification) that entails resistance across herbicides with different sites of action.
Multiple resistance: Two or more resistance mechanisms are present within individual plants, or within a plant population.
Resistance management
Because herbicide resistance is a major concern in agriculture, a number of products combine herbicides with different means of action. Integrated pest management may use herbicides alongside other pest control methods. The integrated weed management (IWM) approach uses several tactics to combat weeds and forestall resistance: by relying on diverse weed control methods, including non-herbicide methods, it reduces the selection pressure on weeds to evolve resistance. Researchers warn that if herbicide resistance is combatted only with more herbicides, "evolution will most likely win." In 2017, the USEPA issued a revised Pesticide Registration Notice (PRN 2017-1), which provides guidance to pesticide registrants on required pesticide-resistance-management labeling. This requirement applies to all conventional pesticides and is meant to provide end-users with guidance on managing pesticide resistance.
An example of a fully executed label compliant with the USEPA resistance-management labeling guidance can be seen on the specimen label for the herbicide cloransulam-methyl, updated in 2022. Optimising herbicide input to the economic threshold level should avoid unnecessary use of herbicides and reduce selection pressure. Herbicides should be used to their greatest potential by ensuring that the timing, dose, application method, and soil and climatic conditions are optimal for good activity. In the UK, partially resistant grass weeds such as Alopecurus myosuroides (blackgrass) and the Avena genus (wild oat) can often be controlled adequately when herbicides are applied at the 2–3 leaf stage, whereas later applications at the 2–3 tiller stage can fail badly. Patch spraying, or applying herbicide only to the badly infested areas of fields, is another means of reducing total herbicide use.
Approaches to treating resistant weeds
Alternative herbicides
When resistance is first suspected or confirmed, the efficacy of alternatives is likely to be the first consideration. If there is resistance to a single group of herbicides, then the use of herbicides from other groups may provide a simple and effective solution, at least in the short term. For example, many triazine-resistant weeds have been readily controlled by the use of alternative herbicides such as dicamba or glyphosate.
Mixtures and sequences
The use of two or more herbicides which have differing modes of action can reduce the selection for resistant genotypes. Ideally, each component in a mixture should:
Be active at different target sites
Have a high level of efficacy
Be detoxified by different biochemical pathways
Have similar persistence in the soil (if it is a residual herbicide)
Exert negative cross-resistance
Synergise the activity of the other component
No mixture is likely to have all these attributes, but the first two listed are the most important; a minimal check of the first criterion is sketched at the end of this section. There is a risk that mixtures will select for resistance to both components in the longer term. One practical advantage of sequences of two herbicides compared with mixtures is that a better appraisal of the efficacy of each herbicide component is possible, provided that sufficient time elapses between each application. A disadvantage of sequences is that two separate applications have to be made, and it is possible that the later application will be less effective on weeds surviving the first application. If these survivors are resistant, then the second herbicide in the sequence may increase selection for resistant individuals by killing the susceptible plants which were damaged but not killed by the first application, while allowing the larger, less affected, resistant plants to survive. This has been cited as one reason why ALS-resistant Stellaria media evolved in Scotland around 2000, despite the regular use of a sequence incorporating mecoprop, a herbicide with a different mode of action.
Natural herbicide
The term organic herbicide has come to mean herbicides intended for organic farming. Few natural herbicides rival the effectiveness of synthetics. Some plants also produce their own herbicides, such as the genus Juglans (walnuts) and the tree of heaven; such action of natural herbicides, and other related chemical interactions, is called allelopathy. The applicability of these agents is unclear.
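As a minimal illustration of the "different target sites" criterion for mixtures, the Python sketch below checks a proposed mixture against a small lookup table of site-of-action groups. The table entries follow the classes described in the mechanism-of-action section above; using plain-text group labels is our illustrative simplification, not the official WSSA/HRAC numbering.

# Illustrative site-of-action lookup, based on the classes described above.
SITE_OF_ACTION = {
    "glyphosate": "EPSPS inhibitor",
    "sethoxydim": "ACCase inhibitor",
    "metsulfuron-methyl": "ALS inhibitor",
    "atrazine": "Photosystem II inhibitor",
    "paraquat": "Photosystem I inhibitor",
    "2,4-D": "synthetic auxin",
    "mesotrione": "HPPD inhibitor",
}

def mixture_ok(components: list[str]) -> bool:
    """Return True if every component acts at a different target site."""
    groups = [SITE_OF_ACTION[c] for c in components]
    return len(set(groups)) == len(groups)

print(mixture_ok(["glyphosate", "atrazine"]))    # True: two distinct sites
print(mixture_ok(["atrazine", "paraquat"]))      # True: PSII vs PSI
print(mixture_ok(["sethoxydim", "sethoxydim"]))  # False: same site twice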
Farming practices and resistance: a case study
Herbicide resistance became a critical problem in Australian agriculture after many Australian sheep farmers began to grow wheat exclusively in their pastures in the 1970s. Introduced varieties of ryegrass, while good for grazing sheep, compete intensely with wheat. Ryegrasses produce so many seeds that, if left unchecked, they can completely choke a field. Herbicides provided excellent control while reducing soil disruption because of less need to plough. Within little more than a decade, however, ryegrass and other weeds began to develop resistance, and Australian farmers changed their methods in response. By 1983, patches of ryegrass had become immune to Hoegrass (diclofop-methyl), a member of a family of herbicides that inhibit an enzyme called acetyl coenzyme A carboxylase. Ryegrass populations were large and had substantial genetic diversity because farmers had planted many varieties. Ryegrass is cross-pollinated by wind, so genes shuffle frequently. To control its spread, farmers sprayed inexpensive Hoegrass repeatedly, creating selection pressure; in addition, farmers sometimes diluted the herbicide to save money, which allowed some plants to survive application. When resistance appeared, farmers turned to a group of herbicides that block acetolactate synthase. Once again, ryegrass in Australia evolved a kind of "cross-resistance" that allowed it to break down a variety of herbicides rapidly. Four classes of herbicides became ineffective within a few years. In 2013, only two herbicide classes, photosystem II inhibitors and long-chain fatty acid inhibitors, were still effective against ryegrass.
Technology
Pest and disease control
null
53933
https://en.wikipedia.org/wiki/Permittivity
Permittivity
In electromagnetism, the absolute permittivity, often simply called permittivity and denoted by the Greek letter ε (epsilon), is a measure of the electric polarizability of a dielectric material. A material with high permittivity polarizes more in response to an applied electric field than a material with low permittivity, thereby storing more energy in the material. In electrostatics, the permittivity plays an important role in determining the capacitance of a capacitor. In the simplest case, the electric displacement field D resulting from an applied electric field E is D = εE. More generally, the permittivity is a thermodynamic function of state. It can depend on the frequency, magnitude, and direction of the applied field. The SI unit for permittivity is farad per meter (F/m). The permittivity is often represented by the relative permittivity ε_r, which is the ratio of the absolute permittivity ε and the vacuum permittivity ε₀: ε_r = ε/ε₀. This dimensionless quantity is also often and ambiguously referred to as the permittivity. Another common term encountered for both absolute and relative permittivity is the dielectric constant, which has been deprecated in physics and engineering as well as in chemistry. By definition, a perfect vacuum has a relative permittivity of exactly 1, whereas at standard temperature and pressure, air has a relative permittivity of about 1.0006. Relative permittivity is directly related to electric susceptibility χ by χ = ε_r − 1, otherwise written as ε = ε₀(1 + χ). The term "permittivity" was introduced in the 1880s by Oliver Heaviside to complement Thomson's (1872) "permeability". Formerly written as p, the designation with ε has been in common use since the 1950s.
Units
The SI unit of permittivity is farad per meter (F/m or F·m⁻¹).
Explanation
In electromagnetism, the electric displacement field D represents the distribution of electric charges in a given medium resulting from the presence of an electric field E. This distribution includes charge migration and electric dipole reorientation. Its relation to permittivity in the very simple case of linear, homogeneous, isotropic materials with "instantaneous" response to changes in electric field is D = εE, where the permittivity ε is a scalar. If the medium is anisotropic, the permittivity is a second-rank tensor. In general, permittivity is not a constant, as it can vary with the position in the medium, the frequency of the field applied, humidity, temperature, and other parameters. In a nonlinear medium, the permittivity can depend on the strength of the electric field. Permittivity as a function of frequency can take on real or complex values. In SI units, permittivity is measured in farads per meter (F/m or A²·s⁴·kg⁻¹·m⁻³). The displacement field D is measured in units of coulombs per square meter (C/m²), while the electric field E is measured in volts per meter (V/m). D and E describe the interaction between charged objects: D is related to the charge densities associated with this interaction, while E is related to the forces and potential differences.
Vacuum permittivity
The vacuum permittivity ε₀ (also called permittivity of free space or the electric constant) is the ratio D/E in free space. It also appears in the Coulomb force constant, k_e = 1/(4πε₀). Its value is ε₀ = 1/(μ₀c²) ≈ 8.854 × 10⁻¹² F/m, where c is the speed of light in free space and μ₀ is the vacuum permeability. The constants c and μ₀ were both defined in SI units to have exact numerical values until the 2019 revision of the SI. Therefore, until that date, ε₀ could also be stated exactly as a fraction, even if the result was irrational (because the fraction contained π).
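As a quick numerical check of the relation ε₀ = 1/(μ₀c²) given above, the following Python snippet (an illustrative sketch of ours, not part of the article) computes ε₀ from the defined speed of light and the conventional pre-2019 value μ₀ = 4π × 10⁻⁷ H/m, and then the susceptibility of air from the relative permittivity quoted above.

import math

c = 299_792_458.0           # speed of light in vacuum, m/s (exact by definition)
mu0 = 4.0 * math.pi * 1e-7  # vacuum permeability, H/m (conventional pre-2019 value)

eps0 = 1.0 / (mu0 * c**2)   # vacuum permittivity, F/m
print(f"eps0 = {eps0:.6e} F/m")  # ~8.854e-12 F/m

# Relative permittivity and susceptibility, as defined above:
eps_air = 1.0006 * eps0     # absolute permittivity of air at STP
chi_air = eps_air / eps0 - 1.0
print(f"chi_air = {chi_air:.2e}")  # ~6e-4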
In contrast, the ampere was a measured quantity before 2019; since then, the ampere is exactly defined and it is μ₀ that is an experimentally measured quantity (with consequent uncertainty), and therefore so is ε₀ under its new 2019 definition (c remains exactly defined before and since 2019).
Relative permittivity
The linear permittivity of a homogeneous material is usually given relative to that of free space, as a relative permittivity ε_r (also called dielectric constant, although this term is deprecated and sometimes only refers to the static, zero-frequency relative permittivity). In an anisotropic material, the relative permittivity may be a tensor, causing birefringence. The actual permittivity is then calculated by multiplying the relative permittivity by ε₀: ε = ε_r ε₀ = (1 + χ)ε₀, where χ (frequently written χ_e) is the electric susceptibility of the material. The susceptibility is defined as the constant of proportionality (which may be a tensor) relating an electric field E to the induced dielectric polarization density P, such that P = ε₀χE, where ε₀ is the electric permittivity of free space. The susceptibility of a medium is related to its relative permittivity by χ = ε_r − 1, so in the case of a vacuum, χ = 0. The susceptibility is also related to the polarizability of individual particles in the medium by the Clausius–Mossotti relation. The electric displacement D is related to the polarization density P by D = ε₀E + P = ε₀(1 + χ)E = ε_r ε₀ E. The permittivity ε and permeability μ of a medium together determine the phase velocity v = 1/√(εμ) of electromagnetic radiation through that medium.
Practical applications
Determining capacitance
The capacitance of a capacitor is based on its design and architecture, meaning it will not change with charging and discharging. The formula for capacitance in a parallel plate capacitor is written as C = εA/d, where A is the area of one plate, d is the distance between the plates, and ε is the permittivity of the medium between the two plates. For a capacitor with relative permittivity ε_r, it can be said that C = ε_r ε₀ A/d.
Gauss's law
Permittivity is connected to electric flux (and by extension electric field) through Gauss's law. Gauss's law states that for a closed Gaussian surface S, Φ_E = Q/ε = ∮_S E · dA, where Φ_E is the net electric flux passing through the surface, Q is the charge enclosed in the Gaussian surface, E is the electric field vector at a given point on the surface, and dA is a differential area vector on the Gaussian surface. If the Gaussian surface uniformly encloses an insulated, symmetrical charge arrangement, the formula can be simplified to EA cos θ = Q/ε, where θ represents the angle between the electric field lines and the normal (perpendicular) to A. If all of the electric field lines cross the surface at 90°, the formula can be further simplified to EA = Q/ε. Because the surface area of a sphere is 4πr², the electric field a distance r away from a uniform, spherical charge arrangement is E = Q/(4πεr²). This formula applies to the electric field due to a point charge, outside of a conducting sphere or shell, outside of a uniformly charged insulating sphere, or between the plates of a spherical capacitor.
Dispersion and causality
In general, a material cannot polarize instantaneously in response to an applied field, and so the more general formulation as a function of time is P(t) = ε₀ ∫₋∞ᵗ χ(t − t′) E(t′) dt′. That is, the polarization is a convolution of the electric field at previous times with the time-dependent susceptibility χ(Δt). The upper limit of this integral can be extended to infinity as well if one defines χ(Δt) = 0 for Δt < 0. An instantaneous response would correspond to a Dirac delta function susceptibility χ(Δt) = χ δ(Δt).
It is convenient to take the Fourier transform with respect to time and write this relationship as a function of frequency. Because of the convolution theorem, the integral becomes a simple product, P(ω) = ε₀ χ(ω) E(ω). This frequency dependence of the susceptibility leads to frequency dependence of the permittivity. The shape of the susceptibility with respect to frequency characterizes the dispersion properties of the material. Moreover, the fact that the polarization can only depend on the electric field at previous times (i.e., effectively χ(Δt) = 0 for Δt < 0), a consequence of causality, imposes Kramers–Kronig constraints on the susceptibility χ(ω).
Complex permittivity
As opposed to the response of a vacuum, the response of normal materials to external fields generally depends on the frequency of the field. This frequency dependence reflects the fact that a material's polarization does not change instantaneously when an electric field is applied. The response must always be causal (arising after the applied field), which can be represented by a phase difference. For this reason, permittivity is often treated as a complex function ε̂(ω) of the (angular) frequency ω of the applied field, since complex numbers allow specification of magnitude and phase. The definition of permittivity therefore becomes D₀ = ε̂(ω) E₀, where D₀ and E₀ are the amplitudes of the displacement and electric fields, respectively, and i is the imaginary unit, i² = −1. The response of a medium to static electric fields is described by the low-frequency limit of permittivity, also called the static permittivity ε_s: ε_s = lim_{ω→0} ε̂(ω). At the high-frequency limit (meaning optical frequencies), the complex permittivity is commonly referred to as ε_∞. At the plasma frequency and below, dielectrics behave as ideal metals, with electron-gas behavior. The static permittivity is a good approximation for alternating fields of low frequency, and as the frequency increases a measurable phase difference δ emerges between D and E. The frequency at which the phase shift becomes noticeable depends on temperature and the details of the medium. For moderate field strengths, D₀ and E₀ remain proportional, and ε̂ = D₀/E₀. Since the response of materials to alternating fields is characterized by a complex permittivity, it is natural to separate its real and imaginary parts, which is done by convention in the following way: ε̂(ω) = ε′(ω) − iε″(ω), where ε′(ω) is the real part of the permittivity, ε″(ω) is the imaginary part of the permittivity, and the loss angle δ satisfies tan δ = ε″/ε′. The choice of sign for time-dependence, e^(−iωt), dictates the sign convention for the imaginary part of permittivity. The signs used here correspond to those commonly used in physics, whereas for the engineering convention one should reverse all imaginary quantities. The complex permittivity is usually a complicated function of frequency ω, since it is a superimposed description of dispersion phenomena occurring at multiple frequencies. The dielectric function ε̂(ω) must have poles only for frequencies with positive imaginary parts, and therefore satisfies the Kramers–Kronig relations. However, in the narrow frequency ranges that are often studied in practice, the permittivity can be approximated as frequency-independent or by model functions. At a given frequency, the imaginary part, ε″, leads to absorption loss if it is positive (in the above sign convention) and gain if it is negative. More generally, the imaginary parts of the eigenvalues of the anisotropic dielectric tensor should be considered. In the case of solids, the complex dielectric function is intimately connected to band structure.
The primary quantity that characterizes the electronic structure of any crystalline material is the probability of photon absorption, which is directly related to the imaginary part of the optical dielectric function ε₂(ω). The optical dielectric function is given by a fundamental expression in which the integrand represents the product of the Brillouin-zone-averaged transition probability at a given energy with the joint density of states, convolved with a broadening function that represents the role of scattering in smearing out the energy levels. In general, the broadening is intermediate between Lorentzian and Gaussian; for an alloy it is somewhat closer to Gaussian because of strong scattering from statistical fluctuations in the local composition on a nanometer scale.
Tensorial permittivity
According to the Drude model of magnetized plasma, a more general expression which takes into account the interaction of the carriers with an alternating electric field at millimeter and microwave frequencies in an axially magnetized semiconductor requires the expression of the permittivity as a non-diagonal tensor. If the off-diagonal component vanishes, then the tensor is diagonal but not proportional to the identity, and the medium is said to be a uniaxial medium, which has similar properties to a uniaxial crystal.
Classification of materials
Materials can be classified according to their complex-valued permittivity ε̂, upon comparison of its real and imaginary components (or, equivalently, conductivity σ, when accounted for in the latter). A perfect conductor has infinite conductivity, σ → ∞, while a perfect dielectric is a material that has no conductivity at all, σ = 0; this latter case, of real-valued permittivity (or complex-valued permittivity with zero imaginary component), is also associated with the name lossless media. Generally, when σ/(ωε′) ≪ 1 we consider the material to be a low-loss dielectric (although not exactly lossless), whereas σ/(ωε′) ≫ 1 is associated with a good conductor; such materials with non-negligible conductivity yield a large amount of loss that inhibits the propagation of electromagnetic waves, and are thus also said to be lossy media. Those materials that do not fall under either limit are considered to be general media.
Lossy media
In the case of a lossy medium, i.e. when the conduction current is not negligible, the total current density flowing is J_tot = J_c + J_d = σE + jωε′E = jωε̂E, where σ is the conductivity of the medium, ε′ is the real part of the permittivity, and ε̂ is the complex permittivity. Note that this is using the electrical engineering convention of the complex conjugate ambiguity; the physics/chemistry convention involves the complex conjugate of these equations. The size of the displacement current is dependent on the frequency ω of the applied field E; there is no displacement current in a constant field. In this formalism, the complex permittivity is defined as ε̂ = ε′ − j(σ/ω). In general, the absorption of electromagnetic energy by dielectrics is covered by a few different mechanisms that influence the shape of the permittivity as a function of frequency:
First are the relaxation effects associated with permanent and induced molecular dipoles. At low frequencies the field changes slowly enough to allow dipoles to reach equilibrium before the field has measurably changed. For frequencies at which dipole orientations cannot follow the applied field because of the viscosity of the medium, absorption of the field's energy leads to energy dissipation. The mechanism of dipoles relaxing is called dielectric relaxation and for ideal dipoles is described by classic Debye relaxation.
Second are the resonance effects, which arise from the rotations or vibrations of atoms, ions, or electrons. These processes are observed in the neighborhood of their characteristic absorption frequencies.
The above effects often combine to cause non-linear effects within capacitors. For example, dielectric absorption refers to the inability of a capacitor that has been charged for a long time to discharge completely when briefly discharged. Although an ideal capacitor would remain at zero volts after being discharged, real capacitors will develop a small voltage, a phenomenon that is also called soakage or battery action. For some dielectrics, such as many polymer films, the resulting voltage may be less than 1–2% of the original voltage, but it can be as much as 15–25% in the case of electrolytic capacitors or supercapacitors.
Quantum-mechanical interpretation
In terms of quantum mechanics, permittivity is explained by atomic and molecular interactions. At low frequencies, molecules in polar dielectrics are polarized by an applied electric field, which induces periodic rotations. For example, at the microwave frequency, the microwave field causes the periodic rotation of water molecules, sufficient to break hydrogen bonds. The field does work against the bonds and the energy is absorbed by the material as heat. This is why microwave ovens work very well for materials containing water. There are two maxima of the imaginary component (the absorptive index) of water, one at the microwave frequency and the other at far ultraviolet (UV) frequency. Both of these resonances are at higher frequencies than the operating frequency of microwave ovens. At moderate frequencies, the energy is too high to cause rotation, yet too low to affect electrons directly, and is absorbed in the form of resonant molecular vibrations. In water, this is where the absorptive index starts to drop sharply, and the minimum of the imaginary permittivity is at the frequency of blue light (optical regime). At high frequencies (such as UV and above), molecules cannot relax, and the energy is purely absorbed by atoms, exciting electron energy levels. Thus, these frequencies are classified as ionizing radiation. While carrying out a complete ab initio (that is, first-principles) modelling is now computationally possible, it has not been widely applied yet. Thus, a phenomenological model is accepted as being an adequate method of capturing experimental behaviors. The Debye model and the Lorentz model use a first-order and second-order (respectively) lumped-parameter linear representation (such as an RC and an LRC resonant circuit).
Measurement
The relative permittivity of a material can be found by a variety of static electrical measurements. The complex permittivity is evaluated over a wide range of frequencies by using different variants of dielectric spectroscopy, covering nearly 21 orders of magnitude from 10⁻⁶ to 10¹⁵ hertz. Also, by using cryostats and ovens, the dielectric properties of a medium can be characterized over an array of temperatures. In order to study systems for such diverse excitation fields, a number of measurement setups are used, each adequate for a special frequency range. Various microwave measurement techniques are outlined in Chen et al. Typical errors for the Hakki–Coleman method, employing a puck of material between conducting planes, are about 0.3%.
Low-frequency time domain measurements ( to  Hz)
Low-frequency frequency domain measurements ( to  Hz)
Reflective coaxial methods ( to  Hz)
Transmission coaxial method ( to  Hz)
Quasi-optical methods ( to  Hz)
Terahertz time-domain spectroscopy ( to  Hz)
Fourier-transform methods ( to  Hz)
At infrared and optical frequencies, a common technique is ellipsometry. Dual polarisation interferometry is also used to measure the complex refractive index for very thin films at optical frequencies. For the 3D measurement of dielectric tensors at optical frequency, dielectric tensor tomography can be used.
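As an illustration of the frequency dependence measured by dielectric spectroscopy, here is a small Python sketch of the single-relaxation-time Debye model mentioned above, evaluating the standard real and loss parts of the relative permittivity directly. The parameter values are illustrative assumptions of ours, loosely inspired by water near room temperature, not measured data.

import math

def debye(freq_hz: float, eps_s: float, eps_inf: float, tau_s: float):
    """Debye model: real part eps1 and loss part eps2 of relative permittivity.

    eps1(w) = eps_inf + (eps_s - eps_inf) / (1 + (w*tau)^2)
    eps2(w) = (eps_s - eps_inf) * w*tau / (1 + (w*tau)^2)
    """
    w = 2.0 * math.pi * freq_hz
    wt = w * tau_s
    eps1 = eps_inf + (eps_s - eps_inf) / (1.0 + wt * wt)
    eps2 = (eps_s - eps_inf) * wt / (1.0 + wt * wt)
    return eps1, eps2

# Illustrative parameters: static eps_r ~ 80, optical-limit eps_r ~ 5,
# relaxation time ~ 8.3 ps (assumed values, roughly water-like).
for f in (1e6, 1e9, 1 / (2 * math.pi * 8.3e-12), 1e13):
    e1, e2 = debye(f, eps_s=80.0, eps_inf=5.0, tau_s=8.3e-12)
    print(f"f = {f:9.3e} Hz:  eps' = {e1:6.2f}  eps'' = {e2:6.2f}")
# The loss peak eps'' occurs at w = 1/tau, i.e. f ~ 19 GHz for these values.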
Physical sciences
Electrostatics
Physics
53941
https://en.wikipedia.org/wiki/Triangle%20inequality
Triangle inequality
In mathematics, the triangle inequality states that for any triangle, the sum of the lengths of any two sides must be greater than or equal to the length of the remaining side. This statement permits the inclusion of degenerate triangles, but some authors, especially those writing about elementary geometry, exclude this possibility, thus requiring strict inequality. If a, b, and c are the lengths of the sides of a triangle, then the triangle inequality states that c ≤ a + b, with equality only in the degenerate case of a triangle with zero area. In Euclidean geometry and some other geometries, the triangle inequality is a theorem about vectors and vector lengths (norms): ‖u + v‖ ≤ ‖u‖ + ‖v‖, where the length of the third side has been replaced by the length of the vector sum u + v. When u and v are real numbers, they can be viewed as vectors in ℝ¹, and the triangle inequality expresses a relationship between absolute values. In Euclidean geometry, for right triangles the triangle inequality is a consequence of the Pythagorean theorem, and for general triangles, a consequence of the law of cosines, although it may be proved without these theorems. The inequality can be viewed intuitively in either ℝ² or ℝ³. The figure at the right shows three examples beginning with clear inequality (top) and approaching equality (bottom). In the Euclidean case, equality occurs only if the triangle has a 180° angle and two 0° angles, making the three vertices collinear, as shown in the bottom example. Thus, in Euclidean geometry, the shortest distance between two points is a straight line. In spherical geometry, the shortest distance between two points is an arc of a great circle, but the triangle inequality holds provided the restriction is made that the distance between two points on a sphere is the length of a minor spherical line segment (that is, one with central angle in [0, π]) with those endpoints. The triangle inequality is a defining property of norms and measures of distance. This property must be established as a theorem for any function proposed for such purposes for each particular space: for example, spaces such as the real numbers, Euclidean spaces, the Lp spaces (p ≥ 1), and inner product spaces.
Euclidean geometry
Euclid proved the triangle inequality for distances in plane geometry using the construction in the figure. Beginning with triangle ABC, an isosceles triangle is constructed with one side taken as BC and the other equal leg BD along the extension of side AB. It is then argued that the angle at C in triangle ACD has larger measure than the angle at D, so side AD is longer than side AC. However, AD = AB + BD = AB + BC, so the sum of the lengths of sides AB and BC is larger than the length of AC. This proof appears in Euclid's Elements, Book 1, Proposition 20.
Mathematical expression of the constraint on the sides of a triangle
For a proper triangle, the triangle inequality, as stated in words, literally translates into three inequalities (given that a proper triangle has side lengths a, b, c that are all positive, excluding the degenerate case of zero area): a + b > c, b + c > a, and a + c > b. A more succinct form of this inequality system can be shown to be |a − b| < c < a + b. Another way to state it is max(a, b, c) < a + b + c − max(a, b, c), implying max(a, b, c) < s, and thus that the longest side length is less than the semiperimeter s = (a + b + c)/2. A mathematically equivalent formulation is that the area of a triangle with sides a, b, c must be a real number greater than zero. Heron's formula for the area is T = √(s(s − a)(s − b)(s − c)). In terms of either area expression, the triangle inequality imposed on all sides is equivalent to the condition that the expression under the square root sign be real and greater than zero (so the area expression is real and greater than zero).
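The constraint just described translates directly into code. Below is a small Python sketch (the function names are ours, for illustration) that tests the three inequalities and evaluates Heron's formula, showing that the area is real and positive exactly when the strict triangle inequality holds.

import math

def is_triangle(a: float, b: float, c: float) -> bool:
    """Strict triangle inequality: each side shorter than the sum of the others."""
    return a + b > c and b + c > a and a + c > b

def heron_area(a: float, b: float, c: float) -> float:
    """Area via Heron's formula, T = sqrt(s(s-a)(s-b)(s-c)) with s the semiperimeter."""
    s = (a + b + c) / 2.0
    radicand = s * (s - a) * (s - b) * (s - c)
    if radicand <= 0:
        raise ValueError("side lengths violate the (strict) triangle inequality")
    return math.sqrt(radicand)

print(is_triangle(3, 4, 5), heron_area(3, 4, 5))  # True 6.0
print(is_triangle(1, 2, 3))                       # False: degenerate, zero area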
The triangle inequality provides two more interesting constraints for triangles whose sides are $a, b, c$, where $a \ge b \ge c$ and $\varphi$ is the golden ratio, as

$$1 < \frac{a + c}{b} < 3,$$

$$1 \le \min\left(\frac{a}{b}, \frac{b}{c}\right) < \varphi.$$

Right triangle

In the case of right triangles, the triangle inequality specializes to the statement that the hypotenuse is greater than either of the two sides and less than their sum. The second part of this theorem is already established above for any side of any triangle. The first part is established using the lower figure. In the figure, consider the right triangle $ADC$. An isosceles triangle $ABC$ is constructed with equal sides $AB = AC$. From the triangle postulate, the angles in the right triangle $ADC$ satisfy:

$$\alpha + \gamma = \pi/2.$$

Likewise, in the isosceles triangle $ABC$, the angles satisfy:

$$2\beta + \gamma = \pi.$$

Therefore,

$$\alpha = \pi/2 - \gamma, \qquad \beta = \pi/2 - \gamma/2,$$

and so, in particular,

$$\alpha < \beta.$$

That means side $AD$, which is opposite to angle $\alpha$, is shorter than side $AB$, which is opposite to the larger angle $\beta$. But $AB = AC$. Hence:

$$AC > AD.$$

A similar construction shows $AC > DC$, establishing the theorem.

An alternative proof (also based upon the triangle postulate) proceeds by considering three positions for point $B$: (i) as depicted (which is to be proved), or (ii) $B$ coincident with $D$ (which would mean the isosceles triangle had two right angles as base angles plus the vertex angle $\gamma$, which would violate the triangle postulate), or lastly, (iii) $B$ interior to the right triangle between points $A$ and $D$ (in which case angle $ABC$ is an exterior angle of a right triangle $BDC$ and therefore larger than $\pi/2$, meaning the other base angle of the isosceles triangle also is greater than $\pi/2$ and their sum exceeds $\pi$ in violation of the triangle postulate).

This theorem establishing inequalities is sharpened by Pythagoras' theorem to the equality that the square of the length of the hypotenuse equals the sum of the squares of the other two sides.

Examples of use

Consider a triangle whose sides are in an arithmetic progression and let the sides be $a$, $a + d$, $a + 2d$. Then the triangle inequality requires that

$$0 < a, \quad 0 < a + d, \quad 0 < a + 2d,$$
$$a + (a + d) > a + 2d,$$
$$a + (a + 2d) > a + d,$$
$$(a + d) + (a + 2d) > a.$$

To satisfy all these inequalities requires

$$a > 0 \quad \text{and} \quad -\frac{a}{3} < d < a.$$

When $d$ is chosen such that $d = a/3$, it generates a right triangle that is always similar to the Pythagorean triple with sides $3$, $4$, $5$.

Now consider a triangle whose sides are in a geometric progression and let the sides be $a$, $ar$, $ar^2$. Then the triangle inequality requires that

$$0 < a, \quad 0 < ar, \quad 0 < ar^2,$$
$$a + ar > ar^2,$$
$$a + ar^2 > ar,$$
$$ar + ar^2 > a.$$

The first inequality requires $a > 0$; consequently it can be divided through and eliminated. With $a > 0$, the middle inequality only requires $r > 0$. This now leaves the first and third inequalities needing to satisfy

$$r^2 + r - 1 > 0,$$
$$r^2 - r - 1 < 0.$$

The first of these quadratic inequalities requires $r$ to range in the region beyond the value of the positive root of the quadratic equation $r^2 + r - 1 = 0$, i.e. $r > \varphi - 1$ where $\varphi$ is the golden ratio. The second quadratic inequality requires $r$ to range between 0 and the positive root of the quadratic equation $r^2 - r - 1 = 0$, i.e. $0 < r < \varphi$. The combined requirements result in $r$ being confined to the range

$$\varphi - 1 < r < \varphi.$$

When the common ratio $r$ is chosen such that $r = \sqrt{\varphi}$ it generates a right triangle that is always similar to the Kepler triangle.

Generalization to any polygon

The triangle inequality can be extended by mathematical induction to arbitrary polygonal paths, showing that the total length of such a path is no less than the length of the straight line between its endpoints. Consequently, the length of any polygon side is always less than the sum of the other polygon side lengths.

Example of the generalized polygon inequality for a quadrilateral

Consider a quadrilateral whose sides are in a geometric progression and let the sides be $a$, $ar$, $ar^2$, $ar^3$.
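As a quick numerical sanity check of the derived range $\varphi - 1 < r < \varphi$, the following Python sketch (the helper name is our own, and results exactly at the endpoints are degenerate up to floating-point rounding) tests a few ratios:

```python
import math

PHI = (1.0 + math.sqrt(5.0)) / 2.0  # golden ratio, ~1.6180

def gp_triangle(r: float) -> bool:
    """Do sides 1, r, r**2 (a geometric progression) form a proper triangle?"""
    if r <= 0.0:
        return False
    x, y, z = sorted((1.0, r, r * r))
    return x + y > z

for r in (0.5, PHI - 1.0, 0.7, 1.0, 1.6, PHI, 2.0):
    print(f"r = {r:.4f}: {gp_triangle(r)}")
# Expect True exactly for phi - 1 < r < phi; the endpoints themselves
# give degenerate (zero-area) triangles.
```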
Then the generalized polygon inequality requires that

$$0 < a, \quad 0 < ar, \quad 0 < ar^2, \quad 0 < ar^3,$$
$$a + ar + ar^2 > ar^3,$$
$$ar + ar^2 + ar^3 > a.$$

These inequalities for $a > 0$ reduce to the following:

$$r^3 - r^2 - r - 1 < 0,$$
$$r^3 + r^2 + r - 1 > 0.$$

The left-hand side polynomials of these two inequalities have roots that are the tribonacci constant and its reciprocal. Consequently, $r$ is limited to the range $1/t < r < t$, where $t$ is the tribonacci constant.

Relationship with shortest paths

This generalization can be used to prove that the shortest curve between two points in Euclidean geometry is a straight line. No polygonal path between two points is shorter than the line between them. This implies that no curve can have an arc length less than the distance between its endpoints. By definition, the arc length of a curve is the least upper bound of the lengths of all polygonal approximations of the curve. The result for polygonal paths shows that the straight line between the endpoints is the shortest of all the polygonal approximations. Because the arc length of the curve is greater than or equal to the length of every polygonal approximation, the curve itself cannot be shorter than the straight line path.

Converse

The converse of the triangle inequality theorem is also true: if three real numbers are such that each is less than the sum of the others, then there exists a triangle with these numbers as its side lengths and with positive area; and if one number equals the sum of the other two, there exists a degenerate triangle (that is, with zero area) with these numbers as its side lengths.

In either case, if the side lengths are $a$, $b$, $c$, we can attempt to place a triangle in the Euclidean plane as shown in the diagram. We need to prove that there exists a real number $h$ consistent with the values $a$, $b$, and $c$, in which case this triangle exists.

By the Pythagorean theorem we have $b^2 = h^2 + d^2$ and $a^2 = h^2 + (c - d)^2$ according to the figure at the right. Subtracting these yields $a^2 - b^2 = c^2 - 2cd$. This equation allows us to express $d$ in terms of the sides of the triangle:

$$d = \frac{-a^2 + b^2 + c^2}{2c}.$$

For the height of the triangle we have that $h^2 = b^2 - d^2$. By replacing $d$ with the formula given above, we have

$$h^2 = b^2 - \left(\frac{-a^2 + b^2 + c^2}{2c}\right)^2.$$

For a real number $h$ to satisfy this, $h^2$ must be non-negative:

$$b^2 - \left(\frac{-a^2 + b^2 + c^2}{2c}\right)^2 = \frac{(a + b + c)(-a + b + c)(a - b + c)(a + b - c)}{4c^2} \ge 0,$$

which holds if the triangle inequality is satisfied for all sides. Therefore, there does exist a real number $h$ consistent with the sides $a$, $b$, $c$, and the triangle exists. If each triangle inequality holds strictly, $h > 0$ and the triangle is non-degenerate (has positive area); but if one of the inequalities holds with equality, so $h = 0$, the triangle is degenerate.

Generalization to higher dimensions

The area of a triangular face of a tetrahedron is less than or equal to the sum of the areas of the other three triangular faces. More generally, in Euclidean space the hypervolume of an $(n-1)$-facet of an $n$-simplex is less than or equal to the sum of the hypervolumes of the other $n$ facets. Much as the triangle inequality generalizes to a polygon inequality, the inequality for a simplex of any dimension generalizes to a polytope of any dimension: the hypervolume of any facet of a polytope is less than or equal to the sum of the hypervolumes of the remaining facets.

In some cases the tetrahedral inequality is stronger than several applications of the triangle inequality. For example, the triangle inequality appears to allow the possibility of four points $A$, $B$, $C$, and $Z$ in Euclidean space such that distances $AB = BC = CA = 26$ and $AZ = BZ = CZ = 14$. However, points with such distances cannot exist: the area of the 26–26–26 equilateral triangle $ABC$ is $169\sqrt{3}$, which is larger than three times $39\sqrt{3}$, the area of a 26–14–14 isosceles triangle (all by Heron's formula), and so the arrangement is forbidden by the tetrahedral inequality.
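The four-point example can be checked directly. A minimal Python sketch (helper name is our own) computes the two Heron areas and compares them:

```python
import math

def heron_area(a: float, b: float, c: float) -> float:
    """Triangle area from its side lengths via Heron's formula."""
    s = (a + b + c) / 2.0
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

# Distances AB = BC = CA = 26 and AZ = BZ = CZ = 14 satisfy every
# pairwise triangle inequality, yet the four points cannot coexist.
face_abc = heron_area(26, 26, 26)        # 169*sqrt(3), about 292.7
side_faces = 3 * heron_area(26, 14, 14)  # 3 * 39*sqrt(3), about 202.6

print(face_abc, side_faces)
# The tetrahedral inequality requires face_abc <= side_faces,
# which fails here, so the arrangement is impossible.
```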
Normed vector space

In a normed vector space $V$, one of the defining properties of the norm is the triangle inequality:

$$\|u + v\| \le \|u\| + \|v\| \quad \text{for all } u, v \in V.$$

That is, the norm of the sum of two vectors is at most as large as the sum of the norms of the two vectors. This is also referred to as subadditivity. For any proposed function to behave as a norm, it must satisfy this requirement.

If the normed space is Euclidean, or, more generally, strictly convex, then $\|u + v\| = \|u\| + \|v\|$ if and only if the triangle formed by $u$, $v$, and $u + v$ is degenerate, that is, $u$ and $v$ are on the same ray, i.e., $u = 0$ or $v = 0$, or $u = \alpha v$ for some $\alpha > 0$. This property characterizes strictly convex normed spaces such as the $\ell^p$ spaces with $1 < p < \infty$. However, there are normed spaces in which this is not true. For instance, consider the plane with the $\ell^1$ norm (the Manhattan distance) and denote $u = (1, 0)$ and $v = (0, 1)$. Then the triangle formed by $u$, $v$, and $u + v$ is non-degenerate but

$$\|u + v\| = \|(1, 1)\| = |1| + |1| = 2 = \|u\| + \|v\|.$$

Example norms

The absolute value is a norm for the real line; as required, the absolute value satisfies the triangle inequality for any real numbers $u$ and $v$. If $u$ and $v$ have the same sign or either of them is zero, then $|u + v| = |u| + |v|$. If $u$ and $v$ have opposite signs, then without loss of generality assume $|u| > |v|$. Then $|u + v| = |u| - |v| < |u| + |v|$. The triangle inequality is useful in mathematical analysis for determining the best upper estimate on the size of the sum of two numbers, in terms of the sizes of the individual numbers. There is also a lower estimate, which can be found using the reverse triangle inequality, which states that for any real numbers $u$ and $v$,

$$|u - v| \ge \bigl||u| - |v|\bigr|.$$

The taxicab norm or 1-norm is one generalization of absolute value to higher dimensions. To find the norm of a vector $v = (v_1, v_2, \ldots, v_n)$, just add the absolute value of each component separately:

$$\|v\|_1 = |v_1| + |v_2| + \cdots + |v_n|.$$

The Euclidean norm or 2-norm defines the length of translation vectors in an $n$-dimensional Euclidean space in terms of a Cartesian coordinate system. For a vector $v = (v_1, v_2, \ldots, v_n)$, its length is defined using the $n$-dimensional Pythagorean theorem:

$$\|v\|_2 = \sqrt{v_1^2 + v_2^2 + \cdots + v_n^2}.$$

The inner product induces a norm in any inner product space, a generalization of Euclidean vector spaces including infinite-dimensional examples. The triangle inequality follows from the Cauchy–Schwarz inequality as follows: given vectors $u$ and $v$, and denoting the inner product as $\langle u, v \rangle$:

$$\|u + v\|^2 = \langle u + v, u + v \rangle$$
$$= \|u\|^2 + \langle u, v \rangle + \langle v, u \rangle + \|v\|^2$$
$$\le \|u\|^2 + 2|\langle u, v \rangle| + \|v\|^2$$
$$\le \|u\|^2 + 2\|u\|\|v\| + \|v\|^2 \quad \text{(by the Cauchy–Schwarz inequality)}$$
$$= \left(\|u\| + \|v\|\right)^2.$$

The Cauchy–Schwarz inequality turns into an equality if and only if $u$ and $v$ are linearly dependent. The inequality $\langle u, v \rangle + \langle v, u \rangle \le 2|\langle u, v \rangle|$ turns into an equality for linearly dependent $u$ and $v$ if and only if one of the vectors $u$ or $v$ is a nonnegative scalar multiple of the other. Taking the square root of the final result gives the triangle inequality.

The $p$-norm is a generalization of taxicab and Euclidean norms, using an arbitrary positive integer exponent:

$$\|x\|_p = \left(\sum_{i=1}^{n} |x_i|^p\right)^{1/p},$$

where the $x_i$ are the components of vector $x$. Except for the case $p = 2$, the $p$-norm is not an inner product norm, because it does not satisfy the parallelogram law. The triangle inequality for general values of $p$ is called Minkowski's inequality. It takes the form:

$$\|x + y\|_p \le \|x\|_p + \|y\|_p.$$

Metric space

In a metric space $M$ with metric $d$, the triangle inequality is a requirement upon distance:

$$d(x, z) \le d(x, y) + d(y, z),$$

for all points $x$, $y$, and $z$ in $M$. That is, the distance from $x$ to $z$ is at most as large as the sum of the distance from $x$ to $y$ and the distance from $y$ to $z$. The triangle inequality is responsible for most of the interesting structure on a metric space, namely, convergence. This is because the remaining requirements for a metric are rather simplistic in comparison.
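The behavior of the different norms, including the Manhattan-norm equality for a non-degenerate triangle, can be observed numerically. A short Python sketch using NumPy:

```python
import numpy as np

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])

for p in (1, 2, 3):
    lhs = np.linalg.norm(u + v, ord=p)
    rhs = np.linalg.norm(u, ord=p) + np.linalg.norm(v, ord=p)
    print(f"p = {p}: ||u+v|| = {lhs:.4f} <= {rhs:.4f} = ||u|| + ||v||")

# For p = 1 (the Manhattan norm) the two sides are equal even though u
# and v are not parallel; the plane with this norm is not strictly convex.
```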
For example, the fact that any convergent sequence in a metric space is a Cauchy sequence is a direct consequence of the triangle inequality, because if we choose any $x_n$ and $x_m$ such that $d(x_n, x) < \varepsilon/2$ and $d(x_m, x) < \varepsilon/2$, where $\varepsilon > 0$ is given and arbitrary (as in the definition of a limit in a metric space), then by the triangle inequality, $d(x_n, x_m) \le d(x_n, x) + d(x_m, x) < \varepsilon/2 + \varepsilon/2 = \varepsilon$, so that the sequence $\{x_n\}$ is a Cauchy sequence, by definition.

This version of the triangle inequality reduces to the one stated above in case of normed vector spaces where a metric is induced via $d(u, v) := \|u - v\|$, with $u - v$ being the vector pointing from point $v$ to point $u$.

Reverse triangle inequality

The reverse triangle inequality is an equivalent alternative formulation of the triangle inequality that gives lower bounds instead of upper bounds. For plane geometry, the statement is: Any side of a triangle is greater than or equal to the difference between the other two sides. In the case of a normed vector space, the statement is:

$$\bigl|\|u\| - \|v\|\bigr| \le \|u - v\|,$$

or for metric spaces,

$$|d(A, C) - d(B, C)| \le d(A, B).$$

This implies that the norm $\|\cdot\|$ as well as the distance-from-$z$ function $d(z, \cdot)$ are Lipschitz continuous with Lipschitz constant 1, and therefore are in particular uniformly continuous.

The proof of the reverse triangle inequality from the usual one uses the triangle inequality twice to find:

$$\|u\| = \|(u - v) + v\| \le \|u - v\| + \|v\| \implies \|u\| - \|v\| \le \|u - v\|,$$
$$\|v\| = \|(v - u) + u\| \le \|u - v\| + \|u\| \implies \|u\| - \|v\| \ge -\|u - v\|.$$

Combining these two statements gives:

$$-\|u - v\| \le \|u\| - \|v\| \le \|u - v\| \implies \bigl|\|u\| - \|v\|\bigr| \le \|u - v\|.$$

In the converse, the proof of the triangle inequality from the reverse triangle inequality works in two cases: If $\|u + v\| \ge \|v\|$ then by the reverse triangle inequality, $\|u + v\| - \|v\| \le \|(u + v) - v\| = \|u\|$, and if $\|u + v\| < \|v\|$ then trivially $\|u + v\| < \|v\| \le \|u\| + \|v\|$ by the nonnegativity of the norm. Thus, in both cases, we find that $\|u + v\| \le \|u\| + \|v\|$.

For metric spaces, the proof of the reverse triangle inequality is found similarly by:

$$d(A, C) \le d(A, B) + d(B, C) \implies d(A, C) - d(B, C) \le d(A, B),$$
$$d(B, C) \le d(B, A) + d(A, C) \implies d(B, C) - d(A, C) \le d(A, B).$$

Putting these equations together we find:

$$|d(A, C) - d(B, C)| \le d(A, B).$$

And in the converse, beginning from the reverse triangle inequality, we can again use two cases: If $d(A, C) \ge d(B, C)$, then $d(A, C) - d(B, C) \le d(A, B)$ gives $d(A, C) \le d(A, B) + d(B, C)$, and if $d(A, C) < d(B, C)$ then again $d(A, C) < d(B, C) \le d(A, B) + d(B, C)$ by the nonnegativity of the metric. Thus, in both cases, we find that $d(A, C) \le d(A, B) + d(B, C)$.

Triangle inequality for cosine similarity

By applying the cosine function to the triangle inequality and reverse triangle inequality for arc lengths and employing the angle addition and subtraction formulas for cosines, it follows immediately that

$$\operatorname{sim}(u, w) \ge \operatorname{sim}(u, v) \cdot \operatorname{sim}(v, w) - \sqrt{\left(1 - \operatorname{sim}(u, v)^2\right)\left(1 - \operatorname{sim}(v, w)^2\right)}$$

and

$$\operatorname{sim}(u, w) \le \operatorname{sim}(u, v) \cdot \operatorname{sim}(v, w) + \sqrt{\left(1 - \operatorname{sim}(u, v)^2\right)\left(1 - \operatorname{sim}(v, w)^2\right)}.$$

With these formulas, one needs to compute a square root for each triple of vectors that is examined rather than for each pair of vectors examined, and could be a performance improvement when the number of triples examined is less than the number of pairs examined.

Reversal in Minkowski space

The Minkowski space metric $\eta_{\mu\nu}$ is not positive-definite, which means that $\|u\|^2 = \eta_{\mu\nu} u^\mu u^\nu$ can have either sign or vanish, even if the vector $u$ is non-zero. Moreover, if $u$ and $v$ are both timelike vectors lying in the future light cone, the triangle inequality is reversed:

$$\|u + v\| \ge \|u\| + \|v\|.$$

A physical example of this inequality is the twin paradox in special relativity. The same reversed form of the inequality holds if both vectors lie in the past light cone, and if one or both are null vectors. The result holds in $n + 1$ dimensions for any $n \ge 1$. If the plane defined by $u$ and $v$ is space-like (and therefore a Euclidean subspace) then the usual triangle inequality holds.
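The cosine-similarity bounds above can be sanity-checked numerically. A minimal Python sketch (the helper name cos_sim is our own) tests random triples of vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

def cos_sim(a, b):
    """Cosine of the angle between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for _ in range(1000):
    u, v, w = rng.normal(size=(3, 4))
    suv, svw, suw = cos_sim(u, v), cos_sim(v, w), cos_sim(u, w)
    # The slack term sqrt((1 - suv^2)(1 - svw^2)) from the angle
    # addition and subtraction formulas; clamp against rounding.
    slack = np.sqrt(max(0.0, (1 - suv**2) * (1 - svw**2)))
    assert suv * svw - slack - 1e-9 <= suw <= suv * svw + slack + 1e-9

print("cosine-similarity triangle bounds hold on 1000 random triples")
```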
Hippocampus
The hippocampus (plural: hippocampi; via Latin from Greek ἱππόκαμπος, 'seahorse'), also hippocampus proper, is a major component of the brain of humans and many other vertebrates. In the human brain the hippocampus, the dentate gyrus, and the subiculum are the components of the hippocampal formation located in the limbic system. The hippocampus plays important roles in the consolidation of information from short-term memory to long-term memory, and in spatial memory that enables navigation. In humans and other primates, the hippocampus is located in the archicortex, one of the three regions of allocortex, in each hemisphere with neural projections to the neocortex. The hippocampus, as the medial pallium, is a structure found in all vertebrates. In Alzheimer's disease (and other forms of dementia), the hippocampus is one of the first regions of the brain to suffer damage; short-term memory loss and disorientation are included among the early symptoms. Damage to the hippocampus can also result from oxygen starvation (hypoxia), encephalitis, or medial temporal lobe epilepsy. People with extensive, bilateral hippocampal damage may experience anterograde amnesia: the inability to form and retain new memories.

Since different neuronal cell types are neatly organized into layers in the hippocampus, it has frequently been used as a model system for studying neurophysiology. The form of neural plasticity known as long-term potentiation (LTP) was initially discovered to occur in the hippocampus and has often been studied in this structure. LTP is widely believed to be one of the main neural mechanisms by which memories are stored in the brain. In rodents as model organisms, the hippocampus has been studied extensively as part of a brain system responsible for spatial memory and navigation. Many neurons in the rat and mouse hippocampi respond as place cells: that is, they fire bursts of action potentials when the animal passes through a specific part of its environment. Hippocampal place cells interact extensively with head direction cells, whose activity acts as an inertial compass, and conjecturally with grid cells in the neighboring entorhinal cortex.

Name

The earliest description of the ridge running along the floor of the inferior horn of the lateral ventricle comes from the Venetian anatomist Julius Caesar Aranzi (1587), who likened it first to a silkworm and then to a seahorse (Latin hippocampus, from Greek ἱππόκαμπος, from ἵππος, 'horse' + κάμπος, 'sea monster'). The German anatomist Duvernoy (1729), the first to illustrate the structure, also wavered between "seahorse" and "silkworm". "Ram's horn" was proposed by the Danish anatomist Jacob Winsløw in 1732; and a decade later his fellow Parisian, the surgeon de Garengeot, used cornu Ammonis – horn of Amun, the ancient Egyptian god who was often represented as having a ram's head.

Another reference appeared with the term pes hippocampi, which may date back to Diemerbroeck in 1672, introducing a comparison with the shape of the folded back forelimbs and webbed feet of the mythological hippocampus, a sea monster with a horse's forequarters and a fish's tail. The hippocampus was then described as pes hippocampi major, with an adjacent bulge in the occipital horn, described as the pes hippocampi minor and later renamed as the calcar avis. The renaming of the hippocampus as hippocampus major, and the calcar avis as hippocampus minor, has been attributed to Félix Vicq-d'Azyr systematizing nomenclature of parts of the brain in 1786.
Mayer mistakenly used the term hippopotamus in 1779, and was followed by some other authors until Karl Friedrich Burdach resolved this error in 1829. In 1861 the hippocampus minor became the center of a dispute over human evolution between Thomas Henry Huxley and Richard Owen, satirized as the Great Hippocampus Question. The term hippocampus minor fell from use in anatomy textbooks and was officially removed in the Nomina Anatomica of 1895. Today, the structure is just called the hippocampus, with the term cornu Ammonis (that is, 'Ammon's horn') surviving in the names of the hippocampal subfields CA1–CA4.

Relation to limbic system

The term limbic system was introduced in 1952 by Paul MacLean to describe the set of structures that line the deep edge of the cortex (Latin limbus meaning border): these include the hippocampus, cingulate cortex, olfactory cortex, and amygdala. MacLean later suggested that the limbic structures comprise the neural basis of emotion. The hippocampus is anatomically connected to parts of the brain that are involved with emotional behavior (the septum, the hypothalamic mammillary body, and the anterior nuclear complex in the thalamus) and is generally accepted to be part of the limbic system.

Anatomy

The hippocampus can be seen as a ridge of gray matter tissue, elevating from the floor of each lateral ventricle in the region of the inferior horn. This ridge can also be seen as an inward fold of the archicortex into the medial temporal lobe. The hippocampus can only be seen in dissections as it is concealed by the parahippocampal gyrus. The hippocampus is located in the three-layered archicortex, one of the three regions of the allocortex. The hippocampal formation refers to the hippocampus and its related parts. Typically, the formation includes the hippocampus, the dentate gyrus, and the subiculum. Parts of the subiculum include the presubiculum and parasubiculum, and sometimes the entorhinal cortex is included in the formation. The neural layout and pathways within the hippocampal formation are very similar in all mammals.

The hippocampus, including the dentate gyrus, has the shape of a curved tube, which has been compared to a seahorse and to a ram's horn, from which it takes the name cornu Ammonis, after the ancient Egyptian god Amun, who was often portrayed with a ram's head. Its abbreviation CA is used in naming the hippocampal subfields CA1, CA2, CA3, and CA4. It can be distinguished as an area where the cortex narrows into a single layer of densely packed pyramidal neurons, which curl into a tight U shape. One edge of the "U", CA4, is embedded into the backward-facing, flexed dentate gyrus. The hippocampus is described as having an anterior and posterior part (in primates) or a ventral and dorsal part in other animals. Both parts are of similar composition but belong to different neural circuits. In the rat, the two hippocampi resemble a pair of bananas, joined at the stems by the commissure of fornix (also called the hippocampal commissure). In primates, the part of the hippocampus at the bottom, near the base of the temporal lobe, is much broader than the part at the top. This means that in cross-section the hippocampus can show a number of different shapes, depending on the angle and location of the cut. In a cross-section of the hippocampus, including the dentate gyrus, several layers will be shown. The dentate gyrus has three layers of cells (or four if the hilus is included).
From the outside in, the layers are the molecular layer, the inner molecular layer, the granular layer, and the hilus. The CA3 in the hippocampus proper has the following cell layers known as strata: lacunosum-moleculare, radiatum, lucidum, pyramidal, and oriens. CA2 and CA1 also have these layers except for the stratum lucidum.

The input to the hippocampus (from varying cortical and subcortical structures) comes from the entorhinal cortex via the perforant path. The entorhinal cortex (EC) is strongly and reciprocally connected with many cortical and subcortical structures as well as with the brainstem. Different thalamic nuclei (from the anterior and midline groups), the medial septal nucleus, the supramammillary nucleus of the hypothalamus, and the raphe nuclei and locus coeruleus of the brainstem all send axons to the EC, so that it serves as the interface between the neocortex, these other connections, and the hippocampus. The EC is located in the parahippocampal gyrus, a cortical region adjacent to the hippocampus. This gyrus conceals the hippocampus. The parahippocampal gyrus is adjacent to the perirhinal cortex, which plays an important role in the visual recognition of complex objects. There is also substantial evidence that it makes a contribution to memory, which can be distinguished from the contribution of the hippocampus. It is apparent that complete amnesia occurs only when both the hippocampus and the parahippocampus are damaged.

Circuitry

The major input to the hippocampus is through the entorhinal cortex (EC), whereas its major output is via CA1 to the subiculum. Information reaches CA1 via two main pathways, direct and indirect. Axons from the EC that originate in layer III are the origin of the direct perforant pathway and form synapses on the very distal apical dendrites of CA1 neurons. Conversely, axons originating from layer II are the origin of the indirect pathway, and information reaches CA1 via the trisynaptic circuit. In the initial part of this pathway, the axons project through the perforant pathway to the granule cells of the dentate gyrus (first synapse). From there, the information follows via the mossy fibres to CA3 (second synapse). From CA3, axons called Schaffer collaterals leave the deep part of the cell body and loop up to the apical dendrites and then extend to CA1 (third synapse). Axons from CA1 then project back to the entorhinal cortex, completing the circuit.

Basket cells in CA3 receive excitatory input from the pyramidal cells and then give an inhibitory feedback to the pyramidal cells. This recurrent inhibition is a simple feedback circuit that can dampen excitatory responses in the hippocampus. The pyramidal cells give a recurrent excitation which is an important mechanism found in some memory processing microcircuits.

Several other connections play important roles in hippocampal function. Beyond the output to the EC, additional output pathways go to other cortical areas including the prefrontal cortex. A major output goes via the fornix to the lateral septal area and to the mammillary body of the hypothalamus (which the fornix interconnects with the hippocampus). The hippocampus receives modulatory input from the serotonin, norepinephrine, and dopamine systems, and from the nucleus reuniens of the thalamus to field CA1. A very important projection comes from the medial septal nucleus, which sends cholinergic and gamma-aminobutyric acid (GABA) stimulating fibers (GABAergic fibers) to all parts of the hippocampus.
The inputs from the medial septal nucleus play a key role in controlling the physiological state of the hippocampus; destruction of this nucleus abolishes the hippocampal theta rhythm and severely impairs certain types of memory.

Regions

Areas of the hippocampus are shown to be functionally and anatomically distinct. The dorsal hippocampus (DH), ventral hippocampus (VH) and intermediate hippocampus serve different functions, project with differing pathways, and contain differing proportions of place cells. The dorsal hippocampus serves for spatial memory, verbal memory, and learning of conceptual information. Using the radial arm maze, lesions in the DH were shown to cause spatial memory impairment while VH lesions did not. Its projecting pathways include the medial septal nucleus and supramammillary nucleus. The dorsal hippocampus also has more place cells than both the ventral and intermediate hippocampal regions.

The intermediate hippocampus has overlapping characteristics with both the ventral and dorsal hippocampus. Using anterograde tracing methods, Cenquizca and Swanson (2007) located moderate projections to two primary olfactory cortical areas and prelimbic areas of the medial prefrontal cortex. This region has the smallest number of place cells. The ventral hippocampus functions in fear conditioning and affective processes. Anagnostaras et al. (2002) showed that alterations to the ventral hippocampus reduced the amount of information sent to the amygdala by the dorsal and ventral hippocampus, consequently altering fear conditioning in rats. Historically, the earliest widely held hypothesis was that the hippocampus is involved in olfaction. This idea was cast into doubt by a series of anatomical studies that did not find any direct projections to the hippocampus from the olfactory bulb. However, later work did confirm that the olfactory bulb does project into the ventral part of the lateral entorhinal cortex, and field CA1 in the ventral hippocampus sends axons to the main olfactory bulb, the anterior olfactory nucleus, and to the primary olfactory cortex. There continues to be some interest in hippocampal olfactory responses, in particular, the role of the hippocampus in memory for odors, but few specialists today believe that olfaction is its primary function.

Function

Theories of hippocampal functions

Over the years, three main ideas of hippocampal function have dominated the literature: response inhibition, episodic memory, and spatial cognition. The behavioral inhibition theory (caricatured by John O'Keefe and Lynn Nadel as "slam on the brakes!") was very popular up to the 1960s. It derived much of its justification from two observations: first, that animals with hippocampal damage tend to be hyperactive; second, that animals with hippocampal damage often have difficulty learning to inhibit responses that they have previously been taught, especially if the response requires remaining quiet as in a passive avoidance test. British psychologist Jeffrey Gray developed this line of thought into a full-fledged theory of the role of the hippocampus in anxiety. The inhibition theory is currently the least popular of the three.

The second major line of thought relates the hippocampus to memory.
Although it had historical precursors, this idea derived its main impetus from a famous report by American neurosurgeon William Beecher Scoville and British-Canadian neuropsychologist Brenda Milner describing the results of surgical destruction of the hippocampi when trying to relieve epileptic seizures in an American man, Henry Molaison, known until his death in 2008 as "Patient H.M." The unexpected outcome of the surgery was severe anterograde and partial retrograde amnesia; Molaison was unable to form new episodic memories after his surgery and could not remember any events that occurred just before his surgery, but he did retain memories of events that occurred many years earlier extending back into his childhood. This case attracted such widespread professional interest that Molaison became the most intensively studied subject in medical history. In the ensuing years, other patients with similar levels of hippocampal damage and amnesia (caused by accident or disease) have also been studied, and thousands of experiments have studied the physiology of activity-driven changes in synaptic connections in the hippocampus. There is now universal agreement that the hippocampi play some sort of important role in memory; however, the precise nature of this role remains widely debated. A later theory (2020) proposed – without questioning its role in spatial cognition – that the hippocampus encodes new episodic memories by associating representations in the newborn granule cells of the dentate gyrus and arranging those representations sequentially in the CA3 by relying on the phase precession generated in the entorhinal cortex.

The third important theory of hippocampal function relates the hippocampus to space. The spatial theory was originally championed by O'Keefe and Nadel, who were influenced by American psychologist E.C. Tolman's theories about "cognitive maps" in humans and animals. O'Keefe and his student Dostrovsky in 1971 discovered neurons in the rat hippocampus that appeared to them to show activity related to the rat's location within its environment. Despite skepticism from other investigators, O'Keefe and his co-workers, especially Lynn Nadel, continued to investigate this question, in a line of work that eventually led to their very influential 1978 book The Hippocampus as a Cognitive Map. There is now almost universal agreement that hippocampal function plays an important role in spatial coding, but the details are widely debated.

Later research has focused on trying to bridge the disconnect between the two main views of hippocampal function as being split between memory and spatial cognition. In some studies, these areas have been expanded to the point of near convergence. In an attempt to reconcile the two disparate views, a broader view of hippocampal function has been suggested, encompassing both the organisation of experience (mental mapping, as per Tolman's original concept in 1948) and the directional behaviour involved in all areas of cognition. On this account, the hippocampus is part of a broader system that incorporates both the memory and the spatial perspectives through the use of a wide scope of cognitive maps. This relates to the purposive behaviorism born of Tolman's original goal of identifying the complex cognitive mechanisms and purposes that guided behaviour.
It has also been proposed that the spiking activity of hippocampal neurons is associated spatially, and it was suggested that the mechanisms of memory and planning both evolved from mechanisms of navigation and that their neuronal algorithms were basically the same. Many studies have made use of neuroimaging techniques such as functional magnetic resonance imaging (fMRI), and a functional role in approach-avoidance conflict has been noted. The anterior hippocampus is seen to be involved in decision-making under approach-avoidance conflict processing. It is suggested that the memory, spatial cognition, and conflict processing functions may be seen as working together and not mutually exclusive.

Role in memory

Psychologists and neuroscientists generally agree that the hippocampus plays an important role in the formation of new memories about experienced events (episodic or autobiographical memory). Part of this function is hippocampal involvement in the detection of new events, places and stimuli. Some researchers regard the hippocampus as part of a larger medial temporal lobe memory system responsible for general declarative memory (memories that can be explicitly verbalized; these would include, for example, memory for facts in addition to episodic memory). The hippocampus also encodes emotional context from the amygdala. This is partly why returning to a location where an emotional event occurred may evoke that emotion. There is a deep emotional connection between episodic memories and places.

Due to bilateral symmetry the brain has a hippocampus in each cerebral hemisphere. If damage to the hippocampus occurs in only one hemisphere, leaving the structure intact in the other hemisphere, the brain can retain near-normal memory functioning. Severe damage to the hippocampi in both hemispheres results in profound difficulties in forming new memories (anterograde amnesia) and often also affects memories formed before the damage occurred (retrograde amnesia). Although the retrograde effect normally extends many years back before the brain damage, in some cases older memories remain. This retention of older memories leads to the idea that consolidation over time involves the transfer of memories out of the hippocampus to other parts of the brain. Experiments using intrahippocampal transplantation of hippocampal cells in primates with neurotoxic lesions of the hippocampus have shown that the hippocampus is required for the formation and recall, but not the storage, of memories. It has been shown that a decrease in the volume of various parts of the hippocampus leads to specific memory impairments. In particular, efficiency of verbal memory retention is related to the anterior parts of the right and left hippocampus. The right head of the hippocampus is more involved in executive functions and regulation during verbal memory recall. The tail of the left hippocampus tends to be closely related to verbal memory capacity.

Damage to the hippocampus does not affect some types of memory, such as the ability to learn new skills (playing a musical instrument or solving certain types of puzzles, for example). This fact suggests that such abilities depend on different types of memory (procedural memory) and different brain regions. Furthermore, amnesic patients frequently show "implicit" memory for experiences even in the absence of conscious knowledge.
For example, patients asked to guess which of two faces they have seen most recently may give the correct answer most of the time in spite of stating that they have never seen either of the faces before. Some researchers distinguish between conscious recollection, which depends on the hippocampus, and familiarity, which depends on portions of the medial temporal lobe.

When rats are exposed to an intense learning event, they may retain a life-long memory of the event even after a single training session. The memory of such an event appears to be first stored in the hippocampus, but this storage is transient. Much of the long-term storage of the memory seems to take place in the anterior cingulate cortex. When such an intense learning event was experimentally applied, more than 5,000 differently methylated DNA regions appeared in the hippocampus neuronal genome of the rats at one hour and at 24 hours after training. These alterations in methylation pattern occurred at many genes that were down-regulated, often due to the formation of new 5-methylcytosine sites in CpG rich regions of the genome. Furthermore, many other genes were upregulated, likely often due to the removal of methyl groups from previously existing 5-methylcytosines (5mCs) in DNA. Demethylation of 5mC can be carried out by several proteins acting in concert, including TET enzymes as well as enzymes of the DNA base excision repair pathway.

Between systems model

The between-systems memory interference model describes the inhibition of non-hippocampal systems of memory during concurrent hippocampal activity. Specifically, when the hippocampus was inactive, non-hippocampal systems located elsewhere in the brain were found to consolidate memory in its place. However, when the hippocampus was reactivated, memory traces consolidated by non-hippocampal systems were not recalled, suggesting that the hippocampus interferes with long-term memory consolidation in other memory-related systems. One of the major implications that this model illustrates is the dominant effect of the hippocampus on non-hippocampal networks when information is incongruent. With this information in mind, future directions could lead towards the study of these non-hippocampal memory systems through hippocampal inactivation, further expanding the labile constructs of memory. Additionally, many theories of memory are holistically based around the hippocampus. This model could add beneficial information to hippocampal research and memory theories such as the multiple trace theory. Lastly, the between-systems memory interference model allows researchers to evaluate their results on a multiple-systems model, suggesting that some effects may not be simply mediated by one portion of the brain.

Role in spatial memory and navigation

Studies on freely moving rats and mice have shown many hippocampal neurons to act as place cells that cluster in place fields, and these fire bursts of action potentials when the animal passes through a particular location. This place-related neural activity in the hippocampus has also been reported in monkeys that were moved around a room whilst in a restraint chair. However, the place cells may have fired in relation to where the monkey was looking rather than to its actual location in the room. Over many years, many studies have been carried out on place-responses in rodents, which have given a large amount of information.
Place cell responses are shown by pyramidal cells in the hippocampus and by granule cells in the dentate gyrus. Other cells in smaller proportion are inhibitory interneurons, and these often show place-related variations in their firing rate that are much weaker. There is little, if any, spatial topography in the representation; in general, cells lying next to each other in the hippocampus have uncorrelated spatial firing patterns. Place cells are typically almost silent when a rat is moving around outside the place field but reach sustained rates as high as 40 Hz when the rat is near the center. Neural activity sampled from 30 to 40 randomly chosen place cells carries enough information to allow a rat's location to be reconstructed with high confidence. The size of place fields varies in a gradient along the length of the hippocampus, with cells at the dorsal end showing the smallest fields, cells near the center showing larger fields, and cells at the ventral tip showing fields that cover the entire environment. In some cases, the firing rate of hippocampal cells depends not only on place but also the direction a rat is moving, the destination toward which it is traveling, or other task-related variables. The firing of place cells is timed in relation to local theta waves, a process termed phase precession.

Cells with location-specific firing patterns have been reported during a study of people with drug-resistant epilepsy. They were undergoing an invasive procedure to localize the source of their seizures, with a view to surgical resection. They had diagnostic electrodes implanted in their hippocampi and then used a computer to move around in a virtual reality town. Similar brain imaging studies in navigation have shown the hippocampus to be active. A study was carried out on taxi drivers. London's black cab drivers need to learn the locations of a large number of places and the fastest routes between them to pass a strict test known as The Knowledge in order to gain a license to operate. A study showed that the posterior part of the hippocampus is larger in these drivers than in the general public, and that a positive correlation exists between the length of time served as a driver and the increase in the volume of this part. It was also found that the total volume of the hippocampus was unchanged, as the increase seen in the posterior part was made at the expense of the anterior part, which showed a relative decrease in size. There have been no reported adverse effects from this disparity in hippocampal proportions. Another study showed opposite findings in blind individuals. The anterior part of the right hippocampus was larger and the posterior part was smaller, compared with sighted individuals.

There are several navigational cells in the brain that are either in the hippocampus itself or are strongly connected to it, such as the speed cells present in the medial entorhinal cortex. Together these cells form a network that serves as spatial memory. The first of such cells discovered in the 1970s were the place cells, which led to the idea of the hippocampus acting to give a neural representation of the environment in a cognitive map. When the hippocampus is dysfunctional, orientation is affected; people may have difficulty in remembering how they arrived at a location and how to proceed further. Getting lost is a common symptom of amnesia.
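The population-decoding claim above (that a few dozen place cells suffice to reconstruct position) can be illustrated with a toy simulation. The Python sketch below is purely illustrative and does not reproduce any published decoder: it assumes idealized Gaussian place fields on a 1-D track and Poisson spike counts, and all names and parameter values are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear track, 100 cm, tiled by 35 place cells with Gaussian
# tuning curves (peak ~20 Hz, width ~8 cm) at random centers.
track = np.linspace(0.0, 100.0, 201)
centers = rng.uniform(0.0, 100.0, size=35)

def rates(x):
    """Expected firing rate (Hz) of every cell when the animal is at x."""
    return 0.1 + 20.0 * np.exp(-((x - centers) ** 2) / (2.0 * 8.0 ** 2))

true_x = 42.0
window = 0.5  # spike-counting window in seconds
counts = rng.poisson(rates(true_x) * window)

# Maximum-likelihood decoding: choose the position whose expected
# Poisson counts best explain the observed spike counts.
log_like = np.array([
    np.sum(counts * np.log(rates(x) * window) - rates(x) * window)
    for x in track
])
decoded_x = track[np.argmax(log_like)]
print(f"true position: {true_x} cm, decoded: {decoded_x:.1f} cm")
```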
Studies with animals have shown that an intact hippocampus is required for initial learning and long-term retention of some spatial memory tasks, in particular ones that require finding the way to a hidden goal. Other cells have been discovered since the finding of the place cells in the rodent brain that are either in the hippocampus or the entorhinal cortex. These have been assigned as head direction cells, grid cells and boundary cells. Speed cells are thought to provide input to the hippocampal grid cells.

Role in approach-avoidance conflict processing

Approach-avoidance conflict happens when a situation is presented that can either be rewarding or punishing, and the ensuing decision-making has been associated with anxiety. fMRI findings from studies in approach-avoidance decision-making found evidence for a functional role that is not explained by either long-term memory or spatial cognition. Overall findings showed that the anterior hippocampus is sensitive to conflict, and that it may be part of a larger cortical and subcortical network seen to be important in decision-making in uncertain conditions.

A review makes reference to a number of studies that show the involvement of the hippocampus in conflict tasks. The authors suggest that one challenge is to understand how conflict processing relates to the functions of spatial navigation and memory and how all of these functions need not be mutually exclusive.

Role in social memory

The hippocampus has received renewed attention for its role in social memory. Epileptic human subjects with depth electrodes in the left posterior, left anterior or right anterior hippocampus demonstrate distinct, individual cell responses when presented with faces of presumably recognizable famous people. Associations among facial and vocal identity were similarly mapped to the hippocampus of rhesus monkeys. Single neurons in the CA1 and CA3 responded strongly to social stimulus recognition by MRI. The CA2 was not distinguished, and may comprise a proportion of the claimed CA1 cells in the study. The dorsal CA2 and ventral CA1 subregions of the hippocampus have been implicated in social memory processing. Genetic inactivation of CA2 pyramidal neurons leads to pronounced loss of social memory, while maintaining intact sociability in mice. Similarly, ventral CA1 pyramidal neurons have also been demonstrated as critical for social memory under optogenetic control in mice.

Physiology

The hippocampus shows two major "modes" of activity, each associated with a distinct pattern of neural population activity and waves of electrical activity as measured by an electroencephalogram (EEG). These modes are named after the EEG patterns associated with them: theta and large irregular activity (LIA). The main characteristics described below are for the rat, which is the animal most extensively studied. The theta mode appears during states of active, alert behavior (especially locomotion), and also during REM (dreaming) sleep. In the theta mode, the EEG is dominated by large regular waves with a frequency range of 6 to 9 Hz, and the main groups of hippocampal neurons (pyramidal cells and granule cells) show sparse population activity, which means that in any short time interval, the great majority of cells are silent, while the small remaining fraction fire at relatively high rates, up to 50 spikes in one second for the most active of them. An active cell typically stays active for half a second to a few seconds.
As the rat behaves, the active cells fall silent and new cells become active, but the overall percentage of active cells remains more or less constant. In many situations, cell activity is determined largely by the spatial location of the animal, but other behavioral variables also clearly influence it.

The LIA mode appears during slow-wave (non-dreaming) sleep, and also during states of waking immobility such as resting or eating. In the LIA mode, the EEG is dominated by sharp waves that are randomly timed large deflections of the EEG signal lasting for 25–50 milliseconds. Sharp waves are frequently generated in sets, with sets containing up to 5 or more individual sharp waves and lasting up to 500 ms. The spiking activity of neurons within the hippocampus is highly correlated with sharp wave activity. Most neurons decrease their firing rate between sharp waves; however, during a sharp wave, there is a dramatic increase in firing rate in up to 10% of the hippocampal population. These two hippocampal activity modes can be seen in primates as well as rats, with the exception that it has been difficult to see robust theta rhythmicity in the primate hippocampus. There are, however, qualitatively similar sharp waves and similar state-dependent changes in neural population activity.

Theta rhythm

The underlying currents producing the theta wave are generated mainly by densely packed neural layers of the entorhinal cortex, CA3, and the dendrites of pyramidal cells. The theta wave is one of the largest signals seen on EEG, and is known as the hippocampal theta rhythm. In some situations the EEG is dominated by regular waves at 3 to 10 Hz, often continuing for many seconds. These reflect subthreshold membrane potentials and strongly modulate the spiking of hippocampal neurons and synchronise across the hippocampus in a travelling wave pattern. The trisynaptic circuit is a relay of neurotransmission in the hippocampus that interacts with many brain regions. From rodent studies it has been proposed that the trisynaptic circuit generates the hippocampal theta rhythm.

Theta rhythmicity is very obvious in rabbits and rodents and also clearly present in cats and dogs. Whether theta can be seen in primates is not yet clear. In rats (the animals that have been the most extensively studied), theta is seen mainly in two conditions: first, when an animal is walking or in some other way actively interacting with its surroundings; second, during REM sleep. The function of theta has not yet been convincingly explained although numerous theories have been proposed. The most popular hypothesis has been to relate it to learning and memory. An example would be the phase with which theta rhythms, at the time of stimulation of a neuron, shape the effect of that stimulation upon its synapses. What is meant here is that theta rhythms may affect those aspects of learning and memory that are dependent upon synaptic plasticity. It is well established that lesions of the medial septum, the central node of the theta system, cause severe disruptions of memory. However, the medial septum is more than just the controller of theta; it is also the main source of cholinergic projections to the hippocampus. It has not been established that septal lesions exert their effects specifically by eliminating the theta rhythm.

Sharp waves

During sleep or during resting, when an animal is not engaged with its surroundings, the hippocampal EEG shows a pattern of irregular slow waves, somewhat larger in amplitude than theta waves.
This pattern is occasionally interrupted by large surges called sharp waves. These events are associated with bursts of spike activity lasting 50 to 100 milliseconds in pyramidal cells of CA3 and CA1. They are also associated with short-lived high-frequency EEG oscillations called "ripples", with frequencies in the range 150 to 200 Hz in rats, and together they are known as sharp waves and ripples. Sharp waves are most frequent during sleep when they occur at an average rate of around 1 per second (in rats) but in a very irregular temporal pattern. Sharp waves are less frequent during inactive waking states and are usually smaller. Sharp waves have also been observed in humans and monkeys. In macaques, sharp waves are robust but do not occur as frequently as in rats.

Sharp waves appear to be associated with memory. Numerous later studies have reported that when hippocampal place cells have overlapping spatial firing fields (and therefore often fire in near-simultaneity), they tend to show correlated activity during sleep following the behavioral session. This enhancement of correlation, commonly known as reactivation, has been found to occur mainly during sharp waves. It has been proposed that sharp waves are, in fact, reactivations of neural activity patterns that were memorized during behavior, driven by strengthening of synaptic connections within the hippocampus. This idea forms a key component of the "two-stage memory" theory, advocated by Buzsáki and others, which proposes that memories are stored within the hippocampus during behavior and then later transferred to the neocortex during sleep. In Hebbian theory, sharp waves are seen as persistently repeated stimulation of postsynaptic cells by presynaptic cells, which is suggested to drive synaptic changes in the cortical targets of hippocampal output pathways. Suppression of sharp waves and ripples in sleep or during immobility can interfere with memories expressed at the level of behavior; nonetheless, the newly formed CA1 place cell code can re-emerge even after a sleep with abolished sharp waves and ripples, in spatially non-demanding tasks.

Long-term potentiation

Since at least the time of Ramón y Cajal (1852–1934), psychologists have speculated that the brain stores memory by altering the strength of connections between neurons that are simultaneously active. This idea was formalized by Donald Hebb in 1949, but for many years remained unexplained. In 1973, Tim Bliss and Terje Lømo described a phenomenon in the rabbit hippocampus that appeared to meet Hebb's specifications: a change in synaptic responsiveness induced by brief strong activation and lasting for hours or days or longer. This phenomenon was soon referred to as long-term potentiation (LTP). As a candidate mechanism for long-term memory, LTP has since been studied intensively, and a great deal has been learned about it. However, the complexity and variety of the intracellular signalling cascades that can trigger LTP are acknowledged as preventing a more complete understanding. The hippocampus is a particularly favorable site for studying LTP because of its densely packed and sharply defined layers of neurons, but similar types of activity-dependent synaptic change have also been observed in many other brain areas. The best-studied form of LTP has been seen in CA1 of the hippocampus and occurs at synapses that terminate on dendritic spines and use the neurotransmitter glutamate.
The synaptic changes depend on a special type of glutamate receptor, the N-methyl-D-aspartate (NMDA) receptor, a cell surface receptor which has the special property of allowing calcium to enter the postsynaptic spine only when presynaptic activation and postsynaptic depolarization occur at the same time. Drugs that interfere with NMDA receptors block LTP and have major effects on some types of memory, especially spatial memory. Genetically modified mice engineered to disable the LTP mechanism also generally show severe memory deficits.

Clinical significance

Aging

Normal aging is associated with a gradual decline in some types of memory, including episodic memory and working memory (or short-term memory). Because the hippocampus is thought to play a central role in memory, there has been considerable interest in the possibility that age-related declines could be caused by hippocampal deterioration. Some early studies reported substantial loss of neurons in the hippocampus of elderly people, but later studies using more precise techniques found only minimal differences. Similarly, some MRI studies have reported shrinkage of the hippocampus in elderly people, but other studies have failed to reproduce this finding. There is, however, a reliable relationship between the size of the hippocampus and memory performance, so that where there is age-related shrinkage, memory performance is impaired. There are also reports that memory tasks tend to produce less hippocampal activation in the elderly than in the young. Furthermore, a randomized control trial published in 2011 found that aerobic exercise could increase the size of the hippocampus in adults aged 55 to 80 and also improve spatial memory.

Dementia

Dementia is very often caused by cerebral ischemia, which is believed to trigger changes in the hippocampus. Changes in CA1, the hippocampal area that underlies episodic memory, cause episodic memory impairment, the earliest symptom of post-ischemic dementia.

Stress

The hippocampus contains high levels of glucocorticoid receptors, which make it more vulnerable to long-term stress than most other brain areas. There is evidence that humans having experienced severe, long-lasting traumatic stress show atrophy of the hippocampus more than of other parts of the brain. These effects show up in post-traumatic stress disorder, and they may contribute to the hippocampal atrophy reported in schizophrenia and severe depression. Anterior hippocampal volume in children is positively correlated with parental family income, and this correlation is thought to be mediated by income-related stress. A study has revealed atrophy as a result of depression, but this can be stopped with anti-depressants even if they are not effective in relieving other symptoms.

Chronic stress resulting in elevated levels of glucocorticoids, notably of cortisol, is seen to be a cause of neuronal atrophy in the hippocampus. This atrophy results in a smaller hippocampal volume which is also seen in Cushing's syndrome. The higher levels of cortisol in Cushing's syndrome are usually the result of medications taken for other conditions. Neuronal loss also occurs as a result of impaired neurogenesis. Another factor that contributes to a smaller hippocampal volume is that of dendritic retraction where dendrites are shortened in length and reduced in number, in response to increased glucocorticoids. This dendritic retraction is reversible.
After treatment with medication to reduce cortisol in Cushing's syndrome, the hippocampal volume is seen to be restored by as much as 10%. This change is seen to be due to the reforming of the dendrites. This dendritic restoration can also happen when stress is removed. There is, however, evidence derived mainly from studies using rats that stress occurring shortly after birth can affect hippocampal function in ways that persist throughout life. Sex-specific responses to stress have also been demonstrated in the rat to have an effect on the hippocampus. Chronic stress in the male rat showed dendritic retraction and cell loss in the CA3 region, but this was not shown in the female. This was thought to be due to neuroprotective ovarian hormones. In rats, DNA damage increases in the hippocampus under conditions of stress.

Epilepsy

The hippocampus is one of the few brain regions where new neurons are generated. This process of neurogenesis is confined to the dentate gyrus. Neurogenesis can be positively affected by exercise or negatively affected by epileptic seizures. Seizures in temporal lobe epilepsy can affect the normal development of new neurons and can cause tissue damage. Hippocampal sclerosis, specific to the mesial temporal lobe, is the most common type of such tissue damage. It is not yet clear, however, whether the epilepsy is usually caused by hippocampal abnormalities or whether the hippocampus is damaged by cumulative effects of seizures. However, in experimental settings where repetitive seizures are artificially induced in animals, hippocampal damage is a frequent result. This may be a consequence of the concentration of excitable glutamate receptors in the hippocampus. Hyperexcitability can lead to cytotoxicity and cell death. It may also have something to do with the hippocampus being a site where new neurons continue to be created throughout life, and to abnormalities in this process.

Schizophrenia

The causes of schizophrenia are not well understood, but numerous abnormalities of brain structure have been reported. The most thoroughly investigated alterations involve the cerebral cortex, but effects on the hippocampus have also been described. Many reports have found reductions in the size of the hippocampus in people with schizophrenia. The left hippocampus seems to be affected more than the right. The changes noted have largely been accepted to be the result of abnormal development. It is unclear whether hippocampal alterations play any role in causing the psychotic symptoms that are the most important feature of schizophrenia. It has been suggested that on the basis of experimental work using animals, hippocampal dysfunction might produce an alteration of dopamine release in the basal ganglia, thereby indirectly affecting the integration of information in the prefrontal cortex. It has also been suggested that hippocampal dysfunction might account for the disturbances in long-term memory frequently observed. MRI studies have found a smaller brain volume and larger ventricles in people with schizophrenia; however, researchers do not know if the shrinkage is from the schizophrenia or from the medication. The hippocampus and thalamus have been shown to be reduced in volume; and the volume of the globus pallidus is increased. Cortical patterns are altered, and a reduction in the volume and thickness of the cortex particularly in the frontal and temporal lobes has been noted.
It has further been proposed that many of the changes seen are present at the start of the disorder, which gives weight to the theory that there is abnormal neurodevelopment. The hippocampus has been seen as central to the pathology of schizophrenia, in both its neural and its physiological effects. It has been generally accepted that there is an abnormal synaptic connectivity underlying schizophrenia. Several lines of evidence implicate changes in synaptic organization and connectivity in and from the hippocampus. Many studies have found dysfunction in the synaptic circuitry within the hippocampus and in its activity on the prefrontal cortex. The glutamatergic pathways have been seen to be largely affected. The subfield CA1 is seen to be the least involved of the subfields, and CA4 and the subiculum have been reported elsewhere as the most implicated areas. One review concluded that the pathology could be due to genetics, faulty neurodevelopment or abnormal neural plasticity. It was further concluded that schizophrenia is not due to any known neurodegenerative disorder. Oxidative DNA damage is substantially increased in the hippocampus of elderly patients with chronic schizophrenia. Transient global amnesia Transient global amnesia is a dramatic, sudden, temporary, near-total loss of short-term memory. Various causes have been hypothesized, including ischemia, epilepsy, migraine and disturbance of cerebral venous blood flow, leading to ischemia of structures such as the hippocampus that are involved in memory. No cause has been scientifically proven. However, diffusion-weighted MRI studies taken from 12 to 24 hours following an episode have shown small dot-like lesions in the hippocampus. These findings suggest a possible implication of CA1 neurons made vulnerable by metabolic stress. PTSD Some studies show a correlation between reduced hippocampal volume and post-traumatic stress disorder (PTSD). A study of Vietnam War combat veterans with PTSD showed a 20% reduction in the volume of their hippocampus compared with veterans with no such symptoms. This finding was not replicated in those with chronic PTSD who were traumatized by the 1988 air show plane crash at Ramstein, Germany. Non-combat twin brothers of Vietnam veterans with PTSD also had smaller hippocampi than other controls, raising questions about the nature of the correlation. A 2016 study strengthened the theory that a smaller hippocampus increases the risk for post-traumatic stress disorder, and a larger hippocampus increases the likelihood of efficacious treatment. Microcephaly Hippocampal atrophy has been characterized in those with microcephaly. Mouse models with Wdr62 mutations that recapitulate human point mutations show a deficiency in hippocampal development and neurogenesis. Other animals Other mammals The hippocampus has a generally similar appearance across the range of mammals, from egg-laying mammals such as the echidna to humans and other primates. The hippocampal-size-to-body-size ratio broadly increases, being about twice as large for primates as for the echidna. It does not, however, increase at anywhere close to the rate of the neocortex-to-body-size ratio. Therefore, the hippocampus takes up a much larger fraction of the cortical mantle in rodents than in primates. In adult humans, the volume of the hippocampus on each side of the brain is about 3.0 to 3.5 cm3, as compared to 320 to 420 cm3 for the neocortex. 
There is also a general relationship between the size of the hippocampus and spatial memory. When comparisons are made between similar species, those that have a greater capacity for spatial memory tend to have larger hippocampal volumes. This relationship also extends to sex differences; in species where males and females show strong differences in spatial memory ability, they also tend to show corresponding differences in hippocampal volume. Other vertebrates Non-mammalian species do not have a brain structure that looks like the mammalian hippocampus, but they have one that is considered homologous to it. The hippocampus, as pointed out above, is in essence part of the allocortex. Only mammals have a fully developed cortex, but the structure it evolved from, called the pallium, is present in all vertebrates, even the most primitive ones such as the lamprey or hagfish. The pallium is usually divided into three zones: medial, lateral and dorsal. The medial pallium forms the precursor of the hippocampus. It does not resemble the hippocampus visually, because the layers are not warped into an S shape or enfolded by the dentate gyrus, but the homology is indicated by strong chemical and functional affinities. There is now evidence that these hippocampal-like structures are involved in spatial cognition in reptiles and fish. Birds In birds, the correspondence is sufficiently well established that most anatomists refer to the medial pallial zone as the "avian hippocampus". Numerous species of birds have strong spatial skills, in particular those that cache food. There is evidence that food-caching birds have a larger hippocampus than other types of birds and that damage to the hippocampus causes impairments in spatial memory. Fish The story for fish is more complex. In teleost fish (which make up the great majority of existing species), the forebrain is distorted in comparison to that of other types of vertebrates: most neuroanatomists believe that the teleost forebrain is in essence everted, like a sock turned inside-out, so that structures that lie in the interior, next to the ventricles, in most vertebrates are found on the outside in teleost fish, and vice versa. One of the consequences of this is that the medial pallium (the "hippocampal" zone) of a typical vertebrate is thought to correspond to the lateral pallium of a typical fish. Several types of fish (particularly goldfish) have been shown experimentally to have strong spatial memory abilities, even forming "cognitive maps" of the areas they inhabit. There is evidence that damage to the lateral pallium impairs spatial memory. It is not yet known whether the medial pallium plays a similar role in even more primitive vertebrates, such as sharks and rays, or even lampreys and hagfish. Insects and molluscs Some types of insects, and molluscs such as the octopus, also have strong spatial learning and navigation abilities, but these appear to work differently from the mammalian spatial system, so there is as yet no good reason to think that they have a common evolutionary origin; nor is there sufficient similarity in brain structure to enable anything resembling a "hippocampus" to be identified in these species. Some have proposed, however, that the insect's mushroom bodies may have a function similar to that of the hippocampus.
Biology and health sciences
Nervous system
Biology
53949
https://en.wikipedia.org/wiki/Black-and-white%20colobus
Black-and-white colobus
Black-and-white colobuses (or colobi) are Old World monkeys of the genus Colobus, native to Africa. They are closely related to the red colobus monkeys of genus Piliocolobus. There are five species of this monkey, and at least eight subspecies. They are generally found in high-density forests where they forage on leaves, flowers and fruit. Social groups of colobus are diverse, varying in structure from group to group. Resident-egalitarian and allomothering relationships have been observed among the female population. Complex behaviours have also been observed in these monkeys, including greeting rituals and varying group sleeping patterns. Colobi play a significant role in seed dispersal. Etymology The word "colobus" comes from the Greek κολοβός (kolobós, "docked", "maimed") and refers to the stump-like thumb. Taxonomy Fossil species include †Colobus flandrini and †Colobus freedmani. Behaviour and ecology Colobus habitats include primary and secondary forests, riverine forests, and wooded grasslands; they are found more in higher-density logged forests than in other primary forests. Their ruminant-like digestive systems have enabled them to occupy niches that are inaccessible to other primates: they are herbivorous, eating leaves, fruit, flowers, lichen, herbaceous vegetation and bark. Colobuses are important for seed dispersal through their sloppy eating habits, as well as through their digestive systems. Leaf toughness influences colobus foraging efficiency. Tougher leaves correlate negatively with ingestion rate (grams per minute), as they are costly in terms of mastication, but positively with chewing investment (chews per gram). Individuals spend approximately 150 minutes actively feeding each day. In a montane habitat, colobus are known to utilise lichen as a fallback food during periods of low food availability. Social patterns and morphology Colobuses live in territorial groups that vary in both size (3–15 individuals) and structure. It was originally believed that the structure of these groups consisted of one male and about 8 female members. However, more recent observations have shown variation in structure and the number of males within groups, with one species forming multi-male, multi-female groups in a multilevel society; in some populations, supergroups exceeding 500 individuals form. There appears to be a dominant male, whilst there is no clear dominance among female members. Juveniles are treated as lower-ranking than subadults, and subadults likewise rank below adults. Colobuses do not display any seasonal breeding pattern. As suggested by their name, adult colobi have black fur with white features. White fur surrounds their facial region and a "U" shape of long white fur runs along the sides of their body. Newborn colobi are completely white with a pink face. Cases of allomothering are documented, which means members of the troop other than the infant's biological mother care for it. Allomothering is believed to increase inclusive fitness or to provide maternal practice for the benefit of future offspring. Social behaviours Many members participate in a greeting ritual when they are reunited with familiar individuals, an act of social reaffirmation. The greeting behaviour is generally carried out by the approaching monkey and often is followed with grooming. Three of their greeting behaviours involve physical contact. 
These include mounting, head mounting (grasping the shoulders) and embracing. These behaviours do not seem to have any relationship with mating or courting. Black-and-white colobuses have complex sleeping patterns. They sleep in trees near a food source, which may serve to save energy. Groups seem to switch sleeping locations regularly (possibly to reduce the risk of parasites and to make their location harder to predict) and generally do not sleep near other groups. They also tend to sleep more tightly together on nights with good visibility. They sleep in the mid- to upper sections of tall trees, which allows them to watch for predators and protects them from ground and aerial predators while they are asleep. Although there is no obvious preference for tree type, they have often been observed in Antiaris toxicaria. Conservation They are prey for many forest predators such as leopards and chimpanzees, and are threatened by hunting for the bushmeat trade, logging, and habitat destruction. Individuals are more vigilant toward conspecific threats in the low canopy; they also spend less time scanning when they are around familiar group members than around unfamiliar ones. There is no clear difference in vigilance between males and females. However, there is a positive correlation between mean monthly vigilance and encounter rates. Male vigilance generally increases during mating.
Biology and health sciences
Old World monkeys
Animals
53951
https://en.wikipedia.org/wiki/Diarrhea
Diarrhea
Diarrhea (American English), also spelled diarrhoea or diarrhœa (British English), is the condition of having at least three loose, liquid, or watery bowel movements in a day. It often lasts for a few days and can result in dehydration due to fluid loss. Signs of dehydration often begin with loss of the normal stretchiness of the skin and irritable behaviour. This can progress to decreased urination, loss of skin color, a fast heart rate, and a decrease in responsiveness as it becomes more severe. Loose but non-watery stools in babies who are exclusively breastfed, however, are normal. The most common cause is an infection of the intestines due to a virus, bacterium, or parasite, a condition also known as gastroenteritis. These infections are often acquired from food or water that has been contaminated by feces, or directly from another person who is infected. The three types of diarrhea are: short-duration watery diarrhea, short-duration bloody diarrhea, and persistent diarrhea (lasting more than two weeks, which can be either watery or bloody). The short-duration watery diarrhea may be due to cholera, although this is rare in the developed world. If blood is present, it is also known as dysentery. A number of non-infectious causes can result in diarrhea. These include lactose intolerance, irritable bowel syndrome, non-celiac gluten sensitivity, celiac disease, inflammatory bowel disease such as ulcerative colitis, hyperthyroidism, bile acid diarrhea, and a number of medications. In most cases, stool cultures to confirm the exact cause are not required. Diarrhea can be prevented by improved sanitation, clean drinking water, and hand washing with soap. Breastfeeding for at least six months and vaccination against rotavirus are also recommended. Oral rehydration solution (ORS), clean water with modest amounts of salts and sugar, is the treatment of choice. Zinc tablets are also recommended. These treatments have been estimated to have saved 50 million children in the past 25 years. When people have diarrhea it is recommended that they continue to eat healthy food, and that babies continue to be breastfed. If commercial ORS is not available, homemade solutions may be used. In those with severe dehydration, intravenous fluids may be required. Most cases, however, can be managed well with fluids by mouth. Antibiotics, while rarely used, may be recommended in a few cases, such as those who have bloody diarrhea and a high fever, those with severe diarrhea following travelling, and those who grow specific bacteria or parasites in their stool. Loperamide may help decrease the number of bowel movements but is not recommended in those with severe disease. About 1.7 to 5 billion cases of diarrhea occur per year. It is most common in developing countries, where young children get diarrhea on average three times a year. Total deaths from diarrhea are estimated at 1.53 million in 2019, down from 2.9 million in 1990. In 2012, it was the second most common cause of deaths in children younger than five (0.76 million or 11%). Frequent episodes of diarrhea are also a common cause of malnutrition and the most common cause in those younger than five years of age. Other long-term problems that can result include stunted growth and poor intellectual development. Terminology The word diarrhea is from the Ancient Greek διάρροια (diárrhoia), from διά ("through") and ῥέω ("flow"). Diarrhea is the spelling in American English, whereas diarrhoea is the spelling in British English. 
Slang terms for the condition include "the runs", "the squirts" (or "squits" in Britain) and "the trots". Definition Diarrhea is defined by the World Health Organization as having three or more loose or liquid stools per day, or as having more stools than is normal for that person. Acute diarrhea is defined by the World Gastroenterology Organization as an abnormally frequent discharge of semisolid or fluid fecal matter from the bowel, lasting less than 14 days. Acute diarrhea that is watery may be known as AWD (Acute Watery Diarrhoea). Secretory Secretory diarrhea means that there is an increase in active secretion, or an inhibition of absorption. There is little to no structural damage. The most common cause of this type of diarrhea is cholera toxin, which stimulates the secretion of anions, especially chloride ions (Cl–). Therefore, to maintain a charge balance in the gastrointestinal tract, sodium (Na+) is carried with it, along with water. In this type of diarrhea intestinal fluid secretion is isotonic with plasma even during fasting. It continues even when there is no oral food intake. Osmotic Osmotic diarrhea occurs when too much water is drawn into the bowels. If a person drinks solutions with excessive sugar or excessive salt, these can draw water from the body into the bowel and cause osmotic diarrhea. Osmotic diarrhea can also result from maldigestion (e.g., pancreatic disease or coeliac disease) in which the nutrients are left in the lumen to pull in water. It can also be caused by osmotic laxatives (which work to alleviate constipation by drawing water into the bowels). In healthy individuals, too much magnesium, vitamin C or undigested lactose can produce osmotic diarrhea and distention of the bowel. A person who has lactose intolerance can have difficulty absorbing lactose after an extraordinarily high intake of dairy products. In persons who have fructose malabsorption, excess fructose intake can also cause diarrhea. High-fructose foods that also have a high glucose content are more absorbable and less likely to cause diarrhea. Sugar alcohols such as sorbitol (often found in sugar-free foods) are difficult for the body to absorb and, in large amounts, may lead to osmotic diarrhea. In most of these cases, osmotic diarrhea stops when the offending agent (e.g., milk or sorbitol) is stopped. Exudative Exudative diarrhea occurs with the presence of blood and pus in the stool. This occurs with inflammatory bowel diseases, such as Crohn's disease or ulcerative colitis, and other severe infections such as those caused by E. coli or other forms of food poisoning. Inflammatory Inflammatory diarrhea occurs when there is damage to the mucosal lining or brush border, which leads to a passive loss of protein-rich fluids and a decreased ability to absorb these lost fluids. Features of all three of the other types of diarrhea can be found in this type of diarrhea. It can be caused by bacterial infections, viral infections, parasitic infections, or autoimmune problems such as inflammatory bowel diseases. It can also be caused by tuberculosis, colon cancer, and enteritis. Dysentery If there is blood visible in the stools, the condition is also known as dysentery. The blood is a sign that bowel tissue has been invaded. Dysentery is a symptom of infection by, among others, Shigella, Entamoeba histolytica, and Salmonella. Health effects Diarrheal disease may have a negative impact on both physical fitness and mental development. 
"Early childhood malnutrition resulting from any cause reduces physical fitness and work productivity in adults", and diarrhea is a primary cause of childhood malnutrition. Further, evidence suggests that diarrheal disease has significant impacts on mental development and health; it has been shown that, even when controlling for helminth infection and early breastfeeding, children who had experienced severe diarrhea had significantly lower scores on a series of tests of intelligence. Diarrhea can cause electrolyte imbalances, kidney impairment, dehydration, and defective immune system responses. When oral drugs are administered, the efficiency of the drug is to produce a therapeutic effect and the lack of this effect may be due to the medication travelling too quickly through the digestive system, limiting the time that it can be absorbed. Clinicians try to treat the diarrheas by reducing the dosage of medication, changing the dosing schedule, discontinuation of the drug, and rehydration. The interventions to control the diarrhea are not often effective. Diarrhea can have a profound effect on the quality of life because fecal incontinence is one of the leading factors for placing older adults in long term care facilities (nursing homes). Causes In the latter stages of human digestion, ingested materials are inundated with water and digestive fluids such as gastric acid, bile, and digestive enzymes in order to break them down into their nutrient components, which are then absorbed into the bloodstream via the intestinal tract in the small intestine. Prior to defecation, the large intestine reabsorbs the water and other digestive solvents in the waste product in order to maintain proper hydration and overall equilibrium. Diarrhea occurs when the large intestine is prevented, for any number of reasons, from sufficiently absorbing the water or other digestive fluids from fecal matter, resulting in a liquid, or "loose", bowel movement. Acute diarrhea is most commonly due to viral gastroenteritis with rotavirus, which accounts for 40% of cases in children under five. In travelers, however, bacterial infections predominate. Various toxins such as mushroom poisoning and drugs can also cause acute diarrhea. Chronic diarrhea can be the part of the presentations of a number of chronic medical conditions affecting the intestine. Common causes include ulcerative colitis, Crohn's disease, microscopic colitis, celiac disease, irritable bowel syndrome, and bile acid malabsorption. Infections There are many causes of infectious diarrhea, which include viruses, bacteria and parasites. Infectious diarrhea is frequently referred to as gastroenteritis. Norovirus is the most common cause of viral diarrhea in adults, but rotavirus is the most common cause in children under five years old. Adenovirus types 40 and 41, and astroviruses cause a significant number of infections. Shiga-toxin producing Escherichia coli, such as E coli o157:h7, are the most common cause of infectious bloody diarrhea in the United States. Campylobacter spp. are a common cause of bacterial diarrhea, but infections by Salmonella spp., Shigella spp. and some strains of Escherichia coli are also a frequent cause. In the elderly, particularly those who have been treated with antibiotics for unrelated infections, a toxin produced by Clostridioides difficile often causes severe diarrhea. 
Parasites, particularly protozoa (e.g., Cryptosporidium spp., Giardia spp., Entamoeba histolytica, Blastocystis spp., Cyclospora cayetanensis), are frequently the cause of diarrhea that involves chronic infection. The broad-spectrum antiparasitic agent nitazoxanide has shown efficacy against many diarrhea-causing parasites. Other infectious agents, such as parasites or bacterial toxins, may exacerbate symptoms. In sanitary living conditions where there is ample food and a supply of clean water, an otherwise healthy person usually recovers from viral infections in a few days. However, for ill or malnourished individuals, diarrhea can lead to severe dehydration and can become life-threatening. Sanitation Open defecation is a leading cause of infectious diarrhea leading to death. Poverty is a good indicator of the rate of infectious diarrhea in a population. This association does not stem from poverty itself, but rather from the conditions under which impoverished people live. The absence of certain resources compromises the ability of the poor to defend themselves against infectious diarrhea. "Poverty is associated with poor housing, crowding, dirt floors, lack of access to clean water or to sanitary disposal of fecal waste (sanitation), cohabitation with domestic animals that may carry human pathogens, and a lack of refrigerated storage for food, all of which increase the frequency of diarrhea... Poverty also restricts the ability to provide age-appropriate, nutritionally balanced diets or to modify diets when diarrhea develops so as to mitigate and repair nutrient losses. The impact is exacerbated by the lack of adequate, available, and affordable medical care." One of the most common causes of infectious diarrhea is a lack of clean water. Often, improper fecal disposal leads to contamination of groundwater. This can lead to widespread infection among a population, especially in the absence of water filtration or purification. Human feces contains a variety of potentially harmful human pathogens. Nutrition Proper nutrition is important for health and functioning, including the prevention of infectious diarrhea. It is especially important to young children who do not have a fully developed immune system. Zinc deficiency, a condition often found in children in developing countries, can, even in mild cases, have a significant impact on the development and proper functioning of the human immune system. Indeed, this relationship between zinc deficiency and reduced immune functioning corresponds with an increased severity of infectious diarrhea. Children who have lowered levels of zinc have a greater number of instances of diarrhea, severe diarrhea, and diarrhea associated with fever. Similarly, vitamin A deficiency can cause an increase in the severity of diarrheal episodes. However, there is some discrepancy when it comes to the impact of vitamin A deficiency on the rate of disease. While some argue that a relationship does not exist between the rate of disease and vitamin A status, others suggest an increase in the rate associated with deficiency. Given that estimates suggest 127 million preschool children worldwide are vitamin A deficient, this population has the potential for increased risk of disease contraction. Malabsorption Malabsorption is the inability to absorb food fully, mostly from disorders in the small bowel, but also due to maldigestion from diseases of the pancreas. Causes include: enzyme deficiencies or mucosal abnormality, as in food allergy and food intolerance, e.g. 
celiac disease (gluten intolerance), lactose intolerance (intolerance to milk sugar, common in non-Europeans) and fructose malabsorption; pernicious anemia, with impaired bowel function due to the inability to absorb vitamin B12; loss of pancreatic secretions, which may be due to cystic fibrosis or pancreatitis; structural defects, like short bowel syndrome (surgically removed bowel) and radiation fibrosis, which usually follows cancer treatment; and certain drugs, including agents used in chemotherapy and orlistat, which inhibits the absorption of fat. Inflammatory bowel disease The two overlapping types here are of unknown origin: ulcerative colitis, which is marked by chronic bloody diarrhea, with inflammation mostly affecting the distal colon near the rectum; and Crohn's disease, which typically affects fairly well-demarcated segments of bowel in the colon and often affects the end of the small bowel. Irritable bowel syndrome Another possible cause of diarrhea is irritable bowel syndrome (IBS), which usually presents with abdominal discomfort relieved by defecation and unusual stool (diarrhea or constipation) for at least three days a week over the previous three months. Symptoms of diarrhea-predominant IBS can be managed through a combination of dietary changes, soluble fiber supplements and medications such as loperamide or codeine. About 30% of patients with diarrhea-predominant IBS have bile acid malabsorption diagnosed with an abnormal SeHCAT test. Other diseases Diarrhea can be caused by other diseases and conditions, namely: chronic ethanol ingestion; hyperthyroidism; certain medications; bile acid malabsorption; ischemic bowel disease, which usually affects older people and can be due to blocked arteries; microscopic colitis, a type of inflammatory bowel disease where changes are seen only on histological examination of colonic biopsies; bile salt malabsorption (primary bile acid diarrhea), where excessive bile acids in the colon produce a secretory diarrhea; hormone-secreting tumors, since some hormones, e.g. serotonin, can cause diarrhea if secreted in excess (usually from a tumor); toddler's diarrhea, a chronic mild diarrhea that may occur in infants and toddlers with no obvious cause and with no other ill effects; environmental enteropathy; and radiation enteropathy following treatment for pelvic and abdominal cancers. Medications Over 700 medications, such as penicillin, are known to cause diarrhea. The classes of medications that are known to cause diarrhea are laxatives, antacids, heartburn medications, antibiotics, anti-neoplastic drugs and anti-inflammatories, as well as many dietary supplements. Pathophysiology Evolution According to two researchers, Nesse and Williams, diarrhea may function as an evolved expulsion defense mechanism. As a result, if it is stopped, there might be a delay in recovery. They cite in support of this argument research published in 1973 that found that treating shigellosis with the anti-diarrheal drug co-phenotrope (Lomotil) caused people to stay feverish twice as long as those not so treated. The researchers themselves observed: "Lomotil may be contraindicated in shigellosis. Diarrhea may represent a defense mechanism". Diagnostic approach The following types of diarrhea may indicate that further investigation is needed: diarrhea in infants; moderate or severe diarrhea in young children; diarrhea associated with blood; diarrhea that continues for more than two days; diarrhea with associated non-cramping abdominal pain, fever, weight loss, etc.; 
diarrhea in travelers; diarrhea in food handlers, because of the potential to infect others; and diarrhea in institutions such as hospitals, child care centers, or geriatric and convalescent homes. A severity score is used to aid diagnosis in children. When diarrhea lasts for more than four weeks, a number of further tests may be recommended, including: a complete blood count, and ferritin if anemia is present; thyroid stimulating hormone; tissue transglutaminase for celiac disease; fecal calprotectin to exclude inflammatory bowel disease; stool tests for ova and parasites as well as for Clostridioides difficile; a colonoscopy or fecal immunochemical testing for cancer, including biopsies to detect microscopic colitis; testing for bile acid diarrhea with SeHCAT, 7α-hydroxy-4-cholesten-3-one or fecal bile acids, depending on availability; a hydrogen breath test looking for lactose intolerance; and further tests if immunodeficiency, pelvic radiation disease or small intestinal bacterial overgrowth is suspected. A 2019 guideline recommended that testing for ova and parasites is only needed in people at high risk, though it recommended routine testing for Giardia. Erythrocyte sedimentation rate (ESR) and C-reactive protein (CRP) were not recommended. Epidemiology Worldwide in 2004, approximately 2.5 billion cases of diarrhea occurred, which resulted in 1.5 million deaths among children under the age of five. More than half of these were in Africa and South Asia. This is down from a death rate of 4.5 million in 1980 for gastroenteritis. Diarrhea remains the second leading cause of infant mortality (16%) after pneumonia (17%) in this age group. The majority of such cases occur in the developing world, with over half of the recorded cases of childhood diarrhea occurring in Africa and Asia, with 696 million and 1.2 billion cases, respectively, compared to only 480 million in the rest of the world. Infectious diarrhea resulted in about 0.7 million deaths in children under five years old in 2011 and 250 million lost school days. In the Americas, diarrheal disease accounts for a total of 10% of deaths among children aged 1–59 months, while in South East Asia it accounts for 31.3% of deaths. It is estimated that around 21% of child mortalities in developing countries are due to diarrheal disease. The World Health Organization has reported that "deaths due to diarrhoeal diseases have dropped by 45%, from sixth leading cause of death in 2000 to thirteenth in 2021." Even though diarrhea is best known in humans, it affects many other species, notably among primates. The cecal appendix, when present, appears to afford some protection against diarrhea to young primates. Prevention Sanitation Numerous studies have shown that improvements in drinking water and sanitation (WASH) lead to decreased risks of diarrhoea. Such improvements might include, for example, the use of water filters, and the provision of high-quality piped water and sewer connections. In institutions, communities, and households, interventions that promote hand washing with soap lead to significant reductions in the incidence of diarrhea. The same applies to preventing open defecation at a community-wide level and providing access to improved sanitation. This includes use of toilets and implementation of the entire sanitation chain connected to the toilets (collection, transport, disposal or reuse of human excreta). There is limited evidence that safe disposal of child or adult feces can prevent diarrheal disease. 
Hand washing Basic sanitation techniques can have a profound effect on the transmission of diarrheal disease. The implementation of hand washing using soap and water, for example, has been experimentally shown to reduce the incidence of disease by approximately 30–48%. Hand washing in developing countries, however, is compromised by poverty, as acknowledged by the CDC: "Handwashing is integral to disease prevention in all parts of the world; however, access to soap and water is limited in a number of less developed countries. This lack of access is one of many challenges to proper hygiene in less developed countries." Solutions to this barrier require the implementation of educational programs that encourage sanitary behaviours. Water Given that water contamination is a major means of transmitting diarrheal disease, efforts to provide clean water supply and improved sanitation have the potential to dramatically cut the rate of disease incidence. It has been proposed that improved water, sanitation and hygiene could yield an 88% reduction in child mortality resulting from diarrheal disease. Similarly, a meta-analysis of numerous studies on improving water supply and sanitation shows a 22–27% reduction in disease incidence, and a 21–30% reduction in mortality rate associated with diarrheal disease. Chlorine treatment of water, for example, has been shown to reduce both the risk of diarrheal disease and the risk of contamination of stored water with diarrheal pathogens. Vaccination Immunization against the pathogens that cause diarrheal disease is a viable prevention strategy; however, it requires targeting certain pathogens for vaccination. In the case of rotavirus, which was responsible for around 6% of diarrheal episodes and 20% of diarrheal disease deaths in the children of developing countries, use of a rotavirus vaccine in trials in 1985 yielded a slight (2–3%) decrease in total diarrheal disease incidence, while reducing overall mortality by 6–10%. Similarly, a cholera vaccine showed a strong reduction in morbidity and mortality, though the overall impact of vaccination was minimal, as cholera is not one of the major causative pathogens of diarrheal disease. Since this time, more effective vaccines have been developed that have the potential to save many thousands of lives in developing nations, while reducing the overall cost of treatment, and the costs to society. Rotavirus vaccine decreases the rates of diarrhea in a population. New vaccines against rotavirus, Shigella, Enterotoxigenic Escherichia coli (ETEC), and cholera are under development, as well as against other causes of infectious diarrhea. Nutrition Dietary deficiencies in developing countries can be combated by promoting better eating practices. Zinc supplementation proved successful, showing a significant decrease in the incidence of diarrheal disease compared to a control group. The majority of the literature suggests that vitamin A supplementation is advantageous in reducing disease incidence. Development of a supplementation strategy should take into consideration the fact that vitamin A supplementation was less effective in reducing diarrhea incidence when compared to combined vitamin A and zinc supplementation, and that the latter strategy was estimated to be significantly more cost-effective. Breastfeeding Breastfeeding practices have been shown to have a dramatic effect on the incidence of diarrheal disease in poor populations. 
Studies across a number of developing nations have shown that infants who receive exclusive breastfeeding during the first six months of life are better protected against infection with diarrheal diseases. One study in Brazil found that non-breastfed infants were 14 times more likely to die from diarrhea than exclusively breastfed infants. Exclusive breastfeeding is currently recommended for the first six months of an infant's life by the WHO, with continued breastfeeding until at least two years of age. Others Probiotics decrease the risk of diarrhea in those taking antibiotics. Insecticide spraying may reduce fly numbers and the risk of diarrhea in children in settings where there are seasonal variations in fly numbers throughout the year. Management In many cases of diarrhea, replacing lost fluid and salts is the only treatment needed. This is usually given by mouth (oral rehydration therapy) or, in severe cases, intravenously. Diet restrictions such as the BRAT diet are no longer recommended. Research does not support the limiting of milk to children, as doing so has no effect on the duration of diarrhea. To the contrary, the WHO recommends that children with diarrhea continue to eat, as sufficient nutrients are usually still absorbed to support continued growth and weight gain, and continuing to eat also speeds up recovery of normal intestinal functioning. The CDC recommends that children and adults with cholera also continue to eat. There is no evidence that early refeeding in children causes an increase in inappropriate use of intravenous fluid, episodes of vomiting, or the risk of persistent diarrhea. Medications such as loperamide (Imodium) and bismuth subsalicylate may be beneficial; however, they may be contraindicated in certain situations. Fluids Oral rehydration solution (ORS) (a slightly sweetened and salty water) can be used to prevent dehydration. Standard home solutions such as salted rice water, salted yogurt drinks, and vegetable and chicken soups with salt can be given. Home solutions such as water in which cereal has been cooked, unsalted soup, green coconut water, weak tea (unsweetened), and unsweetened fresh fruit juices can have from half a teaspoon to a full teaspoon of salt (from one-and-a-half to three grams) added per liter. Clean plain water can also be one of several fluids given. There are commercial solutions such as Pedialyte, and relief agencies such as UNICEF widely distribute packets of salts and sugar. A WHO publication for physicians recommends a homemade ORS consisting of one liter of water with one teaspoon of salt (3 grams) and two tablespoons of sugar (18 grams) added (approximately the "taste of tears"). The Rehydration Project recommends adding the same amount of sugar but only one-half teaspoon of salt, stating that this more dilute approach is less risky with very little loss of effectiveness. Both agree that drinks with too much sugar or salt can make dehydration worse. Appropriate amounts of supplemental zinc and potassium should be added if available, but a lack of these should not delay rehydration. As the WHO points out, the most important thing is to begin preventing dehydration as early as possible. In another example of prompt ORS use to prevent dehydration, the CDC recommends, for the treatment of cholera, continuing to give oral rehydration solution during travel to medical treatment. 
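As a rough arithmetic check on these recipes (an illustration added here, not taken from the article), the WHO quantities can be converted to molar concentrations, assuming the salt is pure NaCl (molar mass about 58.4 g/mol) and the sugar is sucrose (about 342.3 g/mol):

```latex
% Approximate concentrations of the WHO homemade ORS (per liter of water)
[\mathrm{Na^{+}}] \approx \frac{3\ \mathrm{g}}{58.4\ \mathrm{g/mol}} \approx 51\ \mathrm{mmol/L},
\qquad
[\text{sucrose}] \approx \frac{18\ \mathrm{g}}{342.3\ \mathrm{g/mol}} \approx 53\ \mathrm{mmol/L}.
```

These figures sit in the same general range as the WHO's commercial low-osmolarity ORS (75 mmol/L sodium and 75 mmol/L glucose), while the Rehydration Project's half-teaspoon variant roughly halves the sodium to about 26 mmol/L, consistent with its description as more dilute and less risky.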
Vomiting often occurs during the first hour or two of treatment with ORS, especially if a child drinks the solution too quickly, but this seldom prevents successful rehydration, since most of the fluid is still absorbed. The WHO recommends that if a child vomits, caregivers wait five or ten minutes and then start to give the solution again more slowly. Drinks especially high in simple sugars, such as soft drinks and fruit juices, are not recommended in children under five, as they may increase dehydration. A solution that is too rich in the gut draws water from the rest of the body, just as if the person were to drink sea water. Plain water may be used if more specific and effective ORT preparations are unavailable or are not palatable. Additionally, a mix of plain water and drinks that are perhaps too rich in sugar and salt can be given alternately to the same person, with the goal of providing a medium amount of sodium overall. A nasogastric tube can be used in young children to administer fluids if warranted. Eating The WHO recommends that a child with diarrhea continue to be fed. Continued feeding speeds the recovery of normal intestinal function. In contrast, children whose food is restricted have diarrhea of longer duration and recover intestinal function more slowly. The WHO states "Food should never be withheld and the child's usual foods should not be diluted. Breastfeeding should always be continued." In the specific example of cholera, the CDC makes the same recommendation. Breast-fed infants with diarrhea often choose to breastfeed more, and should be encouraged to do so. In young children who are not breast-fed and live in the developed world, a lactose-free diet may be useful to speed recovery. Eating food containing soluble fibre may help, but insoluble fibre might make it worse. Medications Antidiarrheal agents can be classified into four different groups: antimotility, antisecretory, adsorbent, and anti-infectious. While antibiotics are beneficial in certain types of acute diarrhea, they are usually not used except in specific situations. There are concerns that antibiotics may increase the risk of hemolytic uremic syndrome in people infected with Escherichia coli O157:H7. In resource-poor countries, treatment with antibiotics may be beneficial. However, some bacteria are developing antibiotic resistance, particularly Shigella. Antibiotics can also cause diarrhea, and antibiotic-associated diarrhea is the most common adverse effect of treatment with general antibiotics. While bismuth compounds (Pepto-Bismol) decreased the number of bowel movements in those with travelers' diarrhea, they do not decrease the length of illness. Anti-motility agents like loperamide are also effective at reducing the number of stools but not the duration of disease. These agents should be used only if bloody diarrhea is not present. Diosmectite, a natural aluminomagnesium silicate clay, is effective in alleviating symptoms of acute diarrhea in children, and also has some effects in chronic functional diarrhea, radiation-induced diarrhea, and chemotherapy-induced diarrhea. Another adsorbent agent used for the treatment of mild diarrhea is kaopectate. Racecadotril, an antisecretory medication, may be used to treat diarrhea in children and adults. It has better tolerability than loperamide, as it causes less constipation and flatulence. However, it has little benefit in improving acute diarrhea in children. Bile acid sequestrants such as cholestyramine can be effective in chronic diarrhea due to bile acid malabsorption. 
Therapeutic trials of these drugs are indicated in chronic diarrhea if bile acid malabsorption cannot be diagnosed with a specific test, such as SeHCAT retention. Alternative therapies Zinc supplementation may benefit children over six months old with diarrhea in areas with high rates of malnourishment or zinc deficiency. This supports the World Health Organization guidelines for zinc, but not in the very young. A Cochrane Review from 2020 concludes that probiotics make little or no difference to people who have diarrhea lasting two days or longer, and that there is no proof that they reduce its duration. The probiotic Lactobacillus can help prevent antibiotic-associated diarrhea in adults, but possibly not in children. For those with lactose intolerance, taking digestive enzymes containing lactase when consuming dairy products often improves symptoms.
Biology and health sciences
Non-infectious disease
null
53991
https://en.wikipedia.org/wiki/Adjoint%20functors
Adjoint functors
In mathematics, specifically category theory, adjunction is a relationship that two functors may exhibit, intuitively corresponding to a weak form of equivalence between two related categories. Two functors that stand in this relationship are known as adjoint functors, one being the left adjoint and the other the right adjoint. Pairs of adjoint functors are ubiquitous in mathematics and often arise from constructions of "optimal solutions" to certain problems (i.e., constructions of objects having a certain universal property), such as the construction of a free group on a set in algebra, or the construction of the Stone–Čech compactification of a topological space in topology. By definition, an adjunction between categories C and D is a pair of functors (assumed to be covariant) F : D → C and G : C → D and, for all objects X in C and Y in D, a bijection between the respective morphism sets homC(FY, X) ≅ homD(Y, GX) such that this family of bijections is natural in X and Y. Naturality here means that there are natural isomorphisms between the pair of functors homC(F–, X) : D → Set and homD(–, GX) : D → Set for a fixed X in C, and also the pair of functors homC(FY, –) : C → Set and homD(Y, G–) : C → Set for a fixed Y in D. The functor F is called a left adjoint functor or left adjoint to G, while G is called a right adjoint functor or right adjoint to F. We write F ⊣ G. An adjunction between categories C and D is somewhat akin to a "weak form" of an equivalence between C and D, and indeed every equivalence is an adjunction. In many situations, an adjunction can be "upgraded" to an equivalence, by a suitable natural modification of the involved categories and functors. Terminology and notation The terms adjoint and adjunct are both used, and are cognates: one is taken directly from Latin, the other from Latin via French. In the classic text Categories for the Working Mathematician, Mac Lane makes a distinction between the two. Given a family φ of hom-set bijections homC(FY, X) ≅ homD(Y, GX), we call φ an adjunction or an adjunction between F and G. If f is an arrow in homC(FY, X), φf is the right adjunct of f (p. 81). The functor F is left adjoint to G, and G is right adjoint to F. (Note that G may itself have a right adjoint that is quite different from F; see below for an example.) In general, the phrases "F is a left adjoint" and "F has a right adjoint" are equivalent. We call F a left adjoint because it is applied to the left argument of homC, and G a right adjoint because it is applied to the right argument of homD. If F is left adjoint to G, we also write F ⊣ G. The terminology comes from the Hilbert space idea of adjoint operators T, U with ⟨Ty, x⟩ = ⟨y, Ux⟩, which is formally similar to the above relation between hom-sets. The analogy to adjoint maps of Hilbert spaces can be made precise in certain contexts. Introduction and motivation Common mathematical constructions are very often adjoint functors. Consequently, general theorems about left/right adjoint functors encode the details of many useful and otherwise non-trivial results. Such general theorems include the equivalence of the various definitions of adjoint functors, the uniqueness of a right adjoint for a given left adjoint, the fact that left/right adjoint functors respectively preserve colimits/limits (which are also found in every area of mathematics), and the general adjoint functor theorems giving conditions under which a given functor is a left/right adjoint. Solutions to optimization problems In a sense, an adjoint functor is a way of giving the most efficient solution to some problem via a method that is formulaic. For example, an elementary problem in ring theory is how to turn a rng (which is like a ring that might not have a multiplicative identity) into a ring. 
The most efficient way is to adjoin an element '1' to the rng, adjoin all (and only) the elements that are necessary for satisfying the ring axioms (e.g. r+1 for each r in the ring), and impose no relations in the newly formed ring that are not forced by axioms. Moreover, this construction is formulaic in the sense that it works in essentially the same way for any rng. This is rather vague, though suggestive, and can be made precise in the language of category theory: a construction is most efficient if it satisfies a universal property, and is formulaic if it defines a functor. Universal properties come in two types: initial properties and terminal properties. Since these are dual notions, it is only necessary to discuss one of them. The idea of using an initial property is to set up the problem in terms of some auxiliary category E, so that the problem at hand corresponds to finding an initial object of E. This has the advantage that the optimization (the sense in which the process finds the most efficient solution) means something rigorous and recognisable, rather like the attainment of a supremum. The category E is also formulaic in this construction, since it is always the category of elements of the functor to which one is constructing an adjoint. Back to our example: take the given rng R, and make a category E whose objects are rng homomorphisms R → S, with S a ring having a multiplicative identity. The morphisms in E between R → S1 and R → S2 are commutative triangles of the form (R → S1, R → S2, S1 → S2) where S1 → S2 is a ring map (which preserves the identity). (Note that this is precisely the definition of the comma category of R over the inclusion of unitary rings into rngs.) The existence of a morphism between R → S1 and R → S2 implies that S1 is at least as efficient a solution as S2 to our problem: S2 can have more adjoined elements and/or more relations not imposed by axioms than S1. Therefore, the assertion that an object R → R* is initial in E, that is, that there is a morphism from it to any other element of E, means that the ring R* is a most efficient solution to our problem. The two facts that this method of turning rngs into rings is most efficient and formulaic can be expressed simultaneously by saying that it defines an adjoint functor. More explicitly: Let F denote the above process of adjoining an identity to a rng, so F(R) = R*. Let G denote the process of "forgetting" whether a ring S has an identity and considering it simply as a rng, so essentially G(S) = S. Then F is the left adjoint functor of G. Note however that we haven't actually constructed R* yet; it is an important and not altogether trivial algebraic fact that such a left adjoint functor R → R* actually exists. Symmetry of optimization problems It is also possible to start with the functor F, and pose the following (vague) question: is there a problem to which F is the most efficient solution? The notion that F is the most efficient solution to the problem posed by G is, in a certain rigorous sense, equivalent to the notion that G poses the most difficult problem that F solves. This gives the intuition behind the fact that adjoint functors occur in pairs: if F is left adjoint to G, then G is right adjoint to F. Formal definitions There are various equivalent definitions for adjoint functors: The definitions via universal morphisms are easy to state, and require minimal verifications when constructing an adjoint functor or proving two functors are adjoint. 
They are also the most analogous to our intuition involving optimizations. The definition via hom-sets makes symmetry the most apparent, and is the reason for using the word adjoint. The definition via counit–unit adjunction is convenient for proofs about functors that are known to be adjoint, because it provides formulas that can be directly manipulated. The equivalency of these definitions is quite useful. Adjoint functors arise everywhere, in all areas of mathematics. Since the structure in any of these definitions gives rise to the structures in the others, switching between them makes implicit use of many details that would otherwise have to be repeated separately in every subject area. Conventions The theory of adjoints has the terms left and right at its foundation, and there are many components that live in one of two categories C and D that are under consideration. Therefore it can be helpful to choose letters in alphabetical order according to whether they live in the "lefthand" category C or the "righthand" category D, and also to write them down in this order whenever possible. In this article for example, the letters X, F, f, ε will consistently denote things that live in the category C, the letters Y, G, g, η will consistently denote things that live in the category D, and whenever possible such things will be referred to in order from left to right (a functor F : D → C can be thought of as "living" where its outputs are, in C). If the arrows for the left adjoint functor F were drawn they would be pointing to the left; if the arrows for the right adjoint functor G were drawn they would be pointing to the right. Definition via universal morphisms By definition, a functor F : D → C is a left adjoint functor if for each object X in C there exists a universal morphism from F to X. Spelled out, this means that for each object X in C there exists an object G(X) in D and a morphism εX : F(G(X)) → X such that for every object Y in D and every morphism f : F(Y) → X there exists a unique morphism g : Y → G(X) with f = εX ∘ F(g). In this situation, one can show that G can be turned into a functor G : C → D in a unique way such that εX′ ∘ F(G(f)) = f ∘ εX for all morphisms f : X → X′ in C; F is then called a left adjoint to G. Similarly, we may define right-adjoint functors. A functor G : C → D is a right adjoint functor if for each object Y in D, there exists a universal morphism from Y to G. Spelled out, this means that for each object Y in D, there exists an object F(Y) in C and a morphism ηY : Y → G(F(Y)) such that for every object X in C and every morphism g : Y → G(X) there exists a unique morphism f : F(Y) → X with g = G(f) ∘ ηY. Again, this F can be uniquely turned into a functor F : D → C such that ηY′ ∘ g = G(F(g)) ∘ ηY for g : Y → Y′ a morphism in D; G is then called a right adjoint to F. It is true, as the terminology implies, that F is left adjoint to G if and only if G is right adjoint to F. These definitions via universal morphisms are often useful for establishing that a given functor is left or right adjoint, because they are minimalistic in their requirements. They are also intuitively meaningful in that finding a universal morphism is like solving an optimization problem. Definition via Hom-sets Using hom-sets, an adjunction between two categories C and D can be defined as consisting of two functors F : D → C and G : C → D and a natural isomorphism Φ : homC(F–, –) → homD(–, G–). This specifies a family of bijections ΦY,X : homC(FY, X) → homD(Y, GX) for all objects X in C and Y in D. In this situation, F is left adjoint to G and G is right adjoint to F. This definition is a logical compromise in that it is more difficult to establish its satisfaction than the universal morphism definitions, and has fewer immediate implications than the counit–unit definition. 
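To make the hom-set definition concrete, here is a minimal executable sketch in Haskell (an illustration added for this discussion, not part of the original text). It uses the classical currying adjunction: for a fixed type s, the functor sending a to (a, s) is left adjoint to the functor sending b to s -> b, and the family Φ of hom-set bijections is currying.

```haskell
-- Hom-set bijection: maps (a, s) -> b correspond to maps a -> (s -> b).
phi :: ((a, s) -> b) -> (a -> (s -> b))    -- one direction (currying)
phi f = \a s -> f (a, s)

phiInv :: (a -> (s -> b)) -> ((a, s) -> b) -- inverse direction (uncurrying)
phiInv g = \(a, s) -> g a s

-- The unit and counit fall out of the bijection by inserting identities:
unit :: a -> (s -> (a, s))                 -- eta     = phi id
unit = phi id

counit :: (s -> b, s) -> b                 -- epsilon = phiInv id
counit = phiInv id

main :: IO ()
main = do
  print (phi (uncurry (+)) 2 3)      -- 5: through the bijection
  print (phiInv (+) (2, 3))          -- 5: and back again
  print (counit (unit 'x', True))    -- ('x',True): epsilon after (F eta) is the identity
```

The last line shows the composite εF ∘ Fη acting as the identity on the pair ('x', True), foreshadowing the counit–unit formulation below.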
The hom-set definition is useful because of its obvious symmetry, and as a stepping-stone between the other definitions. In order to interpret Φ as a natural isomorphism, one must recognize homC(F–, –) and homD(–, G–) as functors. In fact, they are both bifunctors from Dop × C to Set (the category of sets). For details, see the article on hom functors. Spelled out, the naturality of Φ means that for all morphisms f : X → X′ in C and all morphisms g : Y′ → Y in D, the diagram formed by the bijections ΦY,X and ΦY′,X′ commutes; the vertical arrows in this diagram are those induced by composition. Formally, hom(Fg, f) : homC(FY, X) → homC(FY′, X′) is given by h ↦ f ∘ h ∘ Fg for each h in homC(FY, X); hom(g, Gf) is similar. Definition via counit–unit A third way of defining an adjunction between two categories C and D consists of two functors F : D → C and G : C → D and two natural transformations ε : FG → 1C and η : 1D → GF, respectively called the counit and the unit of the adjunction (terminology from universal algebra), such that the compositions εF ∘ Fη : F → FGF → F and Gε ∘ ηG : G → GFG → G are the identity transformations 1F and 1G on F and G respectively. In this situation we say that F is left adjoint to G and G is right adjoint to F, and may indicate this relationship by writing (ε, η) : F ⊣ G, or, simply, F ⊣ G. In equational form, the above conditions on (ε, η) are the counit–unit equations 1F = εF ∘ Fη and 1G = Gε ∘ ηG, which imply that for each X in C and each Y in D, 1FY = εFY ∘ F(ηY) and 1GX = G(εX) ∘ ηGX. Note that 1C denotes the identity functor on the category C, 1F denotes the identity natural transformation from the functor F to itself, and 1X denotes the identity morphism of the object X. These equations are useful in reducing proofs about adjoint functors to algebraic manipulations. They are sometimes called the triangle identities, or sometimes the zig-zag equations because of the appearance of the corresponding string diagrams. A way to remember them is to first write down the nonsensical equation 1 = ε ∘ η and then fill in either F or G in one of the two simple ways that make the compositions defined. Note: The use of the prefix "co" in counit here is not consistent with the terminology of limits and colimits, because a colimit satisfies an initial property whereas the counit morphisms satisfy terminal properties, and dually for limit versus unit. The term unit here is borrowed from the theory of monads, where it looks like the insertion of the identity into a monoid. History The idea of adjoint functors was introduced by Daniel Kan in 1958. Like many of the concepts in category theory, it was suggested by the needs of homological algebra, which was at the time devoted to computations. Those faced with giving tidy, systematic presentations of the subject would have noticed relations such as hom(F(X), Y) = hom(X, G(Y)) in the category of abelian groups, where F was the functor – ⊗ A (i.e. take the tensor product with A), and G was the functor hom(A,–) (this is now known as the tensor-hom adjunction). The use of the equals sign is an abuse of notation; those two groups are not really identical but there is a way of identifying them that is natural. It can be seen to be natural on the basis, firstly, that these are two alternative descriptions of the bilinear mappings from X × A to Y. That is, however, something particular to the case of tensor product. In category theory the 'naturality' of the bijection is subsumed in the concept of a natural isomorphism. Examples Free groups The construction of free groups is a common and illuminating example. Let F : Set → Grp be the functor assigning to each set Y the free group generated by the elements of Y, and let G : Grp → Set be the forgetful functor, which assigns to each group X its underlying set. Then F is left adjoint to G: Initial morphisms. For each set Y, the set GFY is just the underlying set of the free group FY generated by Y. 
History

The idea of adjoint functors was introduced by Daniel Kan in 1958. Like many of the concepts in category theory, it was suggested by the needs of homological algebra, which was at the time devoted to computations. Those faced with giving tidy, systematic presentations of the subject would have noticed relations such as

hom(F(X), Y) = hom(X, G(Y))

in the category of abelian groups, where F was the functor – ⊗ A (i.e. take the tensor product with A), and G was the functor hom(A, –) (this is now known as the tensor–hom adjunction). The use of the equals sign is an abuse of notation; those two groups are not really identical but there is a way of identifying them that is natural. It can be seen to be natural on the basis, firstly, that these are two alternative descriptions of the bilinear mappings from X × A to Y. That is, however, something particular to the case of tensor product. In category theory the "naturality" of the bijection is subsumed in the concept of a natural isomorphism.

Examples

Free groups

The construction of free groups is a common and illuminating example. Let F : Set → Grp be the functor assigning to each set Y the free group generated by the elements of Y, and let G : Grp → Set be the forgetful functor, which assigns to each group X its underlying set. Then F is left adjoint to G:

Initial morphisms. For each set Y, the set GFY is just the underlying set of the free group FY generated by Y. Let ηY : Y → GFY be the set map given by "inclusion of generators". This is an initial morphism from Y to G, because any set map from Y to the underlying set GW of some group W will factor through ηY via a unique group homomorphism from FY to W. This is precisely the universal property of the free group on Y.

Terminal morphisms. For each group X, the group FGX is the free group generated freely by GX, the elements of X. Let εX : FGX → X be the group homomorphism that sends the generators of FGX to the elements of X they correspond to, which exists by the universal property of free groups. Then each (GX, εX) is a terminal morphism from F to X, because any group homomorphism from a free group FZ to X will factor through εX via a unique set map from Z to GX. This means that (F, G) is an adjoint pair.

Hom-set adjunction. Group homomorphisms from the free group FY to a group X correspond precisely to maps from the set Y to the set GX: each homomorphism from FY to X is fully determined by its action on generators, another restatement of the universal property of free groups. One can verify directly that this correspondence is a natural transformation, which means it is a hom-set adjunction for the pair (F, G).

Counit–unit adjunction. One can also verify directly that ε and η are natural. Then, a direct verification that they form a counit–unit adjunction (ε, η) : F ⊣ G is as follows:

The first counit–unit equation 1F = εF ∘ Fη says that for each set Y the composition FY → FGFY → FY, given by F(ηY) followed by εFY, should be the identity. The intermediate group FGFY is the free group generated freely by the words of the free group FY. (Think of these words as placed in parentheses to indicate that they are independent generators.) The arrow F(ηY) is the group homomorphism from FY into FGFY sending each generator y of FY to the corresponding word of length one (y) as a generator of FGFY. The arrow εFY is the group homomorphism from FGFY to FY sending each generator to the word of FY it corresponds to (so this map is "dropping parentheses"). The composition of these maps is indeed the identity on FY.

The second counit–unit equation 1G = Gε ∘ ηG says that for each group X the composition GX → GFGX → GX, given by ηGX followed by G(εX), should be the identity. The intermediate set GFGX is just the underlying set of FGX. The arrow ηGX is the "inclusion of generators" set map from the set GX to the set GFGX. The arrow G(εX) is the set map from GFGX to GX which underlies the group homomorphism sending each generator of FGX to the element of X it corresponds to ("dropping parentheses"). The composition of these maps is indeed the identity on GX.

Free constructions and forgetful functors

Free objects are all examples of a left adjoint to a forgetful functor, which assigns to an algebraic object its underlying set. These algebraic free functors have generally the same description as in the detailed description of the free group situation above.
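One rung below free groups, the free monoid makes this pattern executable. In Haskell the free monoid on a type is the list type, and the universal property of the free construction is the function the standard library calls foldMap; the sketch below, with our own illustrative names eta and extend, spells this out:

```haskell
-- Free monoid ⊣ forgetful: [a] is the free monoid on the type a;
-- the forgetful functor is implicit in Haskell.

-- Unit η : Y → GF(Y): include generators as one-letter words.
eta :: a -> [a]
eta y = [y]

-- Universal property: any map from generators into a monoid w
-- extends uniquely to a monoid homomorphism on the free monoid.
extend :: Monoid w => (a -> w) -> ([a] -> w)
extend f = mconcat . map f   -- the standard library calls this foldMap

-- Factorization: extend f . eta == f, and extend f is the unique
-- monoid homomorphism with this property.
-- Example: extend show [1, 2, 3] == "123"
```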
Diagonal functors and limits

Products, fibred products, equalizers, and kernels are all examples of the categorical notion of a limit. Any limit functor is right adjoint to a corresponding diagonal functor (provided the category has the type of limits in question), and the counit of the adjunction provides the defining maps from the limit object (i.e. from the diagonal functor on the limit, in the functor category). Below are some specific examples.

Products. Let Π : Grp² → Grp be the functor that assigns to each pair (X1, X2) the product group X1×X2, and let Δ : Grp → Grp² be the diagonal functor that assigns to every group X the pair (X, X) in the product category Grp². The universal property of the product group shows that Π is right adjoint to Δ. The counit of this adjunction is the defining pair of projection maps from X1×X2 to X1 and X2 which define the limit, and the unit is the diagonal inclusion of a group X into X×X (mapping x to (x, x)).

The cartesian product of sets, the product of rings, the product of topological spaces etc. follow the same pattern; it can also be extended in a straightforward manner to more than just two factors. More generally, any type of limit is right adjoint to a diagonal functor.

Kernels. Consider the category D of homomorphisms of abelian groups. If f1 : A1 → B1 and f2 : A2 → B2 are two objects of D, then a morphism from f1 to f2 is a pair (gA, gB) of morphisms such that gB ∘ f1 = f2 ∘ gA. Let G : D → Ab be the functor which assigns to each homomorphism its kernel, and let F : Ab → D be the functor which maps the group A to the homomorphism A → 0. Then G is right adjoint to F, which expresses the universal property of kernels. The counit of this adjunction is the defining embedding of a homomorphism's kernel into the homomorphism's domain, and the unit is the morphism identifying a group A with the kernel of the homomorphism A → 0. A suitable variation of this example also shows that the kernel functors for vector spaces and for modules are right adjoints. Analogously, one can show that the cokernel functors for abelian groups, vector spaces and modules are left adjoints.

Colimits and diagonal functors

Coproducts, fibred coproducts, coequalizers, and cokernels are all examples of the categorical notion of a colimit. Any colimit functor is left adjoint to a corresponding diagonal functor (provided the category has the type of colimits in question), and the unit of the adjunction provides the defining maps into the colimit object. Below are some specific examples.

Coproducts. If F : Ab² → Ab assigns to every pair (X1, X2) of abelian groups their direct sum, and if G : Ab → Ab² is the functor which assigns to every abelian group Y the pair (Y, Y), then F is left adjoint to G, again a consequence of the universal property of direct sums. The unit of this adjoint pair is the defining pair of inclusion maps from X1 and X2 into the direct sum, and the counit is the additive map from the direct sum of (X, X) back to X (sending an element (a, b) of the direct sum to the element a+b of X). Analogous examples are given by the direct sum of vector spaces and modules, by the free product of groups and by the disjoint union of sets.

Further examples

Algebra

Adjoining an identity to a rng. This example was discussed in the motivation section above. Given a rng R, a multiplicative identity element can be added by taking R×Z and defining a Z-bilinear product with (r, 0)(0, 1) = (0, 1)(r, 0) = (r, 0), (r, 0)(s, 0) = (rs, 0), (0, 1)(0, 1) = (0, 1). This constructs a left adjoint to the functor taking a ring to the underlying rng.

Adjoining an identity to a semigroup. Similarly, given a semigroup S, we can add an identity element and obtain a monoid by taking the disjoint union S ⊔ {1} and defining a binary operation on it such that it extends the operation on S and 1 is an identity element. This construction gives a functor that is a left adjoint to the functor taking a monoid to the underlying semigroup.
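For the semigroup example just given, Haskell makes the construction concrete: adjoining an identity is adding one new element, which Maybe provides. The sketch below is ours (similar constructions exist in Haskell's semigroup libraries); WithUnit and retract are illustrative names:

```haskell
-- Adjoin an identity to a semigroup s; Nothing plays the role of
-- the new identity element 1.
newtype WithUnit s = WithUnit (Maybe s)

instance Semigroup s => Semigroup (WithUnit s) where
  WithUnit Nothing  <> y                 = y
  x                 <> WithUnit Nothing  = x
  WithUnit (Just a) <> WithUnit (Just b) = WithUnit (Just (a <> b))

instance Semigroup s => Monoid (WithUnit s) where
  mempty = WithUnit Nothing    -- the freely adjoined identity

-- The adjunction: a semigroup homomorphism from s to (the
-- underlying semigroup of) a monoid m extends uniquely to a
-- monoid homomorphism from WithUnit s, sending the new 1 to mempty.
retract :: Monoid m => (s -> m) -> (WithUnit s -> m)
retract _ (WithUnit Nothing)  = mempty
retract f (WithUnit (Just a)) = f a
```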
Ring extensions. Suppose R and S are rings, and ρ : R → S is a ring homomorphism. Then S can be seen as a (left) R-module, and the tensor product with S yields a functor F : R-Mod → S-Mod. Then F is left adjoint to the forgetful functor G : S-Mod → R-Mod.

Tensor products. If R is a ring and M is a right R-module, then the tensor product with M yields a functor F = M ⊗R – : R-Mod → Ab. The functor G : Ab → R-Mod, defined by G(A) = homZ(M, A) for every abelian group A, is a right adjoint to F.

From monoids and groups to rings. The integral monoid ring construction gives a functor from monoids to rings. This functor is left adjoint to the functor that associates to a given ring its underlying multiplicative monoid. Similarly, the integral group ring construction yields a functor from groups to rings, left adjoint to the functor that assigns to a given ring its group of units. One can also start with a field K and consider the category of K-algebras instead of the category of rings, to get the monoid and group rings over K.

Field of fractions. Consider the category Domm of integral domains with injective morphisms. The forgetful functor Field → Domm from fields has a left adjoint—it assigns to every integral domain its field of fractions.

Polynomial rings. Let Ring* be the category of pointed commutative rings with unity (pairs (A, a) where A is a ring, a ∈ A, and morphisms preserve the distinguished elements). The forgetful functor G : Ring* → Ring has a left adjoint—it assigns to every ring R the pair (R[x], x), where R[x] is the polynomial ring with coefficients from R.

Abelianization. Consider the inclusion functor G : Ab → Grp from the category of abelian groups to the category of groups. It has a left adjoint called abelianization which assigns to every group G the quotient group Gab = G/[G, G].

The Grothendieck group. In K-theory, the point of departure is to observe that the category of vector bundles on a topological space has a commutative monoid structure under direct sum. One may make an abelian group out of this monoid, the Grothendieck group, by formally adding an additive inverse for each bundle (or equivalence class). Alternatively one can observe that the functor that for each group takes the underlying monoid (ignoring inverses) has a left adjoint. This is a once-for-all construction, in line with the third section discussion above. That is, one can imitate the construction of negative numbers; but there is the other option of an existence theorem. For the case of finitary algebraic structures, the existence by itself can be referred to universal algebra, or model theory; naturally there is also a proof adapted to category theory.
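The "construction of negative numbers" can likewise be written out. The following Haskell sketch (our own names throughout) performs the group completion of a commutative monoid by formal differences; for non-cancellative monoids, equality of differences requires an extra witness element, which the sketch glosses over:

```haskell
import Data.Monoid (Sum(..))
import Numeric.Natural (Natural)

-- Group completion: a pair (a, b) is read as the formal
-- difference "a - b" of elements of the commutative monoid m.
data Diff m = Diff m m

addD :: Monoid m => Diff m -> Diff m -> Diff m
addD (Diff a b) (Diff c d) = Diff (a <> c) (b <> d)

negD :: Diff m -> Diff m        -- the inverses the monoid lacked:
negD (Diff a b) = Diff b a      -- -(a - b) = b - a

-- Unit of the adjunction: embed the monoid into its completion.
include :: Monoid m => m -> Diff m
include a = Diff a mempty

-- Completing the additive naturals yields the integers:
toInt :: Diff (Sum Natural) -> Integer
toInt (Diff (Sum a) (Sum b)) = toInteger a - toInteger b
```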
Frobenius reciprocity in the representation theory of groups: see induced representation. This example foreshadowed the general theory by about half a century.

Topology

A functor with a left and a right adjoint. Let G be the functor from topological spaces to sets that associates to every topological space its underlying set (forgetting the topology, that is). G has a left adjoint F, creating the discrete space on a set Y, and a right adjoint H creating the trivial topology on Y.

Suspensions and loop spaces. Given topological spaces X and Y, the space [SX, Y] of homotopy classes of maps from the suspension SX of X to Y is naturally isomorphic to the space [X, ΩY] of homotopy classes of maps from X to the loop space ΩY of Y. The suspension functor is therefore left adjoint to the loop space functor in the homotopy category, an important fact in homotopy theory.

Stone–Čech compactification. Let KHaus be the category of compact Hausdorff spaces and G : KHaus → Top be the inclusion functor to the category of topological spaces. Then G has a left adjoint F : Top → KHaus, the Stone–Čech compactification. The unit of this adjoint pair yields a continuous map from every topological space X into its Stone–Čech compactification.

Direct and inverse images of sheaves. Every continuous map f : X → Y between topological spaces induces a functor f∗ from the category of sheaves (of sets, or abelian groups, or rings...) on X to the corresponding category of sheaves on Y, the direct image functor. It also induces a functor f−1 from the category of sheaves of abelian groups on Y to the category of sheaves of abelian groups on X, the inverse image functor. f−1 is left adjoint to f∗. Here a more subtle point is that the left adjoint for coherent sheaves will differ from that for sheaves (of sets).

Soberification. The article on Stone duality describes an adjunction between the category of topological spaces and the category of sober spaces that is known as soberification. Notably, the article also contains a detailed description of another adjunction that prepares the way for the famous duality of sober spaces and spatial locales, exploited in pointless topology.

Posets

Every partially ordered set can be viewed as a category (where the elements of the poset become the category's objects and we have a single morphism from x to y if and only if x ≤ y). A pair of adjoint functors between two partially ordered sets is called a Galois connection (or, if it is contravariant, an antitone Galois connection). See that article for a number of examples: the case of Galois theory of course is a leading one. Any Galois connection gives rise to closure operators and to inverse order-preserving bijections between the corresponding closed elements.

As is the case for Galois groups, the real interest often lies in refining a correspondence to a duality (i.e. an antitone order isomorphism). A treatment of Galois theory along these lines by Kaplansky was influential in the recognition of the general structure here.

The partial order case collapses the adjunction definitions quite noticeably, but can provide several themes:
- adjunctions may not be dualities or isomorphisms, but are candidates for upgrading to that status;
- closure operators may indicate the presence of adjunctions, as corresponding monads (cf. the Kuratowski closure axioms);
- a very general comment of William Lawvere is that syntax and semantics are adjoint: take C to be the set of all logical theories (axiomatizations), and D the power set of the set of all mathematical structures. For a theory T in C, let G(T) be the set of all structures that satisfy the axioms T; for a set of mathematical structures S, let F(S) be the minimal axiomatization of S. We can then say that S is a subset of G(T) if and only if F(S) logically implies T: the "semantics functor" G is right adjoint to the "syntax functor" F;
- division is (in general) the attempt to invert multiplication, but in situations where this is not possible, we often attempt to construct an adjoint instead: the ideal quotient is adjoint to multiplication by ring ideals, and the implication in propositional logic is adjoint to logical conjunction.
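The division theme at the end of this list can be checked by direct computation. As a hedged illustration (the function names are ours): for a fixed positive integer k, multiplication by k and floor division by k form a Galois connection on the integers ordered by ≤, which is precisely an adjunction between posets:

```haskell
-- Galois connection: k*x <= y  iff  x <= y `div` k   (for k > 0).
timesK, divK :: Integer -> Integer -> Integer
timesK k x = k * x        -- the lower (left) adjoint
divK   k y = y `div` k    -- the upper (right) adjoint: floor division

-- The adjunction property as a testable predicate:
galois :: Integer -> Integer -> Integer -> Bool
galois k x y = (timesK k x <= y) == (x <= divK k y)

-- e.g.  and [ galois 3 x y | x <- [-20..20], y <- [-20..20] ]
-- evaluates to True.
```

Note that the right adjoint is floor division rather than exact division; that mismatch is exactly the "attempt to invert multiplication" described above.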
Category theory

Equivalences. If F : D → C is an equivalence of categories, then we have an inverse equivalence G : C → D, and the two functors F and G form an adjoint pair. The unit and counit are natural isomorphisms in this case.

A series of adjunctions. The functor π0 which assigns to a category its set of connected components is left adjoint to the functor D which assigns to a set the discrete category on that set. Moreover, D is left adjoint to the object functor U which assigns to each category its set of objects, and finally U is left adjoint to A which assigns to each set the indiscrete category on that set.

Exponential object. In a cartesian closed category the endofunctor C → C given by –×A has a right adjoint (–)^A. This pair is often referred to as currying and uncurrying; in many special cases, they are also continuous and form a homeomorphism.

Categorical logic

Quantification. If φY is a unary predicate expressing some property, then a sufficiently strong set theory may prove the existence of the set Y = {y | φY(y)} of terms that fulfill the property. A proper subset T ⊂ Y and the associated injection of T into Y is characterized by a predicate φT expressing a strictly more restrictive property.

The role of quantifiers in predicate logics is in forming propositions and also in expressing sophisticated predicates by closing formulas with possibly more variables. For example, consider a predicate ψ with two open variables of sort X and Y. Using a quantifier to close X, we can form the set {y ∈ Y | ∃x. ψ(x, y) ∧ φS(x)} of all elements y of Y for which there is an x to which it is ψ-related, and which itself is characterized by the property φS. Set theoretic operations like the intersection of two sets directly correspond to the conjunction of predicates. In categorical logic, a subfield of topos theory, quantifiers are identified with adjoints to the pullback functor. Such a realization can be seen in analogy to the discussion of propositional logic using set theory, but the general definition makes for a richer range of logics.

So consider an object Y in a category with pullbacks. Any morphism f : X → Y induces a functor f* : Sub(Y) → Sub(X) on the category that is the preorder of subobjects. It maps subobjects T of Y (technically: monomorphism classes of T → Y) to the pullback X ×Y T. If this functor has a left or right adjoint, they are called ∃f and ∀f, respectively. They both map from Sub(X) back to Sub(Y). Very roughly, given a domain S ⊆ X over which to quantify a relation expressed via f, the functor/quantifier closes X in X ×Y T and returns the thereby specified subset of Y.

Example: In Set, the category of sets and functions, the canonical subobjects are the subsets (or rather their canonical injections). The pullback f*T = X ×Y T of an injection of a subset T into Y along f is characterized as the largest set which knows all about f and the injection of T into Y. It therefore turns out to be (in bijection with) the inverse image f−1[T] ⊆ X.

For S ⊆ X, let us figure out the left adjoint, which is defined via hom(∃f S, T) ≅ hom(S, f*T), which here just means ∃f S ⊆ T ⇔ S ⊆ f−1[T]. Consider f[S] ⊆ T. We see S ⊆ f−1[f[S]] ⊆ f−1[T]. Conversely, if for an x ∈ S we also have x ∈ f−1[T], then clearly f(x) ∈ T. So S ⊆ f−1[T] implies f[S] ⊆ T. We conclude that the left adjoint to the inverse image functor is given by the direct image. Here is a characterization of this result, which matches more the logical interpretation: the image of S under ∃f is the full set of y's such that f−1[{y}] ∩ S is non-empty. This works because it neglects exactly those y ∈ Y which are in the complement of f[S]. So ∃f S = {y ∈ Y | ∃x ∈ f−1[{y}]. x ∈ S} = f[S]. Put this in analogy to our motivation {y ∈ Y | ∃x. ψ(x, y) ∧ φS(x)}.

The right adjoint to the inverse image functor is given (without doing the computation here) by ∀f S = {y ∈ Y | ∀x ∈ f−1[{y}]. x ∈ S}. The subset ∀f S of Y is characterized as the full set of y's with the property that the inverse image of {y} with respect to f is fully contained within S. Note how the predicate determining the set is the same as above, except that ∃ is replaced by ∀.
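The Set example above can be played out in code. The following Haskell sketch (our own names; subsets are modelled as Data.Set values, so element types need Ord) implements the three functors in the chain ∃f ⊣ f⁻¹ ⊣ ∀f and states the two adjunctions as comment equations:

```haskell
import qualified Data.Set as Set
import Data.Set (Set)

-- Direct image ∃f: all y hit by some x in s.
exists :: (Ord a, Ord b) => (a -> b) -> Set a -> Set b
exists f s = Set.map f s

-- Inverse image f⁻¹[t], relative to an ambient domain dom.
preimage :: (Ord a, Ord b) => (a -> b) -> Set a -> Set b -> Set a
preimage f dom t = Set.filter (\x -> f x `Set.member` t) dom

-- Universal image ∀f: all y (in cod) whose whole fibre lies in s.
forAll :: (Ord a, Ord b)
       => (a -> b) -> Set a -> Set b -> Set a -> Set b
forAll f dom cod s =
  Set.filter
    (\y -> preimage f dom (Set.singleton y) `Set.isSubsetOf` s)
    cod

-- Adjunctions, for subsets s of dom and t of cod:
--   exists f s `isSubsetOf` t        iff  s `isSubsetOf` preimage f dom t
--   preimage f dom t `isSubsetOf` s  iff  t `isSubsetOf` forAll f dom cod s
```

The first comment equation is the computation carried out above; the second is the stated characterization of the right adjoint.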