63837
https://en.wikipedia.org/wiki/Diplura
Diplura
The order Diplura ("two-pronged bristletails") is one of three orders of non-insect hexapods within the class Entognatha (alongside Collembola (springtails) and Protura). The name "diplura", or "two tails", refers to the characteristic pair of caudal appendages or filaments at the terminal end of the body. Around 800 species of diplurans have been described.

Anatomy

Diplurans are typically small, with most falling between 2 and 5 mm in length; however, some species of Japyx may reach 50 mm. They have no eyes and, apart from the darkened cerci in some species, they are unpigmented. Diplurans have long antennae with 10 or more bead-like segments projecting forward from the head. The abdomens of diplurans bear eversible vesicles, which seem to absorb moisture from the environment and help with the animal's water balance. The body segments themselves may display several types of setae, or scales and setae.

Diplurans possess a characteristic pair of cerci projecting backwards from the last of the 11 abdominal somites. These cerci may be long and filamentous or short and pincer-like, leading to occasional confusion with earwigs. Some diplurans have the ability to shed their cerci if necessary (autotomy). Moulting occurs up to 30 times throughout the life of a dipluran, which is estimated to last up to one year. As in other entognaths, the mouthparts are concealed within a small pouch formed by the lateral margins of the head capsule. The mandibles usually have several apical teeth. Diplurans do not possess eyes or wings. In males, glandular setae or disculi may be visible along the first abdominal sternite. External genital organs are present on the eighth abdominal segment.

Ecology

Diplurans are common in moist soil, leaf litter or humus, but are rarely seen because of their small size and subterranean lifestyle. They have biting mouthparts and feed on a variety of live prey and dead organic matter. Those species with long cerci are herbivorous. Diplurans are found on nearly all land masses, except Antarctica and several oceanic islands. As soil-dwelling organisms, they may play a key role in indicating soil quality and in measuring anthropogenic impact (e.g. soil nutrient depletion as a result of farming).

Reproduction

Like other non-insect hexapods, diplurans practice external fertilisation. Males lay up to 200 spermatophores a week, which are held off the ground by short stalks and probably remain viable for only about two days. The female collects the spermatophore with her genital opening, and later lays eggs in a cavity in the ground. The hatchlings (or nymphs) do not undergo metamorphosis, but resemble the adults apart from their smaller size, smaller number of setae and lack of reproductive organs.

Lineages

Several major lineages within Diplura are readily recognizable by the structure of their cerci.

Japygidae: possess forceps-like cerci (resembling those of an earwig). They are usually very aggressive predators, using their pincer-like cerci to capture prey, including springtails, isopods, small myriapods, insect larvae, and even other diplurans.

Projapygidae: possess stout, short, and rigid cerci.

Campodeidae: possess elongate, flexible cerci that may be as long as the antennae and have many segments. They feed on soil fungi, mites, springtails, and other small soil invertebrates, as well as detritus.

Relatives

The relationships among the four groups of hexapods are not resolved, but most recent studies argue against a monophyletic Entognatha.
The fossil record of the Diplura is sparse, but one apparent dipluran dates from the Carboniferous. This early dipluran, Testajapyx, had compound eyes, and mouthparts that more closely resembled those of true insects.
Biology and health sciences
Non-insect hexapods
Animals
63844
https://en.wikipedia.org/wiki/Protura
Protura
The Protura, or proturans, sometimes nicknamed coneheads, are very small (0.6–1.5 mm long), soil-dwelling animals, so inconspicuous that they were not noticed until the 20th century. The Protura constitute an order of hexapods that were previously regarded as insects, and are sometimes treated as a class in their own right. Some evidence indicates the Protura are basal to all other hexapods, although not all researchers consider them Hexapoda, rendering the monophyly of Hexapoda unsettled. Uniquely among hexapods, proturans show anamorphic development, whereby body segments are added during moults. There are close to 800 species, described in seven families. Nearly 300 species are contained in a single genus, Eosentomon.

Morphology

Proturans have no eyes, wings, or antennae, and, lacking pigmentation, are usually whitish or pale brown. The sensory function of the absent antennae is fulfilled by the first of the three pairs of five-segmented legs, which are held up, pointing forward, and bear numerous tarsal sensilla and sensory hairs. The animals walk using the four rear legs. The head is conical, and bears two pseudoculi of unknown function. The body is elongated and cylindrical, with a post-anal telson at the end. The mouthparts are entognathous (enclosed within the head capsule) and consist of narrow mandibles and maxillae. There are no cerci at the end of the abdomen, which gives the group its name, from the Greek proto- (meaning "first", in this case implying primitive) and ura, meaning "tail". The first three abdominal segments bear short limb-like appendages, called "styli". The first pair of styli is two-segmented, while the second and third pairs are either two-segmented or unsegmented. The genitalia are internal, and the genital opening lies between the eleventh segment and the telson of the adult. During mating, the genitalia of both sexes are everted from an abdominal chamber. Only the two families Eosentomidae and Sinentomidae possess a simple tracheal system, with a pair of spiracles on both the mesothorax and the metathorax; proturans in the remaining families lack these structures and perform gas exchange by diffusion.

Ecology

Proturans live chiefly in soil, mosses, and leaf litter of moist temperate forests that are not too acidic. They have also been found beneath rocks or under the bark of trees, as well as in animal burrows. They are generally restricted to the uppermost centimetres of soil, but have been found considerably deeper. Although they are sometimes regarded as uncommon, proturans are most likely overlooked because of their small size, as densities of over 90,000 individuals per square metre have been recorded. The diet of proturans has not yet been sufficiently observed to be characterised. In laboratory culture, they may be fed mycorrhizal fungi, dead mites and pulverized, dried mushrooms; they are believed to feed on decaying vegetable matter and fungi in the wild. The styliform mouthparts suggest the Protura may be fluid feeders, based on evidence that some species suck out the liquid contents of fungal hyphae. Proturan species that spend their lives near the soil surface generally produce one new generation of offspring each year and possess longer legs. Species living at deeper soil levels have shorter legs and tend to reproduce less seasonally. Some migratory proturan species move to deeper soil layers for the winter and ascend to shallower soil layers for the summer.
Proturans play a role in soil formation and composition by speeding decomposition, helping in the breakdown of leaf litter, and recycling nutrients into the soil.

Development

The nymph has 8 abdominal segments plus the telson; the number of abdominal segments increases through moulting until the full adult complement of 11 abdominal segments is achieved. Further moults may occur but do not add additional body segments; it is still not known whether the adults continue to moult throughout their lives. Eggs have been observed in only a few species. In most proturan families, five developmental stages follow the egg stage: the prenymph hatches from the egg and has only weakly developed mouthparts and 8 abdominal segments; nymph I follows and has fully developed mouthparts; nymph II has 9 abdominal segments; the "maturus junior" has 11 abdominal segments and moults into the sexually mature adult. Male individuals of the family Acerentomidae differ from this five-stage scheme, having an additional developmental stage, the preimago, which has partially developed genitalia and appears between the "maturus junior" and the adult stage.

History

Proturans were first discovered in the early 20th century, when Filippo Silvestri and Antonio Berlese independently described the animals. The first species to be described was Acerentomon doderoi, published in 1907 by Silvestri, based on material found near Syracuse, New York.
Biology and health sciences
Non-insect hexapods
Animals
63847
https://en.wikipedia.org/wiki/Formaldehyde
Formaldehyde
Formaldehyde (systematic name methanal) is an organic compound with the chemical formula CH2O and structure H−CHO, more precisely H2C=O. The compound is a pungent, colourless gas that polymerises spontaneously into paraformaldehyde. It is stored as aqueous solutions (formalin), which consist mainly of the hydrate CH2(OH)2. It is the simplest of the aldehydes (R−CHO). As a precursor to many other materials and chemical compounds, formaldehyde had an estimated global production of 12 million tons per year in 2006. It is mainly used in the production of industrial resins, e.g., for particle board and coatings. Small amounts also occur naturally. Formaldehyde is classified as a carcinogen and can cause respiratory and skin irritation upon exposure.

Forms

Formaldehyde is more complicated than many simple carbon compounds in that it adopts several diverse forms. These compounds can often be used interchangeably and can be interconverted.

Molecular formaldehyde: a colorless gas with a characteristic pungent, irritating odor. It is stable at about 150 °C, but it polymerizes when condensed to a liquid.

1,3,5-Trioxane, with the formula (CH2O)3: a white solid that dissolves without degradation in organic solvents. It is a trimer of molecular formaldehyde.

Paraformaldehyde, with the formula HO(CH2O)nH: a white solid that is insoluble in most solvents.

Methanediol, with the formula CH2(OH)2: this compound also exists in equilibrium with various oligomers (short polymers), depending on the concentration and temperature.

A saturated water solution, of about 40% formaldehyde by volume or 37% by mass, is called "100% formalin". A small amount of stabilizer, such as methanol, is usually added to suppress oxidation and polymerization. A typical commercial-grade formalin may contain 10–12% methanol in addition to various metallic impurities. "Formaldehyde" was first used as a generic trademark in 1893, following a previous trade name, "formalin".

Structure and bonding

Molecular formaldehyde contains a central carbon atom with a double bond to the oxygen atom and a single bond to each hydrogen atom. This structure is summarised by the condensed formula H2C=O. The molecule is planar and Y-shaped, and its molecular symmetry belongs to the C2v point group. The precise molecular geometry of gaseous formaldehyde has been determined by gas electron diffraction and microwave spectroscopy. The bond lengths are 1.21 Å for the carbon–oxygen bond and around 1.11 Å for the carbon–hydrogen bond, while the H–C–H bond angle is 117°, close to the 120° angle found in an ideal trigonal planar molecule. Some excited electronic states of formaldehyde are pyramidal, rather than planar as in the ground state.

Occurrence

Processes in the upper atmosphere contribute more than 80% of the total formaldehyde in the environment. Formaldehyde is an intermediate in the oxidation (or combustion) of methane, as well as of other carbon compounds, e.g. in forest fires, automobile exhaust, and tobacco smoke. When produced in the atmosphere by the action of sunlight and oxygen on atmospheric methane and other hydrocarbons, it becomes part of smog. Formaldehyde has also been detected in outer space. Formaldehyde and its adducts are ubiquitous in nature. Food may contain formaldehyde at levels of 1–100 mg/kg. Formaldehyde, formed in the metabolism of the amino acids serine and threonine, is found in the bloodstream of humans and other primates at concentrations of approximately 50 micromolar.
Experiments in which animals are exposed to an atmosphere containing isotopically labeled formaldehyde have demonstrated that, even in deliberately exposed animals, the majority of formaldehyde-DNA adducts found in non-respiratory tissues are derived from endogenously produced formaldehyde. Formaldehyde does not accumulate in the environment, because it is broken down within a few hours by sunlight or by bacteria present in soil or water. Humans metabolize formaldehyde quickly, converting it to formic acid. It nonetheless presents significant health concerns as a contaminant.

Interstellar formaldehyde

Formaldehyde appears to be a useful probe in astrochemistry due to the prominence of the 1(10)←1(11) and 2(11)←2(12) K-doublet transitions. It was the first polyatomic organic molecule detected in the interstellar medium. Since its initial detection in 1969, it has been observed in many regions of the galaxy. Because of the widespread interest in interstellar formaldehyde, it has been extensively studied, yielding new extragalactic sources. A proposed mechanism for its formation is the hydrogenation of CO ice:

H + CO → HCO
HCO + H → CH2O

HCN, HNC, H2CO, and dust have also been observed inside the comae of comets C/2012 F6 (Lemmon) and C/2012 S1 (ISON).

Synthesis and industrial production

Laboratory synthesis

Formaldehyde was discovered in 1859 by the Russian chemist Aleksandr Butlerov (1828–1886) when he attempted to synthesize methanediol ("methylene glycol") from iodomethane and silver oxalate. In his paper, Butlerov referred to formaldehyde as "dioxymethylen" (methylene dioxide) because his empirical formula for it was incorrect, as atomic weights were not precisely determined until the Karlsruhe Congress. The compound was identified as an aldehyde by August Wilhelm von Hofmann, who first announced its production by passing methanol vapor in air over hot platinum wire. With modifications, Hofmann's method remains the basis of the present-day industrial route. Solution routes to formaldehyde also entail oxidation of methanol or iodomethane.

Industry

Formaldehyde is produced industrially by the catalytic oxidation of methanol. The most common catalysts are silver metal, iron(III) oxide, iron molybdenum oxides (e.g. iron(III) molybdate) with a molybdenum-enriched surface, or vanadium oxides. In the commonly used formox process, methanol and oxygen react at about 250–400 °C in the presence of iron oxide in combination with molybdenum and/or vanadium to produce formaldehyde according to the chemical equation:

2 CH3OH + O2 → 2 CH2O + 2 H2O

(An ideal mass-yield calculation based on this equation is sketched at the end of this passage.) The silver-based catalyst usually operates at a higher temperature, about 650 °C. Two chemical reactions on it simultaneously produce formaldehyde: the one shown above and the dehydrogenation reaction:

CH3OH → CH2O + H2

In principle, formaldehyde could be generated by oxidation of methane, but this route is not industrially viable because methanol is more easily oxidized than methane.

Biochemistry

Formaldehyde is produced via several enzyme-catalyzed routes. Living beings, including humans, produce formaldehyde as part of their metabolism. Formaldehyde is key to several bodily functions (e.g. epigenetics), but its amount must also be tightly controlled to avoid self-poisoning. Serine hydroxymethyltransferase can decompose serine into formaldehyde and glycine, according to this reaction:

HOCH2CH(NH2)CO2H → CH2O + H2C(NH2)CO2H
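To make the formox stoichiometry above concrete (one mole of formaldehyde per mole of methanol), here is a minimal back-of-the-envelope sketch of the ideal mass yield; the complete-conversion assumption and the function name are illustrative, not plant data.

```python
# Ideal yield for the formox reaction: 2 CH3OH + O2 -> 2 CH2O + 2 H2O.
# Assumes complete conversion (real plants achieve less); molar masses
# are standard values in g/mol.
M_CH3OH = 32.04
M_CH2O = 30.03

def formaldehyde_yield(methanol_kg: float) -> float:
    """Mass of CH2O (kg) from a given mass of CH3OH, at a 1:1 mole ratio."""
    moles = methanol_kg * 1000 / M_CH3OH      # mol of methanol
    return moles * M_CH2O / 1000              # kg of formaldehyde

print(f"{formaldehyde_yield(1000):.0f} kg")   # ~937 kg CH2O per tonne CH3OH
```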
Methylotrophic microbes convert methanol into formaldehyde and energy via methanol dehydrogenase:

CH3OH → CH2O + 2 e− + 2 H+

Other routes to formaldehyde include oxidative demethylations, semicarbazide-sensitive amine oxidases, dimethylglycine dehydrogenases, lipid peroxidases, P450 oxidases, and N-methyl group demethylases. Formaldehyde is catabolized by the alcohol dehydrogenase ADH5 and the aldehyde dehydrogenase ALDH2.

Organic chemistry

Formaldehyde is a building block in the synthesis of many other compounds of specialised and industrial significance. It exhibits most of the chemical properties of other aldehydes but is more reactive.

Polymerization and hydration

Monomeric CH2O is a gas and is rarely encountered in the laboratory. Aqueous formaldehyde, unlike some other small aldehydes (which need specific conditions to oligomerize through aldol condensation), oligomerizes spontaneously under ordinary conditions. The trimer 1,3,5-trioxane, (CH2O)3, is a typical oligomer. Many cyclic oligomers of other sizes have been isolated. Similarly, formaldehyde hydrates to give the geminal diol methanediol, which condenses further to form hydroxy-terminated oligomers HO(CH2O)nH. The polymer is called paraformaldehyde. The higher the concentration of formaldehyde, the further the equilibrium shifts towards polymerization. Diluting with water or increasing the solution temperature, as well as adding alcohols (such as methanol or ethanol), lowers that tendency. Gaseous formaldehyde polymerizes at active sites on vessel walls, but the mechanism of the reaction is unknown. Small amounts of hydrogen chloride, boron trifluoride, or stannic chloride present in gaseous formaldehyde provide a catalytic effect and make the polymerization rapid.

Cross-linking reactions

Formaldehyde forms cross-links by first combining with a protein to form a methylol, which loses a water molecule to form a Schiff base. The Schiff base can then react with DNA or protein to create a cross-linked product. This reaction is the basis for the most common process of chemical fixation.

Oxidation and reduction

Formaldehyde is readily oxidized by atmospheric oxygen into formic acid. For this reason, commercial formaldehyde is typically contaminated with formic acid. Formaldehyde can be hydrogenated into methanol. In the Cannizzaro reaction, formaldehyde and base react to produce formic acid and methanol, a disproportionation reaction.

Hydroxymethylation and chloromethylation

Formaldehyde reacts with many compounds, resulting in hydroxymethylation:

X−H + CH2O → X−CH2OH (X = R2N, RC(O)NR', SH)

The resulting hydroxymethyl derivatives typically react further. Thus, amines give hexahydro-1,3,5-triazines:

3 RNH2 + 3 CH2O → (RNCH2)3 + 3 H2O

Similarly, when combined with hydrogen sulfide, it forms trithiane:

3 CH2O + 3 H2S → (CH2S)3 + 3 H2O

In the presence of acids, it participates in electrophilic aromatic substitution reactions with aromatic compounds, resulting in hydroxymethylated derivatives:

ArH + CH2O → ArCH2OH

When conducted in the presence of hydrogen chloride, the product is the chloromethyl compound, as described in the Blanc chloromethylation. If the arene is electron-rich, as in phenols, elaborate condensations ensue. With 4-substituted phenols one obtains calixarenes. Phenol results in polymers.

Other reactions

Many amino acids react with formaldehyde. Cysteine converts to thioproline.

Uses

Industrial applications

Formaldehyde is a common precursor to more complex compounds and materials.
In approximate order of decreasing consumption, products generated from formaldehyde include urea formaldehyde resin, melamine resin, phenol formaldehyde resin, polyoxymethylene plastics, 1,4-butanediol, and methylene diphenyl diisocyanate. The textile industry uses formaldehyde-based resins as finishers to make fabrics crease-resistant. When condensed with phenol, urea, or melamine, formaldehyde produces, respectively, hard thermoset phenol formaldehyde resin, urea formaldehyde resin, and melamine resin. These polymers are permanent adhesives used in plywood and carpeting. They are also foamed to make insulation, or cast into moulded products. Production of formaldehyde resins accounts for more than half of formaldehyde consumption. Formaldehyde is also a precursor to polyfunctional alcohols such as pentaerythritol, which is used to make paints and explosives. Other formaldehyde derivatives include methylene diphenyl diisocyanate, an important component in polyurethane paints and foams, and hexamine, which is used in phenol-formaldehyde resins as well as the explosive RDX. Condensation with acetaldehyde affords pentaerythritol, a chemical necessary in synthesizing PETN, a high explosive.

Niche uses

Disinfectant and biocide

An aqueous solution of formaldehyde can be useful as a disinfectant, as it kills most bacteria and fungi (including their spores). It is used as an additive in vaccine manufacturing to inactivate toxins and pathogens. Formaldehyde releasers are used as biocides in personal care products such as cosmetics. Although present at levels not normally considered harmful, they are known to cause allergic contact dermatitis in certain sensitised individuals. Aquarists use formaldehyde as a treatment for the parasites Ichthyophthirius multifiliis and Cryptocaryon irritans. Formaldehyde is one of the main disinfectants recommended for destroying anthrax. Formaldehyde is also approved for use in the manufacture of animal feeds in the US. It is an antimicrobial agent used to keep complete animal feeds or feed ingredients Salmonella-negative for up to 21 days.

Tissue fixative and embalming agent

Formaldehyde preserves or fixes tissue or cells. The process involves cross-linking of primary amino groups. The European Union has banned the use of formaldehyde as a biocide (including embalming) under the Biocidal Products Directive (98/8/EC) due to its carcinogenic properties. Countries with a strong tradition of embalming corpses, such as Ireland and other colder-weather countries, have raised concerns. Despite reports to the contrary, no decision on the inclusion of formaldehyde on Annex I of the Biocidal Products Directive for product-type 22 (embalming and taxidermist fluids) had yet been made. Formaldehyde-based crosslinking is exploited in ChIP-on-chip or ChIP-sequencing genomics experiments, where DNA-binding proteins are cross-linked to their cognate binding sites on the chromosome and analyzed to determine what genes are regulated by the proteins. Formaldehyde is also used as a denaturing agent in RNA gel electrophoresis, preventing RNA from forming secondary structures. A solution of 4% formaldehyde fixes pathology tissue specimens at about one mm per hour at room temperature (the dilution arithmetic for preparing such a solution from formalin stock is sketched below).

Drug testing

Formaldehyde and 18 M (concentrated) sulfuric acid make Marquis reagent, which can identify alkaloids and other compounds.
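As a small worked example tying together the 37%-by-mass formalin stock described earlier and the 4% working fixative mentioned above, here is a minimal sketch of the standard dilution arithmetic (C1·V1 = C2·V2); the function name, target volume, and concentrations are illustrative assumptions, not a laboratory protocol.

```python
# Simple dilution arithmetic (C1*V1 = C2*V2) for preparing a working
# formaldehyde solution from 37% (by mass) formalin stock. The target
# concentration and volumes are illustrative, not a protocol.
def stock_volume_ml(c_stock: float, c_target: float, v_target_ml: float) -> float:
    """Volume of stock needed to make v_target_ml at c_target concentration."""
    return c_target * v_target_ml / c_stock

v = stock_volume_ml(c_stock=37.0, c_target=4.0, v_target_ml=1000.0)
print(f"{v:.0f} mL stock + {1000 - v:.0f} mL water")  # ~108 mL + ~892 mL
```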
Photography

In photography, formaldehyde is used in low concentrations as a stabilizer in the final wash step of the C-41 (color negative film) process, as well as in the pre-bleach step of the E-6 process, to make it unnecessary in the final wash. Due to improvements in dye coupler chemistry, more modern (2006 or later) E-6 and C-41 films do not need formaldehyde, as their dyes are already stable.

Safety

In view of its widespread use, toxicity, and volatility, formaldehyde poses a significant danger to human health. In 2011, the US National Toxicology Program described formaldehyde as "known to be a human carcinogen".

Chronic inhalation

Concerns are associated with chronic (long-term) exposure by inhalation, as may happen from thermal or chemical decomposition of formaldehyde-based resins and the production of formaldehyde resulting from the combustion of a variety of organic compounds (for example, exhaust gases). As formaldehyde resins are used in many construction materials, it is one of the more common indoor air pollutants. At concentrations above 0.1 ppm in air, formaldehyde can irritate the eyes and mucous membranes. Formaldehyde inhaled at this concentration may cause headaches, a burning sensation in the throat, and difficulty breathing, and can trigger or aggravate asthma symptoms. The CDC considers formaldehyde a systemic poison. Formaldehyde poisoning can cause permanent changes in the nervous system's functions. A 1988 Canadian study of houses with urea-formaldehyde foam insulation found that formaldehyde levels as low as 0.046 ppm were positively correlated with eye and nasal irritation. A 2009 review of studies showed a strong association between exposure to formaldehyde and the development of childhood asthma.

A theory for the carcinogenesis of formaldehyde was proposed in 1978. In 1987 the United States Environmental Protection Agency (EPA) classified it as a probable human carcinogen, and after more studies the WHO International Agency for Research on Cancer (IARC) in 1995 also classified it as a probable human carcinogen. Further information and evaluation of all known data led the IARC to reclassify formaldehyde as a known human carcinogen associated with nasal sinus cancer and nasopharyngeal cancer. Studies in 2009 and 2010 have also shown a positive correlation between exposure to formaldehyde and the development of leukemia, particularly myeloid leukemia. Nasopharyngeal and sinonasal cancers are relatively rare, with a combined annual incidence in the United States of fewer than 4,000 cases. About 30,000 cases of myeloid leukemia occur in the United States each year. Some evidence suggests that workplace exposure to formaldehyde contributes to sinonasal cancers. Professionals exposed to formaldehyde in their occupation, such as funeral industry workers and embalmers, showed an increased risk of leukemia and brain cancer compared with the general population. Other factors are important in determining individual risk for the development of leukemia or nasopharyngeal cancer. In yeast, formaldehyde has been found to perturb DNA repair and cell cycle pathways.

In the residential environment, formaldehyde exposure comes from a number of routes; formaldehyde can be emitted by treated wood products, such as plywood or particle board, but it is produced by paints, varnishes, floor finishes, and cigarette smoking as well. In July 2016, the U.S. EPA released a prepublication version of its final rule on Formaldehyde Emission Standards for Composite Wood Products.
These new rules affect manufacturers, importers, distributors, and retailers of products containing composite wood, including fiberboard, particleboard, and various laminated products, who must comply with more stringent record-keeping and labeling requirements. The U.S. EPA allows no more than 0.016 ppm formaldehyde in the air in new buildings constructed for that agency. A U.S. EPA study found that a new home measured 0.076 ppm when brand new and 0.045 ppm after 30 days. The Federal Emergency Management Agency (FEMA) has also announced limits on the formaldehyde levels in trailers purchased by that agency. The EPA recommends the use of "exterior-grade" pressed-wood products with phenol instead of urea resin to limit formaldehyde exposure, since pressed-wood products containing formaldehyde resins are often a significant source of formaldehyde in homes.

The eyes are most sensitive to formaldehyde exposure. The lowest level at which many people can begin to smell formaldehyde ranges between 0.05 and 1 ppm. The maximum concentration value at the workplace is 0.3 ppm. In controlled chamber studies, individuals begin to sense eye irritation at about 0.5 ppm; 5 to 20 percent report eye irritation at 0.5 to 1 ppm; and greater certainty for sensory irritation occurred at 1 ppm and above. While some agencies have used a level as low as 0.1 ppm as a threshold for irritation, the expert panel found that a level of 0.3 ppm would protect against nearly all irritation. In fact, the expert panel found that a level of 1.0 ppm would avoid eye irritation, the most sensitive endpoint, in 75–95% of all people exposed.

Formaldehyde levels in building environments are affected by a number of factors. These include the potency of formaldehyde-emitting products present, the ratio of the surface area of emitting materials to the volume of the space, environmental factors, product age, interactions with other materials, and ventilation conditions. Formaldehyde is emitted by a variety of construction materials, furnishings, and consumer products. The three products that emit the highest concentrations are medium density fiberboard, hardwood plywood, and particle board. Environmental factors such as temperature and relative humidity can elevate levels because formaldehyde has a high vapor pressure. Formaldehyde levels from building materials are highest when a building first opens, because the materials have had less time to off-gas; levels decrease over time as the sources become depleted. In operating rooms, formaldehyde is produced as a byproduct of electrosurgery and is present in surgical smoke, exposing surgeons and healthcare workers to potentially unsafe concentrations.

Formaldehyde levels in air can be sampled and tested in several ways, including impinger, treated sorbent, and passive monitors. The National Institute for Occupational Safety and Health (NIOSH) has measurement methods numbered 2016, 2541, 3500, and 3800. In June 2011, the twelfth edition of the National Toxicology Program (NTP) Report on Carcinogens (RoC) changed the listing status of formaldehyde from "reasonably anticipated to be a human carcinogen" to "known to be a human carcinogen." Concurrently, a National Academy of Sciences (NAS) committee was convened and issued an independent review of the draft U.S. EPA IRIS assessment of formaldehyde, providing a comprehensive health effects assessment and quantitative estimates of human risks of adverse effects.
Acute irritation and allergic reaction

For most people, irritation from formaldehyde is temporary and reversible, although formaldehyde can cause allergies and is part of the standard patch test series. In 2005–06, it was the seventh-most-prevalent allergen in patch tests (9.0%). People with formaldehyde allergy are advised to avoid formaldehyde releasers as well (e.g., Quaternium-15, imidazolidinyl urea, and diazolidinyl urea). People who suffer allergic reactions to formaldehyde tend to display lesions on the skin in the areas that have had direct contact with the substance, such as the neck or thighs (often due to formaldehyde released from permanent press finished clothing) or dermatitis on the face (typically from cosmetics). Formaldehyde has been banned in cosmetics in both Sweden and Japan.

Other routes

Formaldehyde occurs naturally, and is "an essential intermediate in cellular metabolism in mammals and humans." According to the American Chemistry Council, "Formaldehyde is found in every living system—from plants to animals to humans. It metabolizes quickly in the body, breaks down rapidly, is not persistent and does not accumulate in the body." The twelfth edition of the NTP Report on Carcinogens notes that "food and water contain measurable concentrations of formaldehyde, but the significance of ingestion as a source of formaldehyde exposure for the general population is questionable." Food formaldehyde generally occurs in a bound form, and formaldehyde is unstable in aqueous solution. In humans, ingestion of as little as 30 mL of 37% formaldehyde solution has been reported to cause death. Other symptoms associated with ingesting such a solution include gastrointestinal damage (vomiting, abdominal pain) and systemic damage (dizziness). Testing for formaldehyde is done on blood and/or urine by gas chromatography–mass spectrometry. Other methods include infrared detection, gas detector tubes, etc., of which high-performance liquid chromatography is the most sensitive.

Regulation

Several web articles claim that formaldehyde has been banned from manufacture or import into the European Union (EU) under REACH (Registration, Evaluation, Authorization, and restriction of Chemical substances) legislation. That is a misconception, as formaldehyde is neither listed in Annex I of Regulation (EC) No 689/2008 (the regulation on the export and import of dangerous chemicals) nor on a priority list for risk assessment. However, formaldehyde is banned from use in certain applications (preservatives for liquid-cooling and processing systems, slimicides, metalworking-fluid preservatives, and antifouling products) under the Biocidal Products Directive. In the EU, the maximum allowed concentration of formaldehyde in finished products is 0.2%, and any product that exceeds 0.05% has to include a warning that the product contains formaldehyde.

In the United States, Congress passed a bill on July 7, 2010, regarding the use of formaldehyde in hardwood plywood, particle board, and medium density fiberboard. The bill limited the allowable amount of formaldehyde emissions from these wood products to 0.09 ppm, and required companies to meet this standard by January 2013. The final U.S. EPA rule specified maximum emissions of "0.05 ppm formaldehyde for hardwood plywood, 0.09 ppm formaldehyde for particleboard, 0.11 ppm formaldehyde for medium-density fiberboard, and 0.13 ppm formaldehyde for thin medium-density fiberboard." Formaldehyde was declared a toxic substance by the 1999 Canadian Environmental Protection Act.
The FDA is proposing a ban on hair relaxers containing formaldehyde due to cancer concerns.

Contaminant in food

Scandals broke out in both the 2005 Indonesia food scare and the 2007 Vietnam food scare regarding the addition of formaldehyde to foods to extend shelf life. In 2011, after a four-year absence, Indonesian authorities found foods with formaldehyde being sold in markets in a number of regions across the country. In August 2011, the Central Jakarta Livestock and Fishery Sub-Department found cendol containing 10 parts per million of formaldehyde in at least two Carrefour supermarkets. In 2014, the owner of two noodle factories in Bogor, Indonesia, was arrested for using formaldehyde in noodles; 50 kg of formaldehyde was confiscated. Foods known to be contaminated included noodles, salted fish, and tofu. Chicken and beer were also rumored to be contaminated. In some places, such as China, manufacturers still use formaldehyde illegally as a preservative in foods, which exposes people to formaldehyde ingestion. In the early 1900s, it was frequently added by US milk plants to milk bottles as a method of pasteurization, due to the lack of knowledge and concern regarding formaldehyde's toxicity. In 2011 in Nakhon Ratchasima, Thailand, truckloads of rotten chicken were treated with formaldehyde for sale, in which "a large network", including 11 slaughterhouses run by a criminal gang, was implicated. In 2012, 1 billion rupiah (almost US$100,000) worth of fish imported from Pakistan to Batam, Indonesia, was found laced with formaldehyde.

Formalin contamination of foods has been reported in Bangladesh, with stores and supermarkets selling fruits, fishes, and vegetables that have been treated with formalin to keep them fresh. However, in 2015, a Formalin Control Bill was passed in the Parliament of Bangladesh, with a provision of life-term imprisonment as the maximum punishment, as well as a fine of up to 2,000,000 BDT but not less than 500,000 BDT, for importing, producing, or hoarding formalin without a license.

Formaldehyde was one of the chemicals used in 19th-century industrialised food production that was investigated by Dr. Harvey W. Wiley and his famous "Poison Squad" at the US Department of Agriculture. This led to the 1906 Pure Food and Drug Act, a landmark event in the early history of food regulation in the United States.
Physical sciences
Carbon–oxygen bond
null
63904
https://en.wikipedia.org/wiki/Match
Match
A match is a tool for starting a fire. Typically, matches are made of small wooden sticks or stiff paper. One end is coated with a material that can be ignited by friction generated by striking the match against a suitable surface. Wooden matches are packaged in matchboxes, and paper matches are partially cut into rows and stapled into matchbooks. The coated end of a match, known as the match "head", consists of a bead of active ingredients and binder, often colored for easier inspection. There are two main types of matches: safety matches, which can be struck only against a specially prepared surface, and strike-anywhere matches, for which any suitably frictional surface can be used.

Etymology

The word match derives from Old French mèche, referring to the wick of a candle. Historically, the term match referred to lengths of cord (later cambric) impregnated with chemicals and allowed to burn continuously. These were used to light fires and fire guns (see matchlock) and cannons (see linstock), and to detonate explosive devices such as dynamite sticks. Such matches were characterised by their burning speed, i.e. quick match and slow match. Depending on its formulation, a slow match burns at a rate of around 30 cm (1 ft) per hour and a quick match at 4 to 60 cm per minute. The modern equivalent of a match (in the sense of a burnable cord) is the simple fuse, such as the visco fuse still used in pyrotechnics to obtain a controlled time delay before ignition. The original meaning of the word still persists in some pyrotechnics terms, such as black match (a black-powder-impregnated fuse) and Bengal match (a firework akin to sparklers producing a relatively long-burning, colored flame). However, when friction matches became commonplace, the term match came to refer mainly to these.

History

Early matches

A note in the text Cho Keng Lu, written in 1366, describes a sulfur match, small sticks of pinewood impregnated with sulfur, used in China by "impoverished court ladies" in AD 577 during the conquest of Northern Qi. During the Five Dynasties and Ten Kingdoms (AD 907–960), a book called the Records of the Unworldly and the Strange, written by the Chinese author Tao Gu in about 950, stated:

If there occurs an emergency at night it may take some time to make a light to light a lamp. But an ingenious man devised the system of impregnating little sticks of pinewood with sulfur and storing them ready for use. At the slightest touch of fire, they burst into flame. One gets a little flame like an ear of corn. This marvelous thing was formerly called a "light-bringing slave", but afterward when it became an article of commerce its name was changed to 'fire inch-stick'.

Another text, Wu Lin Chiu Shih, dated from 1270 AD, lists sulfur matches as something that was sold in the markets of Hangzhou, around the time of Marco Polo's visit. The matches were known as fa chu or tshui erh.

Chemical matches

Before the use of matches, fires were sometimes lit using a burning glass (a lens) to focus the sun on tinder, a method that could only work on sunny days. Another more common method was igniting tinder with sparks produced by striking flint and steel, or by sharply increasing air pressure in a fire piston. Early work had been done by the alchemist Hennig Brand, who discovered the flammable nature of phosphorus in 1669.
Others, including Robert Boyle and his assistant, Ambrose Godfrey, continued these experiments in the 1680s with phosphorus and sulfur, but their efforts did not produce practical and inexpensive methods for generating fires. A number of different ways were employed to light smoking tobacco. One was the use of a spill, a thin object something like a thin candle, a rolled paper, or a straw, which would be lit from a nearby, already existing flame and then used to light the cigar or pipe; spills were most often kept near the fireplace in a spill vase. Another method saw the use of a striker, a tool that looked like scissors, but with flint on one "blade" and steel on the other. These would be rubbed together, ultimately producing sparks. If neither of these two was available, one could also use ember tongs to pick up a coal from a fire and light the tobacco directly.

The first modern, self-igniting match was invented in 1805 by Jean Chancel, assistant to Professor Louis Jacques Thénard of Paris. The head of the match consisted of a mixture of potassium chlorate, sulfur, gum arabic, and sugar. The match was ignited by dipping its tip in a small asbestos bottle filled with sulfuric acid. This kind of match was quite expensive, however, and its use was also relatively dangerous, so Chancel's matches never really became widely adopted or commonplace. This approach to match making was further refined in the following decades, culminating in the "Promethean match" patented by Samuel Jones of London in 1828. His match consisted of a small glass capsule containing a chemical composition of sulfuric acid colored with indigo and coated on the exterior with potassium chlorate, all of which was wrapped up in rolls of paper. Ignition of this particular form of match was achieved by crushing the capsule with a pair of pliers, mixing and releasing the ingredients so that it became alight.

In London, similar matches meant for lighting cigars were introduced in 1849 by Heurtner, who had a shop called the Lighthouse in the Strand. One version that he sold was called the "Euperion" (sometimes "Empyrion"), which was popular for kitchen use and nicknamed "Hugh Perry", while another, meant for outdoor use, was called a "Vesuvian" or "flamer". The head was large and contained niter, charcoal, and wood dust, and had a phosphorus tip. The handle was large and made of hardwood so as to burn vigorously and last for a while. Some even had glass stems. Both Vesuvians and Prometheans had a bulb of sulfuric acid at the tip which had to be broken to start the reaction.

Samuel Jones introduced fuzees for lighting cigars and pipes in 1832. A similar invention was patented in 1839 by John Hucks Stevens in America. In 1832, William Newton patented the "wax vesta" in England. It consisted of a wax stem that embedded cotton threads and had a tip of phosphorus. Variants known as "candle matches" were made by Savaresse and Merckel in 1836. John Hucks Stevens also patented a safety version of the friction match in 1839.

Friction matches

Chemical matches were unable to make the leap into mass production, due to their expense, their cumbersome nature, and the inherent danger of using them. An alternative was to create ignition through the friction produced by rubbing two rough surfaces together. An early example was made by François Derosne in 1816. His crude match, called a briquet phosphorique, used a sulfur-tipped match scraped inside a tube coated internally with phosphorus.
It was both inconvenient and unsafe. The first successful friction match was invented in 1826 by John Walker, an English chemist and druggist from Stockton-on-Tees, County Durham. He developed a keen interest in trying to find a means of obtaining fire easily. Several chemical mixtures were already known that would ignite by a sudden explosion, but it had not been found possible to transmit the flame to a slow-burning substance like wood. While Walker was preparing a lighting mixture on one occasion, a match that had been dipped in it took fire by an accidental friction upon the hearth. He at once appreciated the practical value of the discovery and started making friction matches. They consisted of wooden splints or sticks of cardboard coated with sulfur and tipped with a mixture of sulfide of antimony, chlorate of potash, and gum. The treatment with sulfur helped the splints to catch fire, and the odor was improved by the addition of camphor. The price of a box of 50 matches was one shilling. With each box was supplied a piece of sandpaper, folded double, through which the match had to be drawn to ignite it. Walker did not name the matches "Congreves" in honour of the inventor and rocket pioneer Sir William Congreve, as is sometimes stated. The congreves were the invention of Charles Sauria, a French chemistry student at the time. Walker did not divulge the exact composition of his matches. Between 1827 and 1829, Walker made about 168 sales of his matches. They were, however, dangerous, and flaming balls sometimes fell to the floor, burning carpets and dresses, leading to their ban in France and Germany. Walker either refused or neglected to patent his invention.

In 1829, the Scots inventor Sir Isaac Holden invented an improved version of Walker's match and demonstrated it to his class at Castle Academy in Reading, Berkshire. Holden did not patent his invention, and claimed that one of his pupils wrote to his father, Samuel Jones, a chemist in London, who commercialised the process. A version of Holden's match was patented by Samuel Jones, and these were sold as lucifer matches. These early matches had a number of problems: an initial violent reaction, an unsteady flame, and unpleasant odor and fumes. Lucifers could ignite explosively, sometimes throwing sparks a considerable distance. Lucifers were manufactured in the United States by Ezekial Byam. The term "lucifer" persisted as slang for a match into the 20th century; for example, the song "Pack Up Your Troubles" includes the line "while you've a lucifer to light your fag". Matches are still called "lucifers" in Dutch.

Lucifers were quickly replaced after 1830 by matches made according to the process devised by the Frenchman Charles Sauria, who substituted white phosphorus for the antimony sulfide. These new phosphorus matches had to be kept in airtight metal boxes, but became popular and went by the name of loco foco ("crazy fire") in the United States, from which was derived the name of a political party. The earliest American patent for the phosphorus friction match was granted in 1836 to Alonzo Dwight Phillips of Springfield, Massachusetts. From 1830 to 1890, the composition of these matches remained largely unchanged, although some improvements were made. In 1843 William Ashgard replaced the sulfur with beeswax, reducing the pungency of the fumes. This was replaced by paraffin in 1862 by Charles W. Smith, resulting in what were called "parlor matches".
From 1870 the end of the splint was fireproofed by impregnation with fire-retardant chemicals such as alum, sodium silicate, and other salts, resulting in what was commonly called a "drunkard's match", which prevented the accidental burning of the user's fingers. Other advances were made for the mass manufacture of matches. Early matches were made from blocks of wood with cuts separating the splints but leaving their bases attached. Later versions were made in the form of thin combs. The splints would be broken away from the comb when required.

A noiseless match was invented in 1836 by the Hungarian János Irinyi, who was a student of chemistry. An unsuccessful experiment by his professor, Meissner, gave Irinyi the idea to replace potassium chlorate with lead dioxide in the head of the phosphorus match. He liquefied phosphorus in warm water and shook it in a glass vial until the two liquids emulsified. He mixed the phosphorus with lead dioxide and gum arabic, poured the paste-like mass into a jar, and dipped the pine sticks into the mixture and let them dry. When he tried them that evening, all of them lit evenly. He sold the invention and production rights for these noiseless matches to István Rómer, a Hungarian pharmacist living in Vienna, for 60 florins (about 22.5 oz t of silver). As a match manufacturer, Rómer became rich, and Irinyi went on to publish articles and a textbook on chemistry, and founded several match factories.

Replacement of white phosphorus

Those involved in the manufacture of the new phosphorus matches were afflicted with phossy jaw and other bone disorders, and there was enough white phosphorus in one pack to kill a person. Deaths and suicides from eating the heads of matches became frequent. The earliest report of phosphorus necrosis was made in 1845 by Lorinser in Vienna, and a New York surgeon published a pamphlet with notes on nine cases. The conditions of working-class women at the Bryant & May factories led to the London matchgirls strike of 1888. The strike focused on the severe health complications of working with white phosphorus, such as phossy jaw. The social activist Annie Besant published an article in her halfpenny weekly paper The Link on 23 June 1888. A strike fund was set up, and some newspapers collected donations from readers. The women and girls also solicited contributions. Members of the Fabian Society, including George Bernard Shaw, Sidney Webb, and Graham Wallas, were involved in the distribution of the cash collected. The strike and negative publicity led to changes being made to limit the health effects of the inhalation of white phosphorus. Attempts were made to reduce the ill effects on workers through the introduction of inspections and regulations.

Anton Schrötter von Kristelli discovered in 1850 that heating white phosphorus at 250 °C in an inert atmosphere produced a red allotropic form, which did not fume in contact with air. It was suggested that this would make a suitable substitute in match manufacture, although it was slightly more expensive. Two French chemists, Henri Savene and Emile David Cahen, proved in 1898 that the addition of phosphorus sesquisulfide meant that the substance was not poisonous, that it could be used in a "strike-anywhere" match, and that the match heads were not explosive. The British company Albright and Wilson was the first to produce phosphorus sesquisulfide matches commercially.
The company developed a safe means of making commercial quantities of phosphorus sesquisulfide in 1899 and started selling it to match manufacturers. However, white phosphorus continued to be used, and its serious effects led many countries to ban its use. Finland prohibited the use of white phosphorus in 1872, followed by Denmark in 1874, France in 1897, Switzerland in 1898, and the Netherlands in 1901. An agreement, the Berne Convention, was reached at Bern, Switzerland, in September 1906, which banned the use of white phosphorus in matches. This required each country to pass laws prohibiting the use of white phosphorus in matches. The United Kingdom passed a law in 1908 prohibiting its use in matches after 31 December 1910. The United States did not pass a law, but instead placed a "punitive tax" in 1913 on white phosphorus-based matches, one so high as to render their manufacture financially impractical, and Canada banned them in 1914. India and Japan banned them in 1919; China followed, banning them in 1925. In 1901 Albright and Wilson started making phosphorus sesquisulfide at their Niagara Falls, New York plant for the US market, but American manufacturers continued to use white phosphorus matches. The Niagara Falls plant made them until 1910, when the United States Congress forbade the shipment of white phosphorus matches in interstate commerce.

Safety matches

The dangers of white phosphorus in the manufacture of matches led to the development of the "hygienic" or "safety match". The major innovation in its development was the use of red phosphorus, not on the head of the match but instead on a specially designed striking surface. Arthur Albright developed the industrial process for large-scale manufacture of red phosphorus after Schrötter's discoveries became known. By 1851, his company was producing the substance by heating white phosphorus in a sealed pot at a specific temperature. He exhibited his red phosphorus in 1851 at The Great Exhibition, held at The Crystal Palace in London.

The idea of creating a specially designed striking surface was developed in 1844 by the Swede Gustaf Erik Pasch. Pasch patented the use of red phosphorus in the striking surface. He found that this could ignite heads that did not need to contain white phosphorus. Johan Edvard Lundström and his younger brother Carl Frans Lundström (1823–1917) started a large-scale match industry in Jönköping, Sweden, around 1847, but the improved safety match was not introduced until around 1850–55. The Lundström brothers had obtained a sample of red phosphorus matches from Albright at The Great Exhibition but had misplaced it, and so did not try the matches until just before the Paris Exhibition of 1855, when they found that the matches were still usable. In 1858 their company produced around 12 million matchboxes.

The safety of true "safety matches" is derived from the separation of the reactive ingredients between a match head on the end of a paraffin-impregnated splint and the special striking surface (in addition to the safety aspect of replacing the white phosphorus with red phosphorus). The idea of separating the chemicals had been introduced in 1859 in the form of two-headed matches known in France as Allumettes Androgynes. These were sticks with one end made of potassium chlorate and the other of red phosphorus. They had to be broken and the heads rubbed together. There was, however, a risk of the heads rubbing against each other accidentally in their box.
Such dangers were removed when the striking surface was moved to the outside of the box. The development of a specialized matchbook with both matches and a striking surface occurred in the 1890s with the American Joshua Pusey, who sold his patent to the Diamond Match Company. The striking surface on modern matchboxes is typically composed of 25% powdered glass or other abrasive material, 50% red phosphorus, 5% neutralizer, 4% carbon black, and 16% binder (a batch-scaling sketch of these proportions appears at the end of this article); the match head is typically composed of 45–55% potassium chlorate, with a little sulfur and starch, a neutralizer (ZnO or CaCO3), 20–40% of siliceous filler, diatomite, and glue. Safety matches ignite due to the extreme reactivity of phosphorus with the potassium chlorate in the match head. When the match is struck, the phosphorus and chlorate mix in a small amount, forming something akin to the explosive Armstrong's mixture, which ignites due to the friction. The red color of the match head is due to added red dye, not to red phosphorus content.

The Swedes long held a virtual worldwide monopoly on safety matches, with the industry mainly situated in Jönköping, by 1903 called Jönköpings & Vulcans Tändsticksfabriks AB, today Swedish Match. In France, they sold the rights to their safety match patent to Coigent Père & Fils of Lyon, but Coigent contested the payment in the French courts, on the basis that the invention was known in Vienna before the Lundström brothers patented it. The British match manufacturer Bryant and May visited Jönköping in 1858 to try to obtain a supply of safety matches, but was unsuccessful. In 1862 it established its own factory and bought the rights for the British safety match patent from the Lundström brothers.

Varieties of matches today

Friction matches made with white phosphorus, as well as those made from phosphorus sesquisulfide, can be struck on any suitable surface. They have remained particularly popular in the United States, even when safety matches had become common in Europe, and are still widely used today around the world, including in many developing countries, for such uses as camping, outdoor activities, emergency/survival situations, and stocking homemade survival kits. However, strike-anywhere matches are banned on all kinds of aircraft under the "dangerous goods" classification U.N. 1331, Matches, strike-anywhere. Safety matches are classified as dangerous goods, "U.N. 1944, Matches, safety". They are not universally forbidden on aircraft; however, they must be declared as dangerous goods, and individual airlines or countries may impose tighter restrictions.

Storm matches, also known as lifeboat matches or flare matches, are often included in survival kits. They have a strikeable tip similar to a normal match, but the combustible compound (including an oxidiser) continues down the length of the stick, coating half or more of the entire matchstick. The match also has a waterproof coating (which often makes the match more difficult to light), and storm matches are often longer than standard matches. As a result of the combustible coating, storm matches burn strongly even in strong winds, and can even spontaneously re-ignite after being briefly immersed in water.

Hobbyist collection

The hobby of collecting match-related items, such as matchcovers and matchbox labels, is known as phillumeny.
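As a simple illustration of the striking-surface proportions quoted above, here is a minimal sketch that scales those percentages to a hypothetical batch; the fractions come from the text, while the batch size, dictionary, and function name are invented for the example.

```python
# Scaling the striking-surface recipe quoted in the text to a
# hypothetical batch. Fractions are the nominal figures from the
# article; the 500 g batch is illustrative, not an industrial spec.
STRIKING_SURFACE = {
    "powdered glass/abrasive": 0.25,
    "red phosphorus": 0.50,
    "neutralizer": 0.05,
    "carbon black": 0.04,
    "binder": 0.16,
}

def batch_masses(total_g, recipe):
    """Gram amounts of each component for a batch of the given total mass."""
    assert abs(sum(recipe.values()) - 1.0) < 1e-9, "fractions must sum to 1"
    return {name: total_g * frac for name, frac in recipe.items()}

for name, grams in batch_masses(500.0, STRIKING_SURFACE).items():
    print(f"{name}: {grams:.0f} g")   # e.g. red phosphorus: 250 g
```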
Technology
Lighting
null
63915
https://en.wikipedia.org/wiki/Static%20random-access%20memory
Static random-access memory
Static random-access memory (static RAM or SRAM) is a type of random-access memory (RAM) that uses latching circuitry (a flip-flop) to store each bit. SRAM is volatile memory; data is lost when power is removed. The static qualifier differentiates SRAM from dynamic random-access memory (DRAM): SRAM will hold its data permanently in the presence of power, while data in DRAM decays in seconds and thus must be periodically refreshed. SRAM is faster than DRAM, but it is more expensive in terms of silicon area and cost. Typically, SRAM is used for the cache and internal registers of a CPU, while DRAM is used for a computer's main memory.

History

Semiconductor bipolar SRAM was invented in 1963 by Robert Norman at Fairchild Semiconductor. Metal–oxide–semiconductor SRAM (MOS-SRAM) was invented in 1964 by John Schmidt at Fairchild Semiconductor; the first device was a 64-bit MOS p-channel SRAM. SRAM has been the main driver behind each new CMOS-based fabrication process since CMOS was invented in the 1960s.

In 1964, Arnold Farber and Eugene Schlig, working for IBM, created a hard-wired memory cell, using a transistor gate and tunnel diode latch. They replaced the latch with two transistors and two resistors, a configuration that became known as the Farber-Schlig cell. That year they submitted an invention disclosure, but it was initially rejected. In 1965, Benjamin Agusta and his team at IBM created a 16-bit silicon memory chip based on the Farber-Schlig cell, with 84 transistors, 64 resistors, and 4 diodes. In April 1969, Intel introduced its first product, the Intel 3101, an SRAM chip intended to replace bulky magnetic-core memory modules. Its capacity was 64 bits, it was based on bipolar junction transistors, and it was designed using rubylith.

Characteristics

Though it can be characterized as volatile memory, SRAM exhibits data remanence. SRAM offers a simple data access model and does not require a refresh circuit. Performance and reliability are good, and power consumption is low when idle. Since SRAM requires more transistors per bit to implement, it is less dense and more expensive than DRAM, and also has a higher power consumption during read or write access. The power consumption of SRAM varies widely depending on how frequently it is accessed.

Applications

Embedded use

Many categories of industrial and scientific subsystems, automotive electronics, and similar embedded systems contain SRAM, which, in this context, may be referred to as ESRAM. Some amount (kilobytes or less) is also embedded in practically all modern appliances, toys, etc. that implement an electronic user interface. SRAM in its dual-ported form is sometimes used for real-time digital signal processing circuits.

In computers

SRAM is also used in personal computers, workstations, routers, and peripheral equipment: CPU register files, internal CPU caches, internal GPU caches and external burst-mode SRAM caches, hard disk buffers, router buffers, etc. LCD screens and printers also normally employ SRAM to hold the image displayed (or to be printed); LCDs can have SRAM in their LCD controllers. SRAM was used for the main memory of many early personal computers such as the ZX80, TRS-80 Model 100, and VIC-20. Some early memory cards in the late 1980s to early 1990s used SRAM as a storage medium, which required a lithium battery to keep the contents of the SRAM.
Integrated on chip SRAM may be integrated on chip for: the RAM in microcontrollers (usually from around 32 bytes to a megabyte), the on-chip caches in more powerful CPUs, such as the x86 family, and many others (from 8 KB, up to many megabytes), the registers and parts of the state-machines used in some microprocessors (see register file), scratchpad memory, application-specific integrated circuits (ASICs) (usually on the order of kilobytes), and in field-programmable gate arrays (FPGAs) and complex programmable logic devices (CPLDs). Hobbyists Hobbyists, specifically home-built processor enthusiasts, often prefer SRAM due to the ease of interfacing. It is much easier to work with than DRAM as there are no refresh cycles and the address and data buses are often directly accessible. In addition to buses and power connections, SRAM usually requires only three controls: Chip Enable (CE), Write Enable (WE) and Output Enable (OE). In synchronous SRAM, Clock (CLK) is also included. Types of SRAM Non-volatile SRAM Non-volatile SRAM (nvSRAM) has standard SRAM functionality, but it saves the data when the power supply is lost, ensuring preservation of critical information. nvSRAMs are used in a wide range of situations, such as networking, aerospace, and medical applications, among many others, where the preservation of data is critical and where batteries are impractical. Pseudostatic RAM Pseudostatic RAM (PSRAM) is DRAM combined with a self-refresh circuit. It appears externally as slower SRAM, albeit with a density and cost advantage over true SRAM, and without the access complexity of DRAM. By transistor type Bipolar junction transistor (used in TTL and ECL): very fast but with high power consumption. MOSFET (used in CMOS): low power. By numeral system Binary Ternary By function Asynchronous: independent of clock frequency; data in and data out are controlled by address transition. Examples include the ubiquitous 28-pin 8K × 8 and 32K × 8 chips (often but not always named something along the lines of 6264 and 62C256 respectively), as well as similar products up to 16 Mbit per chip. Synchronous: all timings are initiated by the clock edges. Address, data in and other control signals are associated with the clock signals. In the 1990s, asynchronous SRAM was employed for its fast access time. Asynchronous SRAM was used as main memory for small cache-less embedded processors used in everything from industrial electronics and measurement systems to hard disks and networking equipment, among many other applications. Nowadays, synchronous SRAM (e.g. DDR SRAM) is employed instead, just as synchronous DRAM (DDR SDRAM) is used in preference to asynchronous DRAM. A synchronous memory interface is much faster, as access time can be significantly reduced by employing a pipeline architecture. Furthermore, as DRAM is much cheaper than SRAM, SRAM is often replaced by DRAM, especially when a large volume of data is required. SRAM memory is, however, much faster for random (not block / burst) access. Therefore, SRAM memory is mainly used for CPU cache, small on-chip memory, FIFOs or other small buffers. By feature Zero bus turnaround (ZBT): the turnaround is the number of clock cycles it takes to change access to SRAM from write to read and vice versa. The turnaround for ZBT SRAMs, or the latency between read and write cycles, is zero. syncBurst (syncBurst SRAM or synchronous-burst SRAM): features synchronous burst write access to speed up write operations to SRAM.
DDR SRAM: synchronous, single read/write port, double data rate I/O. Quad Data Rate SRAM: synchronous, separate read and write ports, quadruple data rate I/O. Design A typical SRAM cell is made up of six MOSFETs, and is often called a 6T SRAM cell. Each bit in the cell is stored on four transistors (M1, M2, M3, M4) that form two cross-coupled inverters. This storage cell has two stable states which are used to denote 0 and 1. Two additional access transistors serve to control the access to a storage cell during read and write operations. 6T SRAM is the most common kind of SRAM. In addition to 6T SRAM, other kinds of SRAM use 4, 5, 7, 8, 9, 10 (4T, 5T, 7T, 8T, 9T, 10T SRAM), or more transistors per bit. Four-transistor SRAM is quite common in stand-alone SRAM devices (as opposed to SRAM used for CPU caches), implemented in special processes with an extra layer of polysilicon, allowing for very high-resistance pull-up resistors. The principal drawback of using 4T SRAM is increased static power due to the constant current flow through one of the pull-down transistors (M1 or M2). Additional transistors are sometimes used to implement more than one (read and/or write) port, which may be useful in certain types of video memory and register files implemented with multi-ported SRAM circuitry. Generally, the fewer transistors needed per cell, the smaller each cell can be. Since the cost of processing a silicon wafer is relatively fixed, using smaller cells and so packing more bits on one wafer reduces the cost per bit of memory. Memory cells that use fewer than four transistors are possible; however, such 3T or 1T cells are DRAM, not SRAM (even the so-called 1T-SRAM). Access to the cell is enabled by the word line (WL), which controls the two access transistors (M5 and M6 in the 6T cell, or M3 and M4 in the 4T cell), which, in turn, control whether the cell should be connected to the bit lines: BL and its complement, here written /BL. They are used to transfer data for both read and write operations. Although it is not strictly necessary to have two bit lines, both the signal and its inverse are typically provided in order to improve noise margins and speed. During read accesses, the bit lines are actively driven high and low by the inverters in the SRAM cell. This improves SRAM bandwidth compared to DRAM: in a DRAM, the bit line is connected to storage capacitors, and charge sharing causes the bit line to swing upwards or downwards. The symmetric structure of SRAMs also allows for differential signaling, which makes small voltage swings more easily detectable. Another difference with DRAM that contributes to making SRAM faster is that commercial chips accept all address bits at a time. By comparison, commodity DRAMs have the address multiplexed in two halves, i.e. higher bits followed by lower bits, over the same package pins in order to keep their size and cost down. The size of an SRAM with m address lines and n data lines is 2^m words, or 2^m × n bits. The most common word size is 8 bits, meaning that a single byte can be read or written to each of the 2^m different words within the SRAM chip. Several common SRAM chips have 11 address lines (thus a capacity of 2^11 = 2048 words, i.e. 2k) and an 8-bit word, so they are referred to as 2k × 8 SRAM. The dimensions of an SRAM cell on an IC are determined by the minimum feature size of the process used to make the IC. SRAM operation An SRAM cell has three states: Standby: The circuit is idle. Reading: The data has been requested. Writing: Updating the contents.
SRAM operating in read and write modes should have readability and write stability, respectively. The three different states work as follows: Standby If the word line is not asserted, the access transistors M5 and M6 disconnect the cell from the bit lines. The two cross-coupled inverters formed by M1–M4 will continue to reinforce each other as long as they are connected to the supply. Reading In theory, reading only requires asserting the word line WL and reading the SRAM cell state by a single access transistor and bit line, e.g. M6, BL. However, bit lines are relatively long and have large parasitic capacitance. To speed up reading, a more complex process is used in practice: The read cycle is started by precharging both bit lines BL and /BL to a high (logic 1) voltage. Then asserting the word line WL enables both the access transistors M5 and M6, which causes the voltage on one of the bit lines to drop slightly. The BL and /BL lines will then have a small voltage difference between them. A sense amplifier will sense which line has the higher voltage and thus determine whether a 1 or a 0 was stored. The higher the sensitivity of the sense amplifier, the faster the read operation. As the NMOS is more powerful, the pull-down is easier. Therefore, bit lines are traditionally precharged to a high voltage. Many researchers are also trying to precharge at a slightly lower voltage to reduce the power consumption. Writing The write cycle begins by applying the value to be written to the bit lines. To write a 0, a 0 is applied to the bit lines, i.e. setting /BL to 1 and BL to 0. This is similar to applying a reset pulse to an SR-latch, which causes the flip-flop to change state. A 1 is written by inverting the values of the bit lines. WL is then asserted and the value that is to be stored is latched in. This works because the bit line input-drivers are designed to be much stronger than the relatively weak transistors in the cell itself, so they can easily override the previous state of the cross-coupled inverters. In practice, the access NMOS transistors M5 and M6 have to be stronger than either the bottom NMOS (M1, M3) or the top PMOS (M2, M4) transistors. This is easily obtained as PMOS transistors are much weaker than NMOS of the same size. Consequently, when one transistor pair (e.g. M3 and M4) is only slightly overridden by the write process, the gate voltage of the opposite transistor pair (M1 and M2) is also changed. This means that the M1 and M2 transistors can be more easily overridden, and so on. Thus, the cross-coupled inverters magnify the writing process. Bus behavior RAM with an access time of 70 ns will output valid data within 70 ns from the time that the address lines are valid. Some SRAM devices have a page mode, where words of a page (256, 512, or 1024 words) can be read sequentially with a significantly shorter access time (typically approximately 30 ns). The page is selected by setting the upper address lines and then words are sequentially read by stepping through the lower address lines. Production challenges Over 30 years (from 1987 to 2017), with a steadily decreasing transistor size (node size), the footprint-shrinking of the SRAM cell topology itself slowed down, making it harder to pack the cells more densely. One of the reasons is that scaling down transistor size leads to SRAM reliability issues. Careful cell design is necessary to achieve SRAM cells that do not suffer from stability problems, especially when they are being read.
With the introduction of FinFET transistor implementations of SRAM cells, cell-size scaling became increasingly inefficient. Besides issues with size, a significant challenge of modern SRAM cells is static current leakage. The current that flows from the positive supply (Vdd) through the cell to ground increases exponentially as the cell's temperature rises. This power drain occurs in both active and idle states, wasting energy without any useful work being done. Even though over the last 20 years the issue has been partially addressed by the data retention voltage (DRV) technique, with leakage reduction factors ranging from 5 to 10, shrinking node sizes have caused the achievable reduction factor to fall to about 2. With these two issues it became more challenging to develop energy-efficient and dense SRAM memories, prompting the semiconductor industry to look for alternatives such as STT-MRAM and F-RAM. Research In 2019, a French institute reported research on an IC for IoT applications fabricated in a 28 nm process. It was based on fully depleted silicon-on-insulator transistors (FD-SOI) and had a two-ported SRAM memory rail for synchronous/asynchronous accesses and a selective virtual ground (SVGND). The study claimed to reach an ultra-low SVGND current in sleep and read modes by finely tuning its voltage.
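As a recap of the asynchronous control signals (CE, WE, OE) and the read/write behaviour described above, the following is a minimal behavioural sketch in Python. It models an idealised 2k × 8 asynchronous SRAM at the functional level only; the class name and interface are invented for this example and do not correspond to any particular commercial part, and timing, tri-state buses and electrical details are deliberately ignored.

```python
class AsyncSRAM:
    """Functional model of an idealised 2k x 8 asynchronous SRAM.

    Control inputs follow the usual active-low convention:
    ce (chip enable), we (write enable), oe (output enable).
    """

    def __init__(self, address_lines=11, data_bits=8):
        self.words = 1 << address_lines          # 2^m words
        self.mask = (1 << data_bits) - 1         # n-bit data bus
        self.mem = [0] * self.words              # storage array

    def access(self, addr, data=None, ce=0, we=1, oe=0):
        """One idealised bus cycle. Signals are 0 (asserted) or 1 (negated)."""
        if ce:                                   # chip not selected
            return None                          # data bus stays high-impedance
        addr %= self.words
        if we == 0:                              # write cycle: WE asserted
            self.mem[addr] = data & self.mask
            return None
        if oe == 0:                              # read cycle: OE asserted
            return self.mem[addr]
        return None


sram = AsyncSRAM()                               # 2k x 8, as in the text
sram.access(0x3FF, data=0xA5, we=0)              # write 0xA5 to address 0x3FF
assert sram.access(0x3FF) == 0xA5                # read it back
```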
Technology
Volatile memory
null
63973
https://en.wikipedia.org/wiki/Wi-Fi
Wi-Fi
Wi-Fi () is a family of wireless network protocols based on the IEEE 802.11 family of standards, which are commonly used for local area networking of devices and Internet access, allowing nearby digital devices to exchange data by radio waves. These are the most widely used computer networks, used globally in home and small office networks to link devices and to provide Internet access with wireless routers and wireless access points in public places such as coffee shops, restaurants, hotels, libraries, and airports. Wi-Fi is a trademark of the Wi-Fi Alliance, which restricts the use of the term "Wi-Fi Certified" to products that successfully complete interoperability certification testing. Non-compliant hardware is simply referred to as WLAN, and it may or may not work with "Wi-Fi Certified" devices. The Wi-Fi Alliance consists of more than 800 companies from around the world, and over 3.05 billion Wi-Fi-enabled devices are shipped globally each year. Wi-Fi uses multiple parts of the IEEE 802 protocol family and is designed to work well with its wired sibling, Ethernet. Compatible devices can network through wireless access points with each other as well as with wired devices and the Internet. Different versions of Wi-Fi are specified by various IEEE 802.11 protocol standards, with different radio technologies determining radio bands, maximum ranges, and speeds that may be achieved. Wi-Fi most commonly uses the UHF and SHF radio bands, with the 6 gigahertz SHF band used in newer generations of the standard; these bands are subdivided into multiple channels. Channels can be shared between networks, but, within range, only one transmitter can transmit on a channel at a time. Wi-Fi's radio bands work best for line-of-sight use. Many common obstructions, such as walls, pillars, home appliances, etc., may greatly reduce range, but this also helps minimize interference between different networks in crowded environments. The range of an access point is typically about 20 metres (66 feet) indoors, while some access points claim a range of up to 150 metres (490 feet) outdoors. Hotspot coverage can be as small as a single room with walls that block radio waves or as large as many square kilometers using many overlapping access points with roaming permitted between them. Over time, the speed and spectral efficiency of Wi-Fi have increased; some versions of Wi-Fi, running on suitable hardware at close range, can achieve speeds of 9.6 Gbit/s (gigabits per second). History A 1985 ruling by the U.S. Federal Communications Commission released parts of the ISM bands for unlicensed use for communications. These frequency bands include the same 2.4 GHz bands used by equipment such as microwave ovens, and are thus subject to interference. In 1991, in Nieuwegein, the Netherlands, the NCR Corporation and AT&T invented the precursor to 802.11, intended for use in cashier systems, under the name WaveLAN. NCR's Vic Hayes, who held the chair of IEEE 802.11 for ten years, along with Bell Labs engineer Bruce Tuch, approached the Institute of Electrical and Electronics Engineers (IEEE) to create a standard and were involved in designing the initial 802.11b and 802.11a specifications within the IEEE. They have both been subsequently inducted into the Wi-Fi NOW Hall of Fame. In 1989 in Australia, a team of scientists began working on wireless LAN technology.
A prototype test bed for a wireless local area network (WLAN) was developed in 1992 by a team of researchers from the Radiophysics Division of the CSIRO (Commonwealth Scientific and Industrial Research Organisation) in Australia, led by John O'Sullivan. A patent for Wi-Fi was lodged by the CSIRO in 1992. The first version of the 802.11 protocol was released in 1997 and provided up to 2 Mbit/s link speeds. This was updated in 1999 with 802.11b to permit 11 Mbit/s link speeds. In 1999, the Wi-Fi Alliance formed as a trade association to hold the Wi-Fi trademark under which most IEEE 802.11 products are sold. The major commercial breakthrough came with Apple Inc. adopting Wi-Fi for its iBook series of laptops in 1999. It was the first mass consumer product to offer Wi-Fi network connectivity, which was then branded by Apple as AirPort. This was in collaboration with the same group that helped create the standard: Vic Hayes, Bruce Tuch, Cees Links, Rich McGinn, and others from Lucent. In 2000, Radiata, a group of Australian scientists connected to the CSIRO, was the first to use the 802.11a standard on chips connected to a Wi-Fi network. Wi-Fi uses a large number of patents held by many different organizations. Australia, the United States and the Netherlands simultaneously claim the invention of Wi-Fi, and a consensus has not been reached globally. In 2009, the Australian CSIRO was awarded $200 million after a patent settlement with 14 technology companies, with a further $220 million awarded in 2012 after legal proceedings with 23 companies. In 2016, the CSIRO's WLAN prototype test bed was chosen as Australia's contribution to the exhibition A History of the World in 100 Objects, held in the National Museum of Australia. Etymology and terminology The name Wi-Fi, commercially used at least as early as August 1999, was coined by the brand-consulting firm Interbrand. The Wi-Fi Alliance had hired Interbrand to create a name that was "a little catchier than 'IEEE 802.11b Direct Sequence'." According to Phil Belanger, a founding member of the Wi-Fi Alliance, the term Wi-Fi was chosen from a list of ten names that Interbrand proposed. Interbrand also created the Wi-Fi logo. The yin-yang Wi-Fi logo indicates the certification of a product for interoperability. The name is often written as WiFi, Wifi, or wifi, but these are not approved by the Wi-Fi Alliance. The name Wi-Fi is not a short form of 'Wireless Fidelity', although the Wi-Fi Alliance did use the advertising slogan "The Standard for Wireless Fidelity" for a short time after the brand name was created, and the Wi-Fi Alliance was also called the "Wireless Fidelity Alliance Inc." in some publications. IEEE is a separate, but related, organization and their website has stated "WiFi is a short name for Wireless Fidelity". The name Wi-Fi was partly chosen because it sounds similar to Hi-Fi, which consumers take to mean high fidelity or high quality. Interbrand hoped consumers would find the name catchy, and that they would assume this wireless protocol has high fidelity because of its name. Other technologies intended for fixed points, including Motorola Canopy, are usually called fixed wireless. Alternative wireless technologies include Zigbee, Z-Wave, Bluetooth and mobile phone standards. To connect to a Wi-Fi LAN, a computer must be equipped with a wireless network interface controller. The combination of a computer and an interface controller is called a station. Stations are identified by one or more MAC addresses.
Wi-Fi nodes often operate in infrastructure mode, in which all communications go through a base station. Ad hoc mode refers to devices communicating directly with each other, without going through an access point. A service set is the set of all the devices associated with a particular Wi-Fi network. Devices in a service set need not be on the same wavebands or channels. A service set can be local, independent, extended, mesh, or a combination. Each service set has an associated identifier, the service set identifier (SSID) of up to 32 bytes, which identifies the network. The SSID is configured within the devices that are part of the network. A basic service set (BSS) is a group of stations that share the same wireless channel, SSID, and other settings and that have wirelessly connected, usually to the same access point. Each BSS is identified by a MAC address called the BSSID. Certification The IEEE does not test equipment for compliance with their standards. The Wi-Fi Alliance was formed in 1999 to establish and enforce standards for interoperability and backward compatibility, and to promote wireless local-area-network technology. The Wi-Fi Alliance restricts the use of the Wi-Fi brand to technologies based on the IEEE 802.11 standards. Manufacturers with membership in the Wi-Fi Alliance, whose products pass the certification process, gain the right to mark those products with the Wi-Fi logo. Specifically, the certification process requires conformance to the IEEE 802.11 radio standards, the WPA and WPA2 security standards, and the EAP authentication standard. Certification may optionally include tests of IEEE 802.11 draft standards, interaction with cellular-phone technology in converged devices, and features relating to security set-up, multimedia, and power-saving. Not every Wi-Fi device is submitted for certification. The lack of Wi-Fi certification does not necessarily imply that a device is incompatible with other Wi-Fi devices. The Wi-Fi Alliance may or may not sanction derivative terms, such as Super Wi-Fi, coined by the US Federal Communications Commission (FCC) to describe proposed networking in the UHF TV band in the US. Versions and generations Equipment frequently supports multiple versions of Wi-Fi. To communicate, devices must use a common Wi-Fi version. The versions differ in the radio wavebands they operate on, the radio bandwidth they occupy, the maximum data rates they can support and other details. Some versions permit the use of multiple antennas, which permits greater speeds as well as reduced interference. Historically, equipment listed the versions of Wi-Fi supported using the names of the IEEE standards. In 2018, the Wi-Fi Alliance introduced simplified Wi-Fi generational numbering to indicate equipment that supports Wi-Fi 4 (802.11n), Wi-Fi 5 (802.11ac) and Wi-Fi 6 (802.11ax). These generations have a high degree of backward compatibility with previous versions. The alliance has stated that the generational level 4, 5, or 6 can be indicated in the user interface when connected, along with the signal strength. The most important standards affecting Wi-Fi are: 802.11a, 802.11b, 802.11g, 802.11n (Wi-Fi 4), 802.11h, 802.11i, 802.11-2007, 802.11-2012, 802.11ac (Wi-Fi 5), 802.11ad, 802.11af, 802.11-2016, 802.11ah, 802.11ai, 802.11aj, 802.11aq, 802.11ax (Wi-Fi 6), 802.11ay. Uses Internet Wi-Fi technology may be used to provide local network and Internet access to devices that are within Wi-Fi range of one or more routers that are connected to the Internet.
The coverage of one or more interconnected access points can extend from an area as small as a few rooms to as large as many square kilometres. Coverage in the larger area may require a group of access points with overlapping coverage. For example, public outdoor Wi-Fi technology has been used successfully in wireless mesh networks in London. An international example is Fon. Wi-Fi provides services in private homes and businesses, as well as in public spaces. Wi-Fi hotspots may be set up either free of charge or commercially, often using a captive portal webpage for access. Organizations, enthusiasts, authorities and businesses, such as airports, hotels, and restaurants, often provide free or paid-use hotspots to attract customers or to provide services that promote business in selected areas. Routers that incorporate a digital subscriber line modem or a cable modem and a Wi-Fi access point are frequently set up in homes and other buildings to provide Internet access for the structure. Similarly, battery-powered routers may include a mobile broadband modem and a Wi-Fi access point. When subscribed to a cellular data carrier, they allow nearby Wi-Fi stations to access the Internet. Many smartphones have a built-in mobile hotspot capability of this sort, though carriers often disable the feature, or charge a separate fee to enable it. Standalone devices such as MiFi- and WiBro-branded devices provide the capability. Some laptops that have a cellular modem card can also act as mobile Internet Wi-Fi access points. Many traditional university campuses in the developed world provide at least partial Wi-Fi coverage. Carnegie Mellon University built the first campus-wide wireless Internet network, called Wireless Andrew, at its Pittsburgh campus in 1993, before Wi-Fi branding existed. Many universities collaborate in providing Wi-Fi access to students and staff through the Eduroam international authentication infrastructure. City-wide In the early 2000s, many cities around the world announced plans to construct citywide Wi-Fi networks. There are many successful examples; in 2004, Mysore (Mysuru) became India's first Wi-Fi-enabled city. A company called WiFiyNet has set up hotspots in Mysore, covering the whole city and a few nearby villages. In 2005, St. Cloud, Florida and Sunnyvale, California, became the first cities in the United States to offer citywide free Wi-Fi (from MetroFi). Minneapolis has generated $1.2 million in profit annually for its provider. In May 2010, the then mayor of London, Boris Johnson, pledged to have London-wide Wi-Fi by 2012. Several boroughs, including Westminster and Islington, already had extensive outdoor Wi-Fi coverage at that point. New York City announced a city-wide campaign to convert old phone booths into digital kiosks in 2014. The project, titled LinkNYC, has created a network of kiosks that serve as public Wi-Fi hotspots, high-definition screens and landlines. Installation of the screens began in late 2015. The city government plans to implement more than seven thousand kiosks over time, eventually making LinkNYC the largest and fastest public, government-operated Wi-Fi network in the world. The UK has planned a similar project across major cities of the country, with the project's first implementation in the London Borough of Camden. Officials in South Korea's capital Seoul were moving to provide free Internet access at more than 10,000 locations around the city, including outdoor public spaces, major streets, and densely populated residential areas.
Seoul was planning to grant leases to KT, LG Telecom, and SK Telecom. The companies were supposed to invest $44 million in the project, which was to be completed in 2015. Geolocation Wi-Fi positioning systems use the known positions of Wi-Fi hotspots to identify a device's location. They are used when GPS is not suitable due to issues such as signal interference or slow satellite acquisition. This includes assisted GPS, urban hotspot databases, and indoor positioning systems. Wi-Fi positioning relies on measuring signal strength (RSSI) and fingerprinting. Parameters such as SSID and MAC address are crucial for identifying access points. The accuracy depends on the nearby access points that are in the database. Signal fluctuations can cause errors, which can be reduced with noise-filtering techniques. For low precision, integrating Wi-Fi data with geographical and time information has been proposed. The Wi-Fi RTT capability introduced in IEEE 802.11mc allows for positioning based on round-trip time measurement, an improvement over the RSSI method. The IEEE 802.11az standard promises further improvements in geolocation accuracy. Motion detection Wi-Fi sensing is used in applications such as motion detection and gesture recognition. Operational principles Wi-Fi stations communicate by sending each other data packets, blocks of data individually sent and delivered over radio on various channels. As with all radio, this is done by the modulation and demodulation of carrier waves. Different versions of Wi-Fi use different techniques: 802.11b uses direct-sequence spread spectrum on a single carrier, whereas 802.11a, Wi-Fi 4, 5 and 6 use orthogonal frequency-division multiplexing. Channels are used in half duplex and can be time-shared by multiple networks. Any packet sent by one computer is locally received by all stations tuned to that channel, even if that information is intended for just one destination. Stations typically ignore information not addressed to them. The use of the same channel also means that the data bandwidth is shared, so, for example, the available throughput to each device is halved when two stations are actively transmitting. A scheme known as carrier-sense multiple access with collision avoidance (CSMA/CA) governs the way stations share channels. With CSMA/CA, stations attempt to avoid collisions by beginning transmission only after the channel is sensed to be idle, but then transmit their packet data in its entirety. CSMA/CA cannot completely prevent collisions, as two stations may sense the channel to be idle at the same time and thus begin transmission simultaneously. A collision happens when a station receives signals from multiple stations on a channel at the same time. This corrupts the transmitted data and can require stations to re-transmit. The lost data and re-transmissions reduce throughput, in some cases severely. Waveband The 802.11 standard provides several distinct radio frequency ranges for use in Wi-Fi communications: 900 MHz, 2.4 GHz, 3.6 GHz, 4.9 GHz, 5 GHz, 6 GHz and 60 GHz bands. Each range is divided into a multitude of channels.
In the standards, channels are numbered at 5 MHz spacing within a band (except in the 60 GHz band, where they are 2.16 GHz apart), and the number refers to the centre frequency of the channel. Although channels are numbered at 5 MHz spacing, transmitters generally occupy at least 20 MHz, and standards allow for neighbouring channels to be bonded together to form a wider channel for higher throughput. Countries apply their own regulations to the allowable channels, allowed users and maximum power levels within these frequency ranges. 802.11b/g/n can use the 2.4 GHz band, operating in the United States under FCC Part 15 rules and regulations. In this frequency band, equipment may occasionally suffer interference from microwave ovens, cordless telephones, USB 3.0 hubs, Bluetooth and other devices. Spectrum assignments and operational limitations are not consistent worldwide: Australia and Europe allow for an additional two channels (12, 13) beyond the 11 permitted in the United States for the 2.4 GHz band, while Japan has three more (12–14). 802.11a/h/j/n/ac/ax can use the 5 GHz U-NII band, which, for much of the world, offers at least 23 non-overlapping 20 MHz channels. This is in contrast to the 2.4 GHz frequency band, where the channels are spaced only 5 MHz apart and adjacent channels overlap. In general, lower frequencies have longer range but less capacity. The 5 GHz bands are absorbed to a greater degree by common building materials than the 2.4 GHz bands and usually give a shorter range. As 802.11 specifications evolved to support higher throughput, the protocols have become much more efficient in their bandwidth use. Additionally, they have gained the ability to aggregate channels together to gain still more throughput where the bandwidth for additional channels is available. 802.11n allows for double radio spectrum bandwidth (40 MHz) per channel compared to 802.11a or 802.11g (20 MHz). 802.11n can be set to limit itself to 20 MHz bandwidth to prevent interference in dense communities. In the 5 GHz band, 20 MHz, 40 MHz, 80 MHz, and 160 MHz channels are permitted with some restrictions, giving much faster connections. Communication stack Wi-Fi is part of the IEEE 802 protocol family. The data is organized into 802.11 frames that are very similar to Ethernet frames at the data link layer, but with extra address fields. MAC addresses are used as network addresses for routing over the LAN. Wi-Fi's MAC and physical layer (PHY) specifications are defined by IEEE 802.11 for modulating and receiving one or more carrier waves to transmit the data in the infrared, and 2.4, 3.6, 5, 6, or 60 GHz frequency bands. They are created and maintained by the IEEE LAN/MAN Standards Committee (IEEE 802). The base version of the standard was released in 1997 and has had many subsequent amendments. The standard and amendments provide the basis for wireless network products using the Wi-Fi brand. While each amendment is officially revoked when incorporated in the latest version of the standard, the corporate world tends to market to the revisions because they concisely denote capabilities of their products. As a result, in the marketplace, each revision tends to become its own standard. In addition to 802.11, the IEEE 802 protocol family has specific provisions for Wi-Fi. These are required because Ethernet's cable-based media are not usually shared, whereas with wireless all transmissions are received by all stations within the range that employ that radio channel.
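As a worked illustration of the 2.4 GHz channel numbering described earlier in this section, the short Python sketch below computes channel centre frequencies and applies the rule of thumb that 20 MHz-wide transmissions on channels fewer than five numbers apart overlap. The helper names are invented for this example, and regional channel availability is ignored.

```python
def centre_frequency_mhz(channel):
    """Centre frequency of a 2.4 GHz band channel (channels 1-14)."""
    if channel == 14:          # special case used in Japan
        return 2484
    return 2407 + 5 * channel  # channels are numbered at 5 MHz spacing

def overlaps(ch_a, ch_b, width_mhz=20):
    """True if two transmissions of the given width overlap in spectrum."""
    return abs(centre_frequency_mhz(ch_a) - centre_frequency_mhz(ch_b)) < width_mhz

print(centre_frequency_mhz(1))    # 2412 MHz
print(centre_frequency_mhz(6))    # 2437 MHz
print(overlaps(1, 6))             # False: centres 25 MHz apart, no overlap
print(overlaps(2, 5))             # True: centres only 15 MHz apart
```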
While Ethernet has essentially negligible error rates, wireless communication media are subject to significant interference. Accurate transmission is therefore not guaranteed, and delivery is a best-effort mechanism. Because of this, for Wi-Fi, the Logical Link Control (LLC) specified by IEEE 802.2 employs Wi-Fi's media access control (MAC) protocols to manage retries without relying on higher levels of the protocol stack. For internetworking purposes, Wi-Fi is usually layered as a link layer below the internet layer of the Internet Protocol. This means that nodes have an associated internet address and, with suitable connectivity, this allows full Internet access. Modes Infrastructure In infrastructure mode, which is the most common mode used, all communications go through a base station. For communications within the network, this introduces an extra use of the airwaves but has the advantage that any two stations that can communicate with the base station can also communicate through the base station, which limits issues associated with the hidden node problem and simplifies the protocols. Ad hoc and Wi-Fi direct Wi-Fi also allows communications directly from one computer to another without an access point intermediary. This is called ad hoc Wi-Fi transmission. Different types of ad hoc networks exist. In the simplest case, network nodes must talk directly to each other. In more complex protocols, nodes may forward packets, and nodes keep track of how to reach other nodes, even if they move around. Ad hoc mode was first described by Chai Keong Toh in his 1996 patent of wireless ad hoc routing, implemented on Lucent WaveLAN 802.11a wireless on IBM ThinkPads in a multi-node scenario spanning a region of over a mile. The success was recorded in Mobile Computing magazine (1999) and later published formally in IEEE Transactions on Wireless Communications, 2002 and ACM SIGMETRICS Performance Evaluation Review, 2001. This wireless ad hoc network mode has proven popular with multiplayer video games on handheld game consoles, such as the Nintendo DS and PlayStation Portable. It is also popular on digital cameras and other consumer electronics devices. Some devices can also share their Internet connection using ad hoc mode, becoming hotspots or virtual routers. Similarly, the Wi-Fi Alliance promotes the specification Wi-Fi Direct for file transfers and media sharing through a new discovery and security methodology. Wi-Fi Direct launched in October 2010. Another mode of direct communication over Wi-Fi is Tunneled Direct Link Setup (TDLS), which enables two devices on the same Wi-Fi network to communicate directly, instead of via the access point. Multiple access points An Extended Service Set may be formed by deploying multiple access points that are configured with the same SSID and security settings. Wi-Fi client devices typically connect to the access point that can provide the strongest signal within that service set. Increasing the number of Wi-Fi access points for a network provides redundancy, better range, support for fast roaming, and increased overall network capacity by using more channels or by defining smaller cells. Except for the smallest implementations (such as home or small office networks), Wi-Fi implementations have moved toward "thin" access points, with more of the network intelligence housed in a centralized network appliance, relegating individual access points to the role of "dumb" transceivers. Outdoor applications may use mesh topologies.
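As a simple sketch of the client behaviour just described, namely picking the strongest access point among those advertising the same SSID in an extended service set, the following Python fragment is illustrative only: real clients also weigh band, load and roaming thresholds, and the scan-result data used here is invented.

```python
# Scan results: (BSSID, SSID, received signal strength in dBm); values are made up.
scan_results = [
    ("aa:bb:cc:00:00:01", "OfficeNet", -71),
    ("aa:bb:cc:00:00:02", "OfficeNet", -54),
    ("aa:bb:cc:00:00:03", "GuestNet",  -40),
]

def pick_access_point(results, ssid):
    """Return the BSSID of the strongest AP advertising the wanted SSID."""
    candidates = [r for r in results if r[1] == ssid]
    if not candidates:
        return None
    # Higher (less negative) dBm means a stronger received signal.
    return max(candidates, key=lambda r: r[2])[0]

print(pick_access_point(scan_results, "OfficeNet"))  # aa:bb:cc:00:00:02
```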
Performance Wi-Fi operational range depends on factors such as the frequency band, radio power output, receiver sensitivity, antenna gain, and antenna type, as well as the modulation technique. Also, the propagation characteristics of the signals can have a big impact. At longer distances, and with greater signal absorption, speed is usually reduced. Transmitter power Compared to cell phones and similar technology, Wi-Fi transmitters are low-power devices. In general, the maximum amount of power that a Wi-Fi device can transmit is limited by local regulations, such as FCC Part 15 in the US. Equivalent isotropically radiated power (EIRP) in the European Union is limited to 20 dBm (100 mW). To reach requirements for wireless LAN applications, Wi-Fi has higher power consumption compared to some other standards designed to support wireless personal area network (PAN) applications. For example, Bluetooth provides a much shorter propagation range, between 1 and 100 metres, and so in general has a lower power consumption. Other low-power technologies such as Zigbee have fairly long range, but much lower data rate. The high power consumption of Wi-Fi makes battery life in some mobile devices a concern. Antenna An access point compliant with either 802.11b or 802.11g, using the stock omnidirectional antenna, might have a range of about 100 m (330 ft). The same radio with an external semi-parabolic antenna (15 dB gain), with a similarly equipped receiver at the far end, might have a range of over 20 miles. A higher gain rating (dBi) indicates further deviation (generally toward the horizontal) from a theoretical, perfect isotropic radiator, and therefore the antenna can project or accept a usable signal further in particular directions, as compared to a similar output power on a more isotropic antenna. For example, an 8 dBi antenna used with a 100 mW driver has a similar horizontal range to a 6 dBi antenna being driven at 500 mW. This assumes that radiation in the vertical plane is lost; this may not be the case in some situations, especially in large buildings or within a waveguide. In the above example, a directional waveguide could cause the low-power 6 dBi antenna to project much further in a single direction than the 8 dBi antenna, which is not in a waveguide, even if they are both driven at 100 mW. On wireless routers with detachable antennas, it is possible to improve range by fitting upgraded antennas that provide a higher gain in particular directions. Outdoor ranges can be improved to many kilometres through the use of high-gain directional antennas at the router and remote device(s). MIMO (multiple-input and multiple-output) Wi-Fi 4 and higher standards allow devices to have multiple antennas on transmitters and receivers. Multiple antennas enable the equipment to exploit multipath propagation on the same frequency bands, giving much higher speeds and longer range. Wi-Fi 4 can more than double the range over previous standards. The Wi-Fi 5 standard uses the 5 GHz band exclusively, and is capable of multi-station WLAN throughput of at least 1 gigabit per second and a single-station throughput of at least 500 Mbit/s. As of the first quarter of 2016, the Wi-Fi Alliance certifies devices compliant with the 802.11ac standard as "Wi-Fi CERTIFIED ac". This standard uses several signal processing techniques, such as multi-user MIMO, 4×4 spatial multiplexing streams, and wide channel bandwidth (160 MHz), to achieve its gigabit throughput.
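To make the power figures in this section concrete, the sketch below converts between milliwatts and dBm and adds antenna gain to obtain EIRP, the quantity that regulations such as the EU's 20 dBm limit constrain. It is a simplified link-budget fragment: cable losses and regulatory nuances are ignored, and the function names are invented for this illustration.

```python
import math

def mw_to_dbm(power_mw):
    """Convert transmit power from milliwatts to dBm."""
    return 10 * math.log10(power_mw)

def eirp_dbm(power_mw, antenna_gain_dbi, cable_loss_db=0.0):
    """Effective isotropically radiated power in dBm."""
    return mw_to_dbm(power_mw) + antenna_gain_dbi - cable_loss_db

print(mw_to_dbm(100))        # 20.0 dBm: 100 mW, the EU EIRP limit with a 0 dBi antenna
print(eirp_dbm(100, 8))      # 28.0 dBm: 100 mW driver with an 8 dBi antenna
print(eirp_dbm(500, 6))      # ~33.0 dBm: 500 mW driver with a 6 dBi antenna
```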
According to a study by IHS Technology, 70% of all access point sales revenue in the first quarter of 2016 came from 802.11ac devices. Radio propagation With Wi-Fi signals, line-of-sight propagation usually works best, but signals can be transmitted, absorbed, reflected, refracted and diffracted, and can fade up and down, through and around structures, both man-made and natural. Wi-Fi signals are very strongly affected by metallic structures (including rebar in concrete, low-e coatings in glazing), rock structures (including marble) and water (such as that found in vegetation). Due to the complex nature of radio propagation at typical Wi-Fi frequencies, particularly around trees and buildings, algorithms can only approximately predict Wi-Fi signal strength for any given area in relation to a transmitter. This effect does not apply equally to long-range Wi-Fi, since longer links typically operate from towers that transmit above the surrounding foliage. Mobile use of Wi-Fi over wider ranges is limited, for instance, to uses such as in an automobile moving from one hotspot to another. Other wireless technologies are more suitable for communicating with moving vehicles. Distance records Distance records (using non-standard devices) include 382 km (237 mi) in June 2007, held by Ermanno Pietrosemoli and EsLaRed of Venezuela, transferring about 3 MB of data between the mountain-tops of El Águila and Platillon. The Swedish National Space Agency transferred data 420 km (260 mi), using 6 watt amplifiers to reach an overhead stratospheric balloon. Interference Wi-Fi connections can be blocked, or the Internet speed lowered, by other devices in the same area. Wi-Fi protocols are designed to share the wavebands reasonably fairly, and this often works with little to no disruption. To minimize collisions with Wi-Fi and non-Wi-Fi devices, Wi-Fi employs carrier-sense multiple access with collision avoidance (CSMA/CA), where transmitters listen before transmitting and delay transmission of packets if they detect that other devices are active on the channel, or if noise is detected from adjacent channels or non-Wi-Fi sources. Nevertheless, Wi-Fi networks are still susceptible to the hidden node and exposed node problems. A standard-speed Wi-Fi signal occupies five channels in the 2.4 GHz band. Interference can be caused by overlapping channels. Any two channel numbers that differ by five or more, such as 2 and 7, do not overlap (no adjacent-channel interference). The oft-repeated adage that channels 1, 6, and 11 are the only non-overlapping channels is, therefore, not accurate. Channels 1, 6, and 11 are the only group of three non-overlapping channels in North America. However, whether the overlap is significant depends on physical spacing. Channels that are four apart interfere a negligible amount, much less than reusing channels (which causes co-channel interference), provided transmitters are at least a few metres apart. In Europe and Japan, where channel 13 is available, using channels 1, 5, 9, and 13 for 802.11g and 802.11n is viable and recommended. However, many 2.4 GHz 802.11b and 802.11g access points default to the same channel on initial startup, contributing to congestion on certain channels. Wi-Fi pollution, or an excessive number of access points in the area, can prevent access and interfere with other devices' use of other access points, as well as decreasing the signal-to-noise ratio (SNR) between access points. These issues can become a problem in high-density areas, such as large apartment complexes or office buildings with many Wi-Fi access points.
Other devices use the 2.4 GHz band: microwave ovens, ISM band devices, security cameras, Zigbee devices, Bluetooth devices, video senders, cordless phones, baby monitors, and, in some countries, amateur radio, all of which can cause significant additional interference. It is also an issue when municipalities or other large entities (such as universities) seek to provide large-area coverage. On some 5 GHz bands, interference from radar systems can occur in some places. Base stations that support those bands employ Dynamic Frequency Selection, which listens for radar and, if it is found, does not permit a network on that band. These bands can be used by low-power transmitters without a licence, and with few restrictions. However, while unintended interference is common, users that have been found to cause deliberate interference (particularly for attempting to locally monopolize these bands for commercial purposes) have been issued large fines. Throughput Various layer-2 variants of IEEE 802.11 have different characteristics. Across all flavours of 802.11, maximum achievable throughputs are either given based on measurements under ideal conditions or in the layer-2 data rates. This, however, does not apply to typical deployments in which data are transferred between two endpoints, of which at least one is typically connected to a wired infrastructure and the other is connected via a wireless link. This means that, typically, data frames pass an 802.11 (WLAN) medium and are converted to 802.3 (Ethernet) or vice versa. Due to the difference in the frame (header) lengths of these two media, the packet size of an application determines the speed of the data transfer. This means that an application that uses small packets (e.g. VoIP) creates a data flow with high overhead traffic (low goodput). Other factors that contribute to the overall application data rate are the speed with which the application transmits the packets (i.e. the data rate) and the energy with which the wireless signal is received. The latter is determined by distance and by the configured output power of the communicating devices. The same references apply to the attached throughput graphs, which show measurements of UDP throughput. Each point represents the average throughput of 25 measurements (the error bars are present but barely visible due to the small variation), with a specific packet size (small or large) and a specific data rate (10 kbit/s – 100 Mbit/s). Markers for traffic profiles of common applications are included as well. This text and the measurements do not cover packet errors, but information about them can be found in the above references. The table below shows the maximum achievable (application-specific) UDP throughput in the same scenarios (same references again) with various WLAN (802.11) flavours. The measurement hosts were 25 metres apart; loss is again ignored. Hardware Wi-Fi allows wireless deployment of local area networks (LANs). Also, spaces where cables cannot be run, such as outdoor areas and historical buildings, can host wireless LANs. However, building walls of certain materials, such as stone with high metal content, can block Wi-Fi signals. A Wi-Fi device is a short-range wireless device. Wi-Fi devices are fabricated on RF CMOS integrated circuit (RF circuit) chips. Since the early 2000s, manufacturers have been building wireless network adapters into most laptops.
The price of chipsets for Wi-Fi continues to drop, making it an economical networking option included in ever more devices. Different competitive brands of access points and client network interfaces can interoperate at a basic level of service. Products designated as "Wi-Fi Certified" by the Wi-Fi Alliance are backward compatible. Unlike mobile phones, any standard Wi-Fi device works anywhere in the world. Access point A wireless access point (WAP) connects a group of wireless devices to an adjacent wired LAN. An access point resembles a network hub, relaying data between connected wireless devices in addition to a (usually) single connected wired device, most often an Ethernet hub or switch, allowing wireless devices to communicate with other wired devices. Wireless adapter Wireless adapters allow devices to connect to a wireless network. These adapters connect to devices using various external or internal interconnects such as mini PCIe (mPCIe, M.2), USB, ExpressCard and previously PCI, Cardbus, and PC Card. As of 2010, most newer laptop computers come equipped with built-in internal adapters. Router Wireless routers integrate a wireless access point, an Ethernet switch, and internal router firmware that provides IP routing, NAT, and DNS forwarding through an integrated WAN interface. A wireless router allows wired and wireless Ethernet LAN devices to connect to a (usually) single WAN device such as a cable modem, DSL modem, or optical modem. A wireless router allows all three components, mainly the access point and router, to be configured through one central utility. This utility is usually an integrated web server that is accessible to wired and wireless LAN clients and often optionally to WAN clients. This utility may also be an application that is run on a computer, as is the case with Apple's AirPort, which is managed with the AirPort Utility on macOS and iOS. Bridge Wireless network bridges can act to connect two networks to form a single network at the data-link layer over Wi-Fi. The main standard is the wireless distribution system (WDS). Wireless bridging can connect a wired network to a wireless network. A bridge differs from an access point: an access point typically connects wireless devices to one wired network. Two wireless bridge devices may be used to connect two wired networks over a wireless link. This is useful in situations where a wired connection may be unavailable, such as between two separate homes, or for devices that have wired but no wireless networking capability, such as consumer entertainment devices. Alternatively, a wireless bridge can be used to enable a device with a wired connection to operate at a wireless networking standard faster than the one supported by the device's own wireless connectivity feature (external dongle or inbuilt); for example, it can provide Wireless-N speeds, up to the maximum supported speed on the wired Ethernet ports of both the bridge and the connected devices, including the wireless access point, for a device that only supports Wireless-G. A dual-band wireless bridge can also be used to enable 5 GHz wireless network operation on a device that only supports 2.4 GHz wireless and has a wired Ethernet port. Repeater Wireless range-extenders or wireless repeaters can extend the range of an existing wireless network. Strategically placed range-extenders can elongate a signal area or allow the signal area to reach around barriers, such as those created by L-shaped corridors.
Wireless devices connected through repeaters suffer from an increased latency for each hop, and there may be a reduction in the maximum available data throughput. In addition, the effect of additional users on a network employing wireless range-extenders is to consume the available bandwidth faster than would be the case where a single user migrates around the network. For this reason, wireless range-extenders work best in networks with low traffic throughput requirements, such as when a single user with a Wi-Fi-equipped tablet migrates around the combined extended and non-extended portions of the total connected network. Also, a wireless device connected to any of the repeaters in the chain has data throughput limited by the "weakest link" in the chain between the connection origin and connection end. Networks using wireless extenders are more prone to degradation from interference from neighbouring access points that border portions of the extended network and that happen to occupy the same channel as the extended network. Embedded systems The Wi-Fi Protected Setup security standard allows embedded devices with a limited graphical user interface to connect to the Internet with ease. Wi-Fi Protected Setup has two configurations: the push-button configuration and the PIN configuration. These embedded devices, typically low-power, battery-operated systems, form part of the Internet of things. Several Wi-Fi manufacturers design chips and modules for embedded Wi-Fi, such as GainSpan. In recent years, embedded Wi-Fi modules have become available that incorporate a real-time operating system and provide a simple means of wirelessly enabling any device that can communicate via a serial port. This allows the design of simple monitoring devices. An example is a portable ECG device monitoring a patient at home. This Wi-Fi-enabled device can communicate via the Internet. These Wi-Fi modules are designed by OEMs so that implementers need only minimal Wi-Fi knowledge to provide Wi-Fi connectivity for their products. In June 2014, Texas Instruments introduced the first ARM Cortex-M4 microcontroller with an onboard dedicated Wi-Fi MCU, the SimpleLink CC3200. This makes it possible to build embedded systems with Wi-Fi connectivity as single-chip devices, which reduces their cost and minimum size, making it more practical to build wireless-networked controllers into inexpensive ordinary objects. Security The main issue with wireless network security is its simplified access to the network compared to traditional wired networks such as Ethernet. With wired networking, one must either gain access to a building (physically connecting into the internal network) or break through an external firewall. To access Wi-Fi, one must merely be within the range of the Wi-Fi network. Most business networks protect sensitive data and systems by attempting to disallow external access. Enabling wireless connectivity reduces security if the network uses inadequate or no encryption. An attacker who has gained access to a Wi-Fi network router can initiate a DNS spoofing attack against any other user of the network by forging a response before the queried DNS server has a chance to reply. Securing methods A common measure to deter unauthorized users involves hiding the access point's name by disabling the SSID broadcast.
While effective against the casual user, it is ineffective as a security method because the SSID is broadcast in the clear in response to a client SSID query. Another method is to allow only computers with known MAC addresses to join the network, but determined eavesdroppers may be able to join the network by spoofing an authorized address. Wired Equivalent Privacy (WEP) encryption was designed to protect against casual snooping, but it is no longer considered secure. Tools such as AirSnort or Aircrack-ng can quickly recover WEP encryption keys. Because of WEP's weakness, the Wi-Fi Alliance approved Wi-Fi Protected Access (WPA), which uses TKIP. WPA was specifically designed to work with older equipment, usually through a firmware upgrade. Though more secure than WEP, WPA has known vulnerabilities. The more secure WPA2, using the Advanced Encryption Standard, was introduced in 2004 and is supported by most new Wi-Fi devices. WPA2 is fully compatible with WPA. In 2017, a flaw in the WPA2 protocol was discovered, allowing a key replay attack, known as KRACK. A flaw in a feature added to Wi-Fi in 2007, called Wi-Fi Protected Setup (WPS), let WPA and WPA2 security be bypassed. The only remedy was to turn off Wi-Fi Protected Setup, which is not always possible. Virtual private networks can be used to improve the confidentiality of data carried through Wi-Fi networks, especially public Wi-Fi networks. A URI using the WIFI scheme can specify the SSID, encryption type, password/passphrase, and whether the SSID is hidden, so users can follow links from QR codes, for instance, to join networks without having to manually enter the data. A MeCard-like format is supported by Android and iOS 11+.
Common format: WIFI:S:<SSID>;T:<WEP|WPA|blank>;P:<PASSWORD>;H:<true|false|blank>;
Sample: WIFI:S:MySSID;T:WPA;P:MyPassW0rd;;
Data security risks Wi-Fi access points typically default to an encryption-free (open) mode. Novice users benefit from a zero-configuration device that works out of the box, but this default does not enable any wireless security, providing open wireless access to a LAN. To turn security on requires the user to configure the device, usually via a software graphical user interface (GUI). On unencrypted Wi-Fi networks, connecting devices can monitor and record data (including personal information). Such networks can only be secured by using other means of protection, such as a VPN, or Hypertext Transfer Protocol over Transport Layer Security (HTTPS). The older wireless-encryption standard, Wired Equivalent Privacy (WEP), has been shown to be easily breakable even when correctly configured. Wi-Fi Protected Access (WPA) encryption, which became available in devices in 2003, aimed to solve this problem. Wi-Fi Protected Access 2 (WPA2), ratified in 2004, is considered secure, provided a strong passphrase is used. The 2003 version of WPA has not been considered secure since it was superseded by WPA2 in 2004. In 2018, WPA3 was announced as a replacement for WPA2, increasing security; it rolled out on 26 June of that year. Piggybacking Piggybacking refers to access to a wireless Internet connection by bringing one's computer within the range of another's wireless connection, and using that service without the subscriber's explicit permission or knowledge. During the early popular adoption of 802.11, providing open access points for anyone within range to use was encouraged to cultivate wireless community networks, particularly since people on average use only a fraction of their downstream bandwidth at any given time.
Recreational logging and mapping of other people's access points have become known as wardriving. Indeed, many access points are intentionally installed without security turned on so that they can be used as a free service. Providing access to one's Internet connection in this fashion may breach the Terms of Service or contract with the ISP. These activities do not result in sanctions in most jurisdictions; however, legislation and case law differ considerably across the world. A proposal to leave graffiti describing available services was called warchalking. Piggybacking often occurs unintentionally – a technically unfamiliar user might not change the default "unsecured" settings to their access point and operating systems can be configured to connect automatically to any available wireless network. A user who happens to start up a laptop in the vicinity of an access point may find the computer has joined the network without any visible indication. Moreover, a user intending to join one network may instead end up on another one if the latter has a stronger signal. In combination with automatic discovery of other network resources (see DHCP and Zeroconf) this could lead wireless users to send sensitive data to the wrong middle-man when seeking a destination (see man-in-the-middle attack). For example, a user could inadvertently use an unsecured network to log into a website, thereby making the login credentials available to anyone listening, if the website uses an insecure protocol such as plain HTTP without TLS. On an unsecured access point, an unauthorized user can obtain security information (factory preset passphrase or Wi-Fi Protected Setup PIN) from a label on a wireless access point and use this information (or connect by the Wi-Fi Protected Setup pushbutton method) to commit unauthorized or unlawful activities. Societal aspects Wireless Internet access has become much more embedded in society. It has thus changed how the society functions in many ways. Influence on developing countries over half the world did not have access to the Internet, prominently rural areas in developing nations. Technology that has been implemented in more developed nations is often costly and energy inefficient. This has led to developing nations using more low-tech networks, frequently implementing renewable power sources that can solely be maintained through solar power, creating a network that is resistant to disruptions such as power outages. For instance, in 2007, a network between Cabo Pantoja and Iquitos in Peru was erected in which all equipment is powered only by solar panels. These long-range Wi-Fi networks have two main uses: offer Internet access to populations in isolated villages, and to provide healthcare to isolated communities. In the case of the latter example, it connects the central hospital in Iquitos to 15 medical outposts which are intended for remote diagnosis. Work habits Access to Wi-Fi in public spaces such as cafes or parks allows people, in particular freelancers, to work remotely. While the accessibility of Wi-Fi is the strongest factor when choosing a place to work (75% of people would choose a place that provides Wi-Fi over one that does not), other factors influence the choice of specific hotspots. These vary from the accessibility of other resources, like books, the location of the workplace, and the social aspect of meeting other people in the same place. 
Moreover, the increase of people working from public places results in more customers for local businesses, thus providing an economic stimulus to the area. Additionally, in the same study it has been noted that wireless connection provides more freedom of movement while working. Whether working at home or from the office, it allows movement between different rooms or areas. In some offices (notably Cisco offices in New York), employees do not have assigned desks but can work from any office by connecting their laptops to a Wi-Fi hotspot. Housing The Internet has become an integral part of living. 81.9% of American households have Internet access. Additionally, 89% of American households with broadband connect via wireless technologies. 72.9% of American households have Wi-Fi. Wi-Fi networks have also affected how the interiors of homes and hotels are arranged. For instance, architects have described that their clients no longer want only one room as their home office, but would like to work near the fireplace or to have the possibility of working in different rooms. This contradicts architects' pre-existing ideas about how the rooms they design will be used. Additionally, some hotels have noted that guests prefer to stay in certain rooms because they receive a stronger Wi-Fi signal. Health concerns The World Health Organization (WHO) says, "no health effects are expected from exposure to RF fields from base stations and wireless networks", but notes that they promote research into effects from other RF sources. Although the International Agency for Research on Cancer (IARC) has classified radiofrequency electromagnetic fields as possibly carcinogenic to humans, Group 2B (a category used when "a causal association is considered credible, but when chance, bias or confounding cannot be ruled out with reasonable confidence"), this classification was based on risks associated with wireless phone use rather than Wi-Fi networks. The United Kingdom's Health Protection Agency reported in 2007 that exposure to Wi-Fi for a year results in the "same amount of radiation from a 20-minute mobile phone call". A review of studies involving 725 people who claimed electromagnetic hypersensitivity "...suggests that 'electromagnetic hypersensitivity' is unrelated to the presence of an EMF, although more research into this phenomenon is required." Alternatives Several other wireless technologies provide alternatives to Wi-Fi for different use cases:
Bluetooth Low Energy, a low-power variant of Bluetooth
Bluetooth, a short-distance network
Cellular networks, used by smartphones
LoRa, for long-range wireless with a low data rate
NearLink, a short-range wireless technology standard
WiMAX, for providing long-range wireless Internet connectivity
Zigbee, a low-power, low-data-rate, short-distance communication protocol
Some alternatives are "no new wires", re-using existing cable:
G.hn, which uses existing home wiring, such as phone and power lines
Several wired technologies for computer networking provide viable alternatives to Wi-Fi:
Ethernet over twisted pair
Technology
Networks
null
63978
https://en.wikipedia.org/wiki/Fresnel%20lens
Fresnel lens
A Fresnel lens is a type of composite compact lens which reduces the amount of material required compared to a conventional lens by dividing the lens into a set of concentric annular sections. The simpler dioptric (purely refractive) form of the lens was first proposed by Georges-Louis Leclerc, Comte de Buffon, and independently reinvented by the French physicist Augustin-Jean Fresnel (1788–1827) for use in lighthouses. The catadioptric (combining refraction and reflection) form of the lens, entirely invented by Fresnel, has outer prismatic elements that use total internal reflection as well as refraction to capture more oblique light from the light source and add it to the beam, making it visible at greater distances. The design allows the construction of lenses of large aperture and short focal length without the mass and volume of material that would be required by a lens of conventional design. A Fresnel lens can be made much thinner than a comparable conventional lens, in some cases taking the form of a flat sheet. Because of its use in lighthouses, it has been called "the invention that saved a million ships". History Forerunners The first person to focus a lighthouse beam using a lens was apparently the London glass-cutter Thomas Rogers, who proposed the idea to Trinity House in 1788. The first Rogers lenses, 53 cm in diameter and 14 cm thick at the center, were installed at the Old Lower Lighthouse at Portland Bill in 1789. Behind each lamp was a back-coated spherical glass mirror, which reflected rear radiation back through the lamp and into the lens. Further samples were installed at Howth Baily, North Foreland, and at least four other locations by 1804. But much of the light was wasted by absorption in the glass. In 1748, Georges-Louis Leclerc, Comte de Buffon was the first to replace a convex lens with a series of concentric annular prisms, ground as steps in a single piece of glass, to reduce weight and absorption. In 1790 (although secondary sources give the date as 1773 or 1788), the Marquis de Condorcet suggested that it would be easier to make the annular sections separately and assemble them on a frame; but even that was impractical at the time. These designs were intended not for lighthouses, but for burning glasses. David Brewster, however, proposed a system similar to Condorcet's in 1811, and by 1820 was advocating its use in British lighthouses. Publication and refinement The French Commission des Phares (Commission of Lighthouses) was established by Napoleon in 1811, and placed under the authority of French physicist Augustin-Jean Fresnel's employer, the Corps of Bridges and Roads. As the members of the commission were otherwise occupied, it achieved little in its early years. However, on 21 June 1819—three months after winning the physics prize of the Academy of Sciences for his celebrated memoir on diffraction—Fresnel was "temporarily" seconded to the commission on the recommendation of François Arago (a member since 1813), to review possible improvements in lighthouse illumination. By the end of August 1819, unaware of the Buffon-Condorcet-Brewster proposal, Fresnel made his first presentation to the commission, recommending what he called lentilles à échelons ('lenses by steps') to replace the reflectors then in use, which reflected only about half of the incident light. Another report by Fresnel, dated 29 August 1819 (Fresnel, 1866–70, vol. 3, pp. 15–21), concerns tests on reflectors, and does not mention stepped lenses except in an unrelated sketch on the last page of the manuscript. 
The minutes of the meetings of the Commission go back only to 1824, when Fresnel himself took over as Secretary. Thus the exact date on which Fresnel formally recommended lentilles à échelons is unknown. Much to Fresnel's embarrassment, one of the assembled commissioners, Jacques Charles, recalled Buffon's suggestion. However, whereas Buffon's version was biconvex and in one piece, Fresnel's was plano-convex and made of multiple prisms for easier construction. With an official budget of 500 francs, Fresnel approached three manufacturers. The third, François Soleil, found a way to remove defects by reheating and remolding the glass. Arago assisted Fresnel with the design of a modified Argand lamp with concentric wicks (a concept that Fresnel attributed to Count Rumford), and accidentally discovered that fish glue was heat-resistant, making it suitable for use in the lens. The prototype, finished in March 1820, had a square lens panel 55 cm on a side, containing 97 polygonal (not annular) prisms—and so impressed the Commission that Fresnel was asked for a full eight-panel version. This model, completed a year later in spite of insufficient funding, had panels 76 cm square. In a public spectacle on the evening of 13 April 1821, it was demonstrated by comparison with the most recent reflectors, which it suddenly rendered obsolete. Soon after this demonstration, Fresnel published the idea that light, including apparently unpolarized light, consists exclusively of transverse waves, and went on to consider the implications for double refraction and partial reflection. Fresnel acknowledged the British lenses and Buffon's invention in a memoir read on 29 July 1822 and printed in the same year. The date of that memoir may be the source of the claim that Fresnel's lighthouse advocacy began two years later than Brewster's; but the text makes it clear that Fresnel's involvement began no later than 1819. Fresnel's next lens was a rotating apparatus with eight "bull's-eye" panels, made in annular arcs by Saint-Gobain, giving eight rotating beams—to be seen by mariners as a periodic flash. Above and behind each main panel was a smaller, sloping bull's-eye panel of trapezoidal outline with trapezoidal elements. This refracted the light to a sloping plane mirror, which then reflected it horizontally, 7 degrees ahead of the main beam, increasing the duration of the flash. Below the main panels were 128 small mirrors arranged in four rings, stacked like the slats of a louver or Venetian blind. Each ring, shaped like a frustum of a cone, reflected the light to the horizon, giving a fainter steady light between the flashes. The official test, conducted on the unfinished Arc de Triomphe on 20 August 1822, was witnessed by the Commission—and by Louis XVIII and his entourage—from a distance. The apparatus was stored at Bordeaux for the winter, and then reassembled at Cordouan Lighthouse under Fresnel's supervision—in part by Fresnel's own hands. On 25 July 1823, the world's first lighthouse Fresnel lens was lit. As expected, the light was visible all the way to the horizon. The day before the test of the Cordouan lens in Paris, a committee of the Academy of Sciences reported on Fresnel's memoir and supplements on double refraction—which, although less well known to modern readers than his earlier work on diffraction, struck a more decisive blow for the wave theory of light. 
Between the test and the reassembly at Cordouan, Fresnel submitted his papers on photoelasticity (16 September 1822), elliptical and circular polarization and optical rotation (9 December), and partial reflection and total internal reflection (7 January 1823), essentially completing his reconstruction of physical optics on the transverse wave hypothesis. Shortly after the Cordouan lens was lit, Fresnel started coughing up blood. In May 1824, Fresnel was promoted to Secretary of the Commission des Phares, becoming the first member of that body to draw a salary, albeit in the concurrent role of Engineer-in-Chief. Late that year, being increasingly ill, he curtailed his fundamental research and resigned his seasonal job as an examiner at the École Polytechnique, in order to save his remaining time and energy for his lighthouse work. In the same year he designed the first fixed lens—for spreading light evenly around the horizon while minimizing waste above or below. Ideally the curved refracting surfaces would be segments of toroids about a common vertical axis, so that the dioptric panel would look like a cylindrical drum. If this was supplemented by reflecting (catoptric) rings above and below the refracting (dioptric) parts, the entire apparatus would look like a beehive. The second Fresnel lens to enter service was indeed a fixed lens, of third order, installed at Dunkirk by 1 February 1825. However, due to the difficulty of fabricating large toroidal prisms, this apparatus had a 16-sided polygonal plan. In 1825 Fresnel extended his fixed-lens design by adding a rotating array outside the fixed array. Each panel of the rotating array was to refract part of the fixed light from a horizontal fan into a narrow beam. Also in 1825, Fresnel unveiled the Carte des Phares ('lighthouse map'), calling for a system of 51 lighthouses plus smaller harbor lights, in a hierarchy of lens sizes called "orders" (the first being the largest), with different characteristics to facilitate recognition: a constant light (from a fixed lens), one flash per minute (from a rotating lens with eight panels), and two per minute (16 panels). In late 1825, to reduce the loss of light in the reflecting elements, Fresnel proposed to replace each mirror with a catadioptric prism, through which the light would travel by refraction through the first surface, then total internal reflection off the second surface, then refraction through the third surface. The result was the lighthouse lens as we now know it. In 1826 he assembled a small model for use on the Canal Saint-Martin, but he did not live to see a full-sized version: he died on 14 July 1827, at the age of 39. After Fresnel The first stage of the development of lighthouse lenses after the death of Augustin Fresnel consisted in the implementation of his designs. This was driven in part by his younger brother Léonor—who, like Augustin, was trained as a civil engineer but, unlike Augustin, had a strong aptitude for management. Léonor entered the service of the Lighthouse Commission in 1825, and went on to succeed Augustin as Secretary. The first fixed lens to be constructed with toroidal prisms was a first-order apparatus designed by the Scottish engineer Alan Stevenson under the guidance of Léonor Fresnel, and fabricated by Isaac Cookson & Co. using French glass; it entered service at the Isle of May, Scotland, on 22 September 1836. 
The first large catadioptric lenses were made in 1842 for the lighthouses at Gravelines and Île Vierge, France; these were fixed third-order lenses whose catadioptric rings (made in segments) were one metre in diameter. Stevenson's first-order Skerryvore lens, lit in 1844, was only partly catadioptric; it was similar to the Cordouan lens except that the lower slats were replaced by French-made catadioptric prisms, while mirrors were retained at the top. The first fully catadioptric first-order lens, installed at Pointe d'Ailly in 1852, also gave eight rotating beams plus a fixed light at the bottom; but its top section had eight catadioptric panels focusing the light about 4 degrees ahead of the main beams, in order to lengthen the flashes. The first fully catadioptric lens with purely revolving beams—also of first order—was installed at Saint-Clément-des-Baleines in 1854, and marked the completion of Augustin Fresnel's original Carte des Phares. Thomas Stevenson (younger brother of Alan) went a step beyond Fresnel with his "holophotal" lens, which focused the light radiated by the lamp in nearly all directions, forward or backward, into a single beam. The first version, described in 1849, consisted of a standard Fresnel bull's-eye lens, a paraboloidal reflector, and a rear hemispherical reflector (functionally equivalent to the Rogers mirror of 60 years earlier, except that it subtended a whole hemisphere). Light radiated into the forward hemisphere but missing the bull's-eye lens was deflected by the paraboloid into a parallel beam surrounding the bull's-eye lens, while light radiated into the backward hemisphere was reflected back through the lamp by the spherical reflector (as in Rogers' arrangement), to be collected by the forward components. The first unit was installed at North Harbour, Peterhead, in August 1849. Stevenson called this version a "catadioptric holophote", although each of its elements was either purely reflective or purely refractive. In the second version of the holophote concept, the bull's-eye lens and paraboloidal reflector were replaced by a catadioptric Fresnel lens—as conceived by Fresnel, but expanded to cover the whole forward hemisphere. The third version, which Stevenson confusingly called a "dioptric holophote", was more innovative: it retained the catadioptric Fresnel lens for the front hemisphere, but replaced the rear hemispherical reflector with a hemispherical array of annular prisms, each of which used two total internal reflections to turn light diverging from the center of the hemisphere back toward the center. The result was an all-glass holophote, with no losses from metallic reflections. James Timmins Chance modified Thomas Stevenson's all-glass holophotal design by arranging the double-reflecting prisms about a vertical axis. The prototype was shown at the 1862 International Exhibition in London. Later, to ease manufacturing, Chance divided the prisms into segments, and arranged them in a cylindrical form while retaining the property of reflecting light from a single point back to that point. Reflectors of this form, paradoxically called "dioptric mirrors", proved particularly useful for returning light from the landward side of the lamp to the seaward side. As lighthouses proliferated, they became harder to distinguish from each other, leading to the use of colored filters, which wasted light. 
In 1884, John Hopkinson eliminated the need for filters by inventing the "group-flashing" lens, in which the dioptric and/or the catadioptric panels were split so as to give multiple flashes—allowing lighthouses to be identified not only by frequency of flashes, but also by multiplicity of flashes. Double-flashing lenses were installed at Tampico (Mexico) and Little Basses (Sri Lanka) in 1875, and a triple-flashing lens at Casquets Lighthouse (Channel Islands) in 1876. The example shown (right) is the double-flashing lens of the Point Arena Light, which was in service from 1908 to 1977. The development of hyper-radial lenses was driven in part by the need for larger light sources, such as gas lights with multiple jets, which required a longer focal length for a given beam-width, hence a larger lens to collect a given fraction of the generated light. The first hyper-radial lens was built for the Stevensons in 1885 by F. Barbier & Cie of France, and tested at South Foreland Lighthouse with various light sources. Chance Brothers (Hopkinson's employers) then began constructing hyper-radials, installing their first at Bishop Rock Lighthouse in 1887. In the same year, Barbier installed a hyper-radial at Tory Island. But only about 30 hyper-radials went into service before the development of more compact bright lamps rendered such large optics unnecessary (see Hyperradiant Fresnel lens). Production of one-piece stepped dioptric lenses—roughly as envisaged by Buffon—became feasible in 1852, when John L. Gilliland of the Brooklyn Flint-Glass Company patented a method of making lenses from pressed and molded glass. The company made small bull's-eye lenses for use on railroads, steamboats, and docks; such lenses were common in the United States by the 1870s. In 1858 the company produced "a very small number of pressed flint-glass sixth-order lenses" for use in lighthouses—the first Fresnel lighthouse lenses made in America. By the 1950s, the substitution of plastic for glass made it economic to use Fresnel lenses as condensers in overhead projectors. Design The Fresnel lens reduces the amount of material required compared to a conventional lens by dividing the lens into a set of concentric annular sections. An ideal Fresnel lens would have an infinite number of sections. In each section, the overall thickness is decreased compared to an equivalent simple lens. This effectively divides the continuous surface of a standard lens into a set of surfaces of the same curvature, with stepwise discontinuities between them. In some lenses, the curved surfaces are replaced with flat surfaces, with a different angle in each section. Such a lens can be regarded as an array of prisms arranged in a circular fashion with steeper prisms on the edges and a flat or slightly convex center. In the first (and largest) Fresnel lenses, each section was actually a separate prism. 'Single-piece' Fresnel lenses were later produced, being used for automobile headlamps, brake, parking, and turn signal lenses, and so on. In modern times, computer-controlled milling equipment (CNC) or 3-D printers might be used to manufacture more complex lenses. Fresnel lens design allows a substantial reduction in thickness (and thus mass and volume of material) at the expense of reducing the imaging quality of the lens, which is why precise imaging applications such as photography usually still use larger conventional lenses. 
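As a rough illustration of the collapse described above, the following Python sketch (not a lens-design tool; the curvature radius, sample radii and step depth are assumed values) folds the sag of a plano-convex surface into fixed-depth zones, preserving the local curvature while introducing the stepwise discontinuities between sections.

# A minimal sketch of how collapsing a conventional plano-convex profile into
# fixed-depth zones yields a Fresnel profile with the same local curvature but
# stepwise discontinuities, as described above. All dimensions are illustrative.
import math

def sag(r: float, radius_of_curvature: float) -> float:
    """Thickness ("sag") of a spherical plano-convex surface at radial distance r."""
    return radius_of_curvature - math.sqrt(radius_of_curvature**2 - r**2)

def fresnel_sag(r: float, radius_of_curvature: float, step_depth: float) -> float:
    """Collapse the continuous sag into zones no thicker than step_depth."""
    z = sag(r, radius_of_curvature)
    return z - step_depth * math.floor(z / step_depth)

if __name__ == '__main__':
    R, step = 100.0, 0.5          # mm: curvature radius and zone depth (assumed)
    for r in [0, 5, 10, 15, 20, 25, 30]:
        print(f"r = {r:2d} mm: full lens {sag(r, R):6.3f} mm, "
              f"Fresnel {fresnel_sag(r, R, step):5.3f} mm")

Plotting the two profiles side by side would show the familiar sawtooth cross-section of a flat-sheet Fresnel lens against the smooth bulk of the conventional lens.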
Fresnel lenses are usually made of glass or plastic; their size varies from large (old historical lighthouses, meter size) to medium (book-reading aids, OHP viewgraph projectors) to small (TLR/SLR camera screens, micro-optics). In many cases they are very thin and flat, almost flexible, with thicknesses in the range. Most modern Fresnel lenses consist only of refractive elements. Lighthouse lenses, however, tend to include both refracting and reflecting elements, the latter being outside the metal rings seen in the photographs. While the inner elements are sections of refractive lenses, the outer elements are reflecting prisms, each of which performs two refractions and one total internal reflection, avoiding the light loss that occurs in reflection from a silvered mirror. Lighthouse lens sizes Fresnel designed six sizes of lighthouse lenses, divided into four orders based on their size and focal length. The 3rd and 4th orders were sub-divided into "large" and "small". In modern use, the orders are classified as first through sixth order. An intermediate size between third and fourth order was added later, as well as sizes above first order and below sixth. A first-order lens has a focal length of and stands about high, and wide. The smallest (sixth) order has a focal length of and a height of . The largest Fresnel lenses are called hyperradiant (or hyper-radial). One such lens was on hand when it was decided to build and outfit the Makapuu Point Light in Hawaii. Rather than order a new lens, the huge optic construction, tall and with over a thousand prisms, was used there. Types There are two main types of Fresnel lens: imaging and non-imaging. Imaging Fresnel lenses use segments with curved cross-sections and produce sharp images, while non-imaging lenses have segments with flat cross-sections, and do not produce sharp images. As the number of segments increases, the two types of lens become more similar to each other. In the abstract case of an infinite number of segments, the difference between curved and flat segments disappears. Imaging lenses can be classified as: Spherical A spherical Fresnel lens is equivalent to a simple spherical lens, using ring-shaped segments that are each a portion of a sphere, that all focus light on a single point. This type of lens produces a sharp image, although not quite as clear as the equivalent simple spherical lens due to diffraction at the edges of the ridges. This type is sometimes called a kinoform when the ridges are microscopic, at the wavelength scale. Cylindrical A cylindrical Fresnel lens is equivalent to a simple cylindrical lens, using straight segments with circular cross-section, focusing light on a single line. This type produces a sharp image, although not quite as clear as the equivalent simple cylindrical lens due to diffraction at the edges of the ridges. Non-imaging lenses can be classified as: Spot A non-imaging spot Fresnel lens uses ring-shaped segments with cross sections that are straight lines rather than circular arcs. Such a lens can focus light on a small spot, but does not produce a sharp image. These lenses have application in solar power, such as focusing sunlight on a solar panel. Fresnel lenses may be used as components of Köhler illumination optics resulting in very effective nonimaging optics Fresnel-Köhler (FK) solar concentrators. Linear A non-imaging linear Fresnel lens uses straight segments whose cross sections are straight lines rather than arcs. These lenses focus light into a narrow band. 
They do not produce a sharp image, but can be used in solar power, such as for focusing sunlight on a pipe, to heat the water within. Uses Illumination High-quality glass Fresnel lenses were used in lighthouses, where they were considered state of the art in the late 19th and through the middle of the 20th centuries. These lighthouse Fresnel lens systems typically include extra annular prismatic elements, arrayed in faceted domes above and below the central planar Fresnel, in order to catch all light emitted from the light source. The light path through these elements can include an internal reflection, rather than the simple refraction in the planar Fresnel element. These lenses conferred many practical benefits upon the designers, builders, and users of lighthouses and their illumination. Among other things, smaller lenses could fit into more compact spaces. Greater light transmission over longer distances, and varied patterns, made it possible to triangulate a position. Starting in the mid-20th century, most lighthouses have retired glass Fresnel lenses from service and replaced them with much less expensive and more durable aerobeacons or similar systems, including the Vega Industries VRB-25, which contains plastic Fresnel lens panels. Perhaps the most widespread use of Fresnel lenses, for a time, occurred in automobile headlamps, where they can shape the roughly parallel beam from the parabolic reflector to meet requirements for dipped and main-beam patterns, often both in the same headlamp unit (such as the European H4 design). For reasons of economy, weight, and impact resistance, newer cars have dispensed with glass Fresnel lenses for sealed beam headlamp units, instead using multifaceted reflectors with plain polycarbonate lenses. However, Fresnel lenses continue in wide use in automobile tail, marker, and reversing lights. Glass Fresnel lenses also are used in lighting instruments for theatre and motion pictures (see Fresnel lantern); such instruments are often called simply Fresnels. The entire instrument consists of a metal housing, a reflector, a lamp assembly, and a Fresnel lens. Many Fresnel instruments allow the lamp to be moved relative to the lens' focal point, to increase or decrease the size of the light beam. As a result, they are very flexible, and can often produce a beam as narrow as 7° or as wide as 70°. The Fresnel lens produces a very soft-edged beam, so is often used as a wash light. A holder in front of the lens can hold a colored plastic film (gel) to tint the light or wire screens or frosted plastic to diffuse it. The Fresnel lens is useful in the making of motion pictures not only because of its ability to focus the beam brighter than a typical lens, but also because the light is a relatively consistent intensity across the entire width of the beam of light. Aircraft carriers and naval air stations typically use Fresnel lenses in their optical landing systems. The "meatball" light aids the pilot in maintaining proper glide slope for the landing. In the center are amber and red lights composed of Fresnel lenses. Although the lights are always on, the angle of the lens from the pilot's point of view determines the color and position of the visible light. If the lights appear above the green horizontal bar, the pilot is too high. If it is below, the pilot is too low, and if the lights are red, the pilot is very low. Fresnel lenses are also commonly used in searchlights, spotlights, and flashlights. Imaging Fresnel lenses are used as simple hand-held magnifiers. 
They are also used to correct several visual disorders, including ocular-motility disorders such as strabismus. Fresnel lenses have been used to increase the visual size of CRT displays in pocket televisions, notably the Sinclair TV80. They are also used in traffic lights. Fresnel lenses are used in left-hand-drive European lorries entering the UK and Republic of Ireland (and vice versa, right-hand-drive Irish and British trucks entering mainland Europe) to overcome the blind spots caused by the driver operating the lorry while sitting on the wrong side of the cab relative to the side of the road the car is on. They attach to the passenger-side window. Another automobile application of a Fresnel lens is a rear view enhancer, as the wide view angle of a lens attached to the rear window permits examining the scene behind a vehicle, particularly a tall or bluff-tailed one, more effectively than a rear-view mirror alone. Fresnel lenses have been used on rangefinding equipment and projected map display screens. Fresnel lenses have also been used in the field of popular entertainment. The British rock artist Peter Gabriel made use of them in his early solo live performances to magnify the size of his head in contrast to the rest of his body, for dramatic and comic effect. In the Terry Gilliam film Brazil, Fresnel lenses are used as magnifiers for small CRT monitors in the Ministry of Information. The lenses occasionally appear between the actors and the camera, distorting the scale and composition of the scene to humorous effect. In the Pixar movie Wall-E, the protagonist watches the musical Hello, Dolly! on an iPod, magnified by a Fresnel lens. Virtual reality headsets, such as the Meta Quest 2 and the HTC Vive Pro use Fresnel lenses, as they allow a thinner and lighter form factor than regular lenses. Newer devices, such as the Meta Quest Pro, have switched to a pancake lens design due to its smaller form factor and less chromatic aberration than Fresnel lenses. Multi-focal Fresnel lenses are also used as a part of retina identification cameras, where they provide multiple in- and out-of-focus images of a fixation target inside the camera. For virtually all users, at least one of the images will be in focus, thus allowing correct eye alignment. Many cameras are equipped with viewfinders which project the scene through a lens onto a ground glass screen for focusing and composition, including view, twin-lens reflex, and single-lens reflex cameras; often a Fresnel condenser lens is applied to the ground glass to increase the perceived brightness of the projected image and make the illumination more even from center to corner. For example, the Polaroid SX-70 camera uses a Fresnel reflector as part of its viewing system. Projection The use of Fresnel lenses for image projection reduces image quality, so they tend to occur only where quality is not critical or where the bulk of a solid lens would be prohibitive. Cheap Fresnel lenses can be stamped or molded of transparent plastic and are used in overhead projectors and projection televisions. Fresnel lenses of different focal lengths (one collimator, and one collector) are used in commercial and DIY projection. The collimator lens has the lower focal length and is placed closer to the light source, and the collector lens, which focuses the light into the triplet lens, is placed after the projection image (an active matrix LCD panel in LCD projectors). Fresnel lenses are also used as collimators in overhead projectors. 
Solar power Since plastic Fresnel lenses can be made larger than glass lenses, as well as being much cheaper and lighter, they are used to concentrate sunlight for heating in solar cookers, in solar forges, and in solar collectors used to heat water for domestic use. They can also be used to generate steam or to power a Stirling engine to generate electricity. Fresnel lenses can concentrate sunlight onto solar cells with a ratio of almost 500:1. This allows the active solar-cell surface to be reduced, lowering cost and allowing the use of more efficient cells that would otherwise be too expensive. In the early 21st century, Fresnel reflectors began to be used in concentrating solar power (CSP) plants to concentrate solar energy. One application was to preheat water at the coal-fired Liddell Power Station, in Hunter Valley Australia. Fresnel lenses can be used to sinter sand, allowing 3D printing in glass.
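As a back-of-the-envelope illustration of the concentration ratios mentioned above, the geometric ratio is simply the lens aperture area divided by the receiver (cell) area. The dimensions in this Python sketch are assumptions chosen to land near the quoted order of magnitude, not figures from the text.

# A minimal sketch of the geometric concentration ratio discussed above.
# The lens and cell dimensions are illustrative assumptions.
import math

def concentration_ratio(lens_diameter_m: float, cell_side_m: float) -> float:
    lens_area = math.pi * (lens_diameter_m / 2) ** 2
    cell_area = cell_side_m ** 2
    return lens_area / cell_area

if __name__ == '__main__':
    # e.g. a 25 cm diameter lens focused onto a 1 cm x 1 cm cell
    ratio = concentration_ratio(0.25, 0.01)
    print(f"Geometric concentration: {ratio:.0f}x")   # roughly 490x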
Technology
Optical components
null
64020
https://en.wikipedia.org/wiki/Multiprocessing
Multiprocessing
Multiprocessing is the use of two or more central processing units (CPUs) within a single computer system. The term also refers to the ability of a system to support more than one processor or the ability to allocate tasks between them. There are many variations on this basic theme, and the definition of multiprocessing can vary with context, mostly as a function of how CPUs are defined (multiple cores on one die, multiple dies in one package, multiple packages in one system unit, etc.). According to some on-line dictionaries, a multiprocessor is a computer system having two or more processing units (multiple processors) each sharing main memory and peripherals, in order to simultaneously process programs. A 2009 textbook defined multiprocessor system similarly, but noted that the processors may share "some or all of the system’s memory and I/O facilities"; it also gave tightly coupled system as a synonymous term. At the operating system level, multiprocessing is sometimes used to refer to the execution of multiple concurrent processes in a system, with each process running on a separate CPU or core, as opposed to a single process at any one instant. When used with this definition, multiprocessing is sometimes contrasted with multitasking, which may use just a single processor but switch it in time slices between tasks (i.e. a time-sharing system). Multiprocessing however means true parallel execution of multiple processes using more than one processor. Multiprocessing doesn't necessarily mean that a single process or task uses more than one processor simultaneously; the term parallel processing is generally used to denote that scenario. Other authors prefer to refer to the operating system techniques as multiprogramming and reserve the term multiprocessing for the hardware aspect of having more than one processor. The remainder of this article discusses multiprocessing only in this hardware sense. In Flynn's taxonomy, multiprocessors as defined above are MIMD machines. As the term "multiprocessor" normally refers to tightly coupled systems in which all processors share memory, multiprocessors are not the entire class of MIMD machines, which also contains message passing multicomputer systems. Key topics Processor symmetry In a multiprocessing system, all CPUs may be equal, or some may be reserved for special purposes. A combination of hardware and operating system software design considerations determine the symmetry (or lack thereof) in a given system. For example, hardware or software considerations may require that only one particular CPU respond to all hardware interrupts, whereas all other work in the system may be distributed equally among CPUs; or execution of kernel-mode code may be restricted to only one particular CPU, whereas user-mode code may be executed in any combination of processors. Multiprocessing systems are often easier to design if such restrictions are imposed, but they tend to be less efficient than systems in which all CPUs are utilized. Systems that treat all CPUs equally are called symmetric multiprocessing (SMP) systems. In systems where all CPUs are not equal, system resources may be divided in a number of ways, including asymmetric multiprocessing (ASMP), non-uniform memory access (NUMA) multiprocessing, and clustered multiprocessing. Master/slave multiprocessor system In a master/slave multiprocessor system, the master CPU is in control of the computer and the slave CPU(s) performs assigned tasks. 
The CPUs can be completely different in terms of speed and architecture. Some (or all) of the CPUs can share a common bus, each can also have a private bus (for private resources), or they may be isolated except for a common communications pathway. Likewise, the CPUs can share common RAM and/or have private RAM that the other processor(s) cannot access. The roles of master and slave can change from one CPU to another. Two early examples of a mainframe master/slave multiprocessor are the Bull Gamma 60 and the Burroughs B5000. An early example of a master/slave multiprocessor system of microprocessors is the Tandy/Radio Shack TRS-80 Model 16 desktop computer, which came out in February 1982 and ran the multi-user/multi-tasking Xenix operating system, Microsoft's version of UNIX (called TRS-XENIX). The Model 16 has two microprocessors: an 8-bit Zilog Z80 CPU running at 4 MHz, and a 16-bit Motorola 68000 CPU running at 6 MHz. When the system is booted, the Z-80 is the master and the Xenix boot process initializes the slave 68000, and then transfers control to the 68000, whereupon the CPUs change roles and the Z-80 becomes a slave processor responsible for all I/O operations including disk, communications, printer and network, as well as the keyboard and integrated monitor, while the operating system and applications run on the 68000 CPU. The Z-80 can be used to do other tasks. The earlier TRS-80 Model II, which was released in 1979, could also be considered a multiprocessor system as it had both a Z-80 CPU and an Intel 8021 microcontroller in the keyboard. The 8021 made the Model II the first desktop computer system with a separate detachable lightweight keyboard connected by a single thin flexible wire, and likely the first keyboard to use a dedicated microcontroller, both attributes that would be copied years later by Apple and IBM. Instruction and data streams In multiprocessing, the processors can be used to execute a single sequence of instructions in multiple contexts (single instruction, multiple data or SIMD, often used in vector processing), multiple sequences of instructions in a single context (multiple instruction, single data or MISD, used for redundancy in fail-safe systems and sometimes applied to describe pipelined processors or hyper-threading), or multiple sequences of instructions in multiple contexts (multiple instruction, multiple data or MIMD). Processor coupling Tightly coupled multiprocessor system Tightly coupled multiprocessor systems contain multiple CPUs that are connected at the bus level. These CPUs may have access to a central shared memory (SMP or UMA), or may participate in a memory hierarchy with both local and shared memory (SM) (NUMA). The IBM p690 Regatta is an example of a high-end SMP system. Intel Xeon processors dominated the multiprocessor market for business PCs and were the only major x86 option until the release of AMD's Opteron range of processors in 2004. Both ranges of processors had their own onboard cache but provided access to shared memory; the Xeon processors via a common pipe and the Opteron processors via independent pathways to the system RAM. Chip multiprocessors, also known as multi-core computing, involve more than one processor placed on a single chip and can be thought of as the most extreme form of tightly coupled multiprocessing. Mainframe systems with multiple processors are often tightly coupled. 
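As a software-level illustration of the MIMD category described under Instruction and data streams above, the following Python sketch runs two different instruction streams on different data as separate operating-system processes, which a multiprocessor system can schedule onto separate CPUs. The worker functions are arbitrary placeholders, and nothing here models SIMD or MISD hardware.

# A minimal, OS-level illustration of the MIMD idea: several independent
# instruction streams (here, different functions) run concurrently on
# separate CPUs as separate processes.
import os
from concurrent.futures import ProcessPoolExecutor

def sum_squares(n: int) -> int:
    return sum(i * i for i in range(n))

def count_vowels(text: str) -> int:
    return sum(text.count(v) for v in "aeiou")

if __name__ == '__main__':
    print("CPUs available:", os.cpu_count())
    with ProcessPoolExecutor() as pool:
        # Different instruction streams applied to different data, concurrently.
        f1 = pool.submit(sum_squares, 1_000_000)
        f2 = pool.submit(count_vowels, "tightly coupled multiprocessor system")
        print(f1.result(), f2.result())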
Loosely coupled multiprocessor system Loosely coupled multiprocessor systems (often referred to as clusters) are based on multiple standalone relatively low processor count commodity computers interconnected via a high speed communication system (Gigabit Ethernet is common). A Linux Beowulf cluster is an example of a loosely coupled system. Tightly coupled systems perform better and are physically smaller than loosely coupled systems, but have historically required greater initial investments and may depreciate rapidly; nodes in a loosely coupled system are usually inexpensive commodity computers and can be recycled as independent machines upon retirement from the cluster. Power consumption is also a consideration. Tightly coupled systems tend to be much more energy-efficient than clusters. This is because a considerable reduction in power consumption can be realized by designing components to work together from the beginning in tightly coupled systems, whereas loosely coupled systems use components that were not necessarily intended specifically for use in such systems. Loosely coupled systems have the ability to run different operating systems or OS versions on different systems. Disadvantages Merging data from multiple threads or processes may incur significant overhead due to conflict resolution, data consistency, versioning, and synchronization.
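A minimal sketch of the synchronization cost described above, assuming Python's multiprocessing module as the vehicle: several processes increment a shared counter, and every update must be serialized through the lock that accompanies the shared value. The counter type and process count are illustrative assumptions.

# Several processes update shared state; access is serialized with a lock,
# and that coordination is part of the overhead of merging their results.
from multiprocessing import Process, Value

def add_many(counter, iterations: int) -> None:
    for _ in range(iterations):
        with counter.get_lock():       # serialize access to the shared value
            counter.value += 1

if __name__ == '__main__':
    counter = Value('i', 0)            # shared integer with an associated lock
    workers = [Process(target=add_many, args=(counter, 10_000)) for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(counter.value)               # 40000, but only because updates were locked

Removing the lock would make the loop faster but the final count unreliable, which is the trade-off the paragraph above alludes to.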
Technology
Computer architecture concepts
null
64150
https://en.wikipedia.org/wiki/Sphynx%20cat
Sphynx cat
The Sphynx cat (pronounced , ) also known as the Canadian Sphynx, is a breed of cat known for its lack of fur. Hairlessness in cats is a naturally occurring genetic mutation, and the Sphynx was developed through selective breeding of these animals, starting in the 1960s. The skin has a texture of chamois leather, as it has fine hairs, or the cat may be completely hairless. Whiskers may be present, either whole or broken, or may be totally absent. Per the breed standards, they have a somewhat wedge-shaped head with large eyes and ears, quite long legs and tail, and neat rounded paws. Their skin is the color that their fur would be, and all the usual cat markings (solid, point, van, tabby, tortie, etc.) may be found on the Sphynx cat's skin. Because they have no fur, Sphynx cats lose body heat more readily than coated cats, making them both warm to the touch and prone to seeking out warm places. Breed standards The breed standard from The International Cat Association (TICA) calls for: Wedge-shaped heads with prominent cheekbones Large, lemon-shaped eyes Very large ears with hair on inside, but soft down on outside base Well-muscled, powerful neck of medium length Medium length torso, barrel-chested, and full, round abdomen, sometimes called a pot belly Paw pads thicker than other cats, giving the appearance of walking on cushions Whiplike, tapering tail from body to tip, (sometimes with fur all over tail or a puff of fur on the tip, like a lion) Muscular body History of the cat breed The contemporary breed of Sphynx cat is distinct from the Russian hairless cat breeds, like Peterbald and Donskoy. Although hairless cats have been reported throughout history, breeders in Europe have been developing the Sphynx breed since the early 1960s. Two different sets of hairless felines discovered in North America in the 1970s provided the foundation cats for what was shaped into the existing Sphynx breed. The current American and European Sphynx breed is descended from two lines of natural mutations: Dermis and Epidermis (1975) barn cats from the Pearson family of Wadena, Minnesota Bambi, Punkie and Paloma (1978) stray cats found in Toronto, Ontario, Canada, and raised by Shirley Smith Toronto The Canadian Sphynx breed was started in 1966 in Toronto, Ontario when a hairless male kitten named Prune was born to a black and white domestic shorthair queen (Elizabeth). After purchasing these cats in 1966 and initially referring to them as "Moonstones" and "Canadian Hairless", Ridyadh Bawa, a science graduate of the University of Toronto, combined efforts with his mother Yania, a longtime Siamese breeder, and Keese and Rita Tenhoves to develop a breed of cats which was subsequently renamed as Sphynx. The Bawas and the Tenhoves were the first individuals able to determine the autosomal recessive nature of the Sphynx gene for hairlessness while also being successful in transforming this knowledge into a successful breeding program with kittens which were eventually capable of reproducing. The Tenhoves were initially able to obtain for the new breed provisional showing status through the Cat Fanciers' Association (CFA) but ultimately had the status revoked in 1971, when it was felt by the CFA Board that the breed had concerns over fertility. The first breeders had rather vague ideas about Sphynx genetics and faced a number of problems. The genetic pool was very limited and many kittens died. There was also a problem with many of the females suffering convulsions. 
In 1978, cat breeder Shirley Smith found three hairless kittens on the streets of her neighborhood. In 1983, she sent two of them to Dr. Hugo Hernandez in the Netherlands to breed the two kittens, named Punkie and Paloma, to a white Devon Rex named Curare van Jetrophin. The resulting litter produced five kittens: two males from this litter (Q. Ramses and Q. Ra) were used, along with Punkie's half-sister, Paloma. Minnesota The first noted naturally occurring foundation Sphynx originated as hairless stray barn cats in Wadena, Minnesota, at the farm of Milt and Ethelyn Pearson. The Pearsons identified hairless kittens occurring in several litters of their domestic shorthair barn cats in the mid-1970s. Two hairless female kittens born in 1975 and 1976, Epidermis and Dermis, were sold to Oregon breeder Kim Mueske, and became an important part of the Sphynx breeding program. Also working with the Pearson line of cats was breeder Georgiana Gattenby of Brainerd, Minnesota, who outcrossed with Cornish Rex cats. Genetics and breeding Other hairless breeds may have body shapes or temperaments that differ from those of Sphynx standards. There are, for example, new hairless breeds, including the Don Sphynx and the Peterbald from Russia, which arose from their own spontaneous gene mutations. The standard for the Sphynx differs between cat associations such as The International Cat Association (TICA), Fédération Internationale Féline (FIFE) and Cat Fanciers' Association (CFA). Breeding In 2010, DNA analysis confirmed that Sphynx hairlessness was produced by a different allele of the same gene that produces the short curly hair of the Devon Rex (termed the "re" allele), with the Sphynx's allele being incompletely dominant over the Devon allele and both being recessive to the wild type. Other associations may vary, and the Russian Blue is a permitted outcross in the Governing Council of the Cat Fancy (GCCF). Genetics The Sphynx's distinctive hairlessness is primarily due to a mutation in the KRT71 gene, which also affects other breeds, such as the Devon Rex and Selkirk Rex, albeit with different outcomes. This gene is responsible for the keratinization of the hair follicle. In the Sphynx, the mutation, known as "hr", leads to a complete loss of function, damaging the structure of the hair. Normally, KRT71 helps produce strong hair that is securely anchored to the skin. However, due to the "hr" mutation, the hair of Sphynx cats lacks a solid root or bulb, making it extremely weak. Consequently, the hair is fragile and loosely attached, causing it to fall out easily and contributing to the breed's nearly hairless appearance. Sphynx cats may still retain very soft, short hair on parts of their body, such as the nose, tails, and toes, but overall, their coat is significantly reduced and lacks the typical structure seen in other cats. In the Devon Rex mutation, a residual activity of the protein still exists. The Selkirk Rex allele (sadr) is dominant over the wild type gene, which is dominant over the Devon Rex allele (re) and the Sphynx (hr), which forms an allelic series of : KRT71SADRE > KRT71+ > KRT71re > KRT71hr. Behavior Sphynx are known for their extroverted behavior. They display a high level of energy, intelligence, curiosity and affection for their owners. They are one of the more dog-like breeds of cats, frequently greeting their owners at the door and are friendly when meeting strangers. 
Sphynx cats tend to be highly attached to their owners, often demanding large amounts of attention, and if said attention is not given, can get into trouble. The mischievous cats love to cuddle for body warmth, due to their lack of fur. A study was conducted by the Journal of Veterinary Behavior in 2012, and while further research needs to be conducted, purebred Sphynx cats were rated by their owners as friendlier than purebred European cats. Care Care should be taken to limit the Sphynx cat's exposure to outdoor sunlight at length, as they can develop sunburn and skin damage similar to that of humans. In general, Sphynx cats should never be allowed outdoors unattended, as they have limited means to conserve body heat when it is cold. In some climates, owners provide coats or other clothing in the winter to help them conserve body heat. While they lack much of the fur of other cat breeds, Sphynxes are not necessarily hypoallergenic. Allergies to cats are triggered by a protein called Fel d1, not cat hair itself. Fel d1 is a protein primarily found in cat saliva and sebaceous glands. Those with cat allergies may react to direct contact with Sphynx cats. Even though reports exist that some people with allergies successfully tolerate Sphynx cats, they are fewer than those who have allergic reactions. The skin of the Sphynx cat is known for its excessive production of a greasy secretion, which often results in the accumulation of a sticky, dark brown, or reddish-brown layer that necessitates regular cleaning. Furthermore, Sphynx cats typically produce more earwax than most hairy domestic cats. This increased wax production is attributed to the minimal to absent hair within their ears, which allows for the accumulation of dirt, skin oils (sebum), and ear wax, thereby requiring frequent cleaning. Additionally, they often accumulate oils and debris under their nails and within their numerous skin folds due to the lack of fur. Regular maintenance of these areas, including the nails and skin folds, is essential for the health and hygiene of the breed. Health The Sphynx faces challenges because of its lack of protective fur. Skin cancer may be a problem if exposed to sunlight for long durations of time. The lack of hair can cause health issues with kittens in the first weeks of life because of susceptibility to respiratory infections. Reputable breeders should not let their kittens go to new homes without being at least 14 weeks of age to ensure the kitten is mature enough to cope in a new environment. In a review of over 5,000 cases of urate urolithiasis the Sphynx was over-represented, with four recorded cases out of a population of 28. Hypertrophic cardiomyopathy The breed does have instances of the genetic disorder hypertrophic cardiomyopathy (HCM). Other domestic cat breeds prone to HCM include Persian, Ragdoll, Norwegian Forest cat, Siberian cats, British Shorthair and Maine Coon; however, any domestic cat including mixed breeds can acquire HCM. Studies are being undertaken to understand the links in breeding and the disorder. Cats are screened for HCM disease with echocardiography (ultrasound of the heart), as well as with additional tests determined by the veterinarian cardiologist including electrocardiogram (EKG, ECG), chest radiographs (X-rays), and/or blood tests. The Sphynx cat has a high rate of heart disease, either as HCM or mitral valve dysplasia. 
In a 2012 study of 114 Sphynx cats, 34% were found to have an abnormal heart, with 16 cats having mitral valve dysplasia and 23 cats having HCM. These prevalences were found in cats with an average age of 2.62 years. Male cats developed more severe disease than female cats and often developed it earlier, at an average age of 19 months for males and 29 months for females. Since the prevalence of genetic heart disease is high in this breed, many breeders will recommend screening for HCM yearly. As HCM progresses into an advanced stage, cats may experience congestive heart failure (CHF) or thromboembolism. Congenital myasthenic syndrome Congenital myasthenic syndrome (CMS) previously referred to as muscular dystrophy, myopathy or spasticity, is a type of inherited neuromuscular disorder associated with alpha-dystroglycan deficiency, found in Sphynx and in Devon Rex cats as well as variants of these breeds, which can occur between the first 3 to 23 weeks of their life. This condition has also been described, but is rarely seen. Cats affected by CMS show generalized muscle weakness and fatigue, as well as ventroflexion of the head and neck, head bobbing, and scapulae protrusion.
Biology and health sciences
Cats
Animals
64204
https://en.wikipedia.org/wiki/Kinetic%20theory%20of%20gases
Kinetic theory of gases
The kinetic theory of gases is a simple classical model of the thermodynamic behavior of gases. Its introduction allowed many principal concepts of thermodynamics to be established. It treats a gas as composed of numerous particles, too small to be seen with a microscope, in constant, random motion. These particles are now known to be the atoms or molecules of the gas. The kinetic theory of gases uses their collisions with each other and with the walls of their container to explain the relationship between the macroscopic properties of gases, such as volume, pressure, and temperature, as well as transport properties such as viscosity, thermal conductivity and mass diffusivity. The basic version of the model describes an ideal gas. It treats the collisions as perfectly elastic and as the only interaction between the particles, which are additionally assumed to be much smaller than their average distance apart. Due to the time reversibility of microscopic dynamics (microscopic reversibility), the kinetic theory is also connected to the principle of detailed balance, in terms of the fluctuation-dissipation theorem (for Brownian motion) and the Onsager reciprocal relations. The theory was historically significant as the first explicit exercise of the ideas of statistical mechanics. History Kinetic theory of matter Antiquity In about 50 BCE, the Roman philosopher Lucretius proposed that apparently static macroscopic bodies were composed on a small scale of rapidly moving atoms all bouncing off each other. This Epicurean atomistic point of view was rarely considered in the subsequent centuries, when Aristotlean ideas were dominant. Modern era "Heat is motion" One of the first and boldest statements on the relationship between motion of particles and heat was by the English philosopher Francis Bacon in 1620. "It must not be thought that heat generates motion, or motion heat (though in some respects this be true), but that the very essence of heat ... is motion and nothing else." "not a ... motion of the whole, but of the small particles of the body." In 1623, in The Assayer, Galileo Galilei, in turn, argued that heat, pressure, smell and other phenomena perceived by our senses are apparent properties only, caused by the movement of particles, which is a real phenomenon. In 1665, in Micrographia, the English polymath Robert Hooke repeated Bacon's assertion, and in 1675, his colleague, Anglo-Irish scientist Robert Boyle noted that a hammer's "impulse" is transformed into the motion of a nail's constituent particles, and that this type of motion is what heat consists of. Boyle also believed that all macroscopic properties, including color, taste and elasticity, are caused by and ultimately consist of nothing but the arrangement and motion of indivisible particles of matter. In a lecture of 1681, Hooke asserted a direct relationship between the temperature of an object and the speed of its internal particles. "Heat ... is nothing but the internal Motion of the Particles of [a] Body; and the hotter a Body is, the more violently are the Particles moved." In a manuscript published 1720, the English philosopher John Locke made a very similar statement: "What in our sensation is heat, in the object is nothing but motion." Locke too talked about the motion of the internal particles of the object, which he referred to as its "insensible parts". 
In his 1744 paper Meditations on the Cause of Heat and Cold, Russian polymath Mikhail Lomonosov made a relatable appeal to everyday experience to gain acceptance of the microscopic and kinetic nature of matter and heat. Lomonosov also insisted that movement of particles is necessary for the processes of dissolution, extraction and diffusion, providing as examples the dissolution and diffusion of salts by the action of water particles on the "molecules of salt", the dissolution of metals in mercury, and the extraction of plant pigments by alcohol. The transfer of heat, too, was explained by the motion of particles. Around 1760, Scottish physicist and chemist Joseph Black wrote: "Many have supposed that heat is a tremulous ... motion of the particles of matter, which ... motion they imagined to be communicated from one body to another." Kinetic theory of gases In 1738 Daniel Bernoulli published Hydrodynamica, which laid the basis for the kinetic theory of gases. In this work, Bernoulli posited the argument that gases consist of great numbers of molecules moving in all directions, that their impact on a surface causes the pressure of the gas, and that their average kinetic energy determines the temperature of the gas. The theory was not immediately accepted, in part because conservation of energy had not yet been established, and it was not obvious to physicists how the collisions between molecules could be perfectly elastic. Pioneers of the kinetic theory, whose work was also largely neglected by their contemporaries, were Mikhail Lomonosov (1747), Georges-Louis Le Sage (ca. 1780, published 1818), John Herapath (1816) and John James Waterston (1843), who connected their research with the development of mechanical explanations of gravitation. In 1856 August Krönig created a simple gas-kinetic model, which only considered the translational motion of the particles. In 1857 Rudolf Clausius developed a similar, but more sophisticated version of the theory, which included translational and, contrary to Krönig, also rotational and vibrational molecular motions. In this same work he introduced the concept of mean free path of a particle. In 1859, after reading a paper about the diffusion of molecules by Clausius, Scottish physicist James Clerk Maxwell formulated the Maxwell distribution of molecular velocities, which gave the proportion of molecules having a certain velocity in a specific range. This was the first-ever statistical law in physics. Maxwell also gave the first mechanical argument that molecular collisions entail an equalization of temperatures and hence a tendency towards equilibrium. In his 1873 thirteen-page article 'Molecules', Maxwell states: "we are told that an 'atom' is a material point, invested and surrounded by 'potential forces' and that when 'flying molecules' strike against a solid body in constant succession it causes what is called pressure of air and other gases." In 1871, Ludwig Boltzmann generalized Maxwell's achievement and formulated the Maxwell–Boltzmann distribution. The logarithmic connection between entropy and probability was also first stated by Boltzmann. At the beginning of the 20th century, atoms were considered by many physicists to be purely hypothetical constructs, rather than real objects. An important turning point came with the papers of Albert Einstein (1905) and Marian Smoluchowski (1906) on Brownian motion, which succeeded in making certain accurate quantitative predictions based on the kinetic theory.
Following the development of the Boltzmann equation, a framework for its use in developing transport equations was developed independently by Sydney Chapman and David Enskog in 1916 and 1917, respectively. The framework provided a route to prediction of the transport properties of dilute gases, and became known as Chapman–Enskog theory. The framework was gradually expanded throughout the following century, eventually becoming a route to prediction of transport properties in real, dense gases. Assumptions The application of kinetic theory to ideal gases makes the following assumptions: The gas consists of very small particles. This smallness of their size is such that the sum of the volume of the individual gas molecules is negligible compared to the volume of the container of the gas. This is equivalent to stating that the average distance separating the gas particles is large compared to their size, and that the elapsed time during a collision between particles and the container's wall is negligible when compared to the time between successive collisions. The number of particles is so large that a statistical treatment of the problem is well justified. This assumption is sometimes referred to as the thermodynamic limit. The rapidly moving particles constantly collide among themselves and with the walls of the container, and all these collisions are perfectly elastic. Interactions (i.e. collisions) between particles are strictly binary and uncorrelated, meaning that there are no three-body (or higher) interactions, and the particles have no memory. Except during collisions, the interactions among molecules are negligible. They exert no other forces on one another. Thus, the dynamics of particle motion can be treated classically, and the equations of motion are time-reversible. As a simplifying assumption, the particles are usually assumed to have the same mass as one another; however, the theory can be generalized to a mass distribution, with each mass type contributing to the gas properties independently of one another in agreement with Dalton's law of partial pressures. Many of the model's predictions are the same whether or not collisions between particles are included, so they are often neglected as a simplifying assumption in derivations (see below). More modern developments, such as the revised Enskog theory and the extended Bhatnagar–Gross–Krook model, relax one or more of the above assumptions. These can accurately describe the properties of dense gases, and gases with internal degrees of freedom, because they include the volume of the particles as well as contributions from intermolecular and intramolecular forces, quantized molecular rotations, quantum rotational-vibrational symmetry effects, and electronic excitation. While theories relaxing the assumptions that the gas particles occupy negligible volume and that collisions are strictly elastic have been successful, it has been shown that relaxing the requirement of interactions being binary and uncorrelated will eventually lead to divergent results. Equilibrium properties Pressure and kinetic energy In the kinetic theory of gases, the pressure is assumed to be equal to the force (per unit area) exerted by the individual gas atoms or molecules hitting and rebounding from the gas container's surface. Consider a gas particle traveling at velocity $v_x$ along the $x$-direction in an enclosed volume with characteristic length $L$, cross-sectional area $A$, and volume $V = AL$.
The gas particle encounters a boundary after a characteristic time $\Delta t = 2L/v_x$. The momentum of the gas particle can then be described as $p_x = m v_x$, and each rebound from the wall transfers momentum $\Delta p = 2 m v_x$. We combine the above with Newton's second law, which states that the force experienced by a particle is related to the time rate of change of its momentum, such that $F = \Delta p/\Delta t = m v_x^2/L$. Now consider a large number, $N$, of gas particles with random orientation in a three-dimensional volume. Because the orientation is random, the average particle speed, $\bar{v}$, in every direction is identical: $\bar{v}_x^2 = \bar{v}_y^2 = \bar{v}_z^2 = \bar{v}^2/3$. Further, assume that the volume is symmetrical about its three dimensions, such that $A = L^2$ and $V = L^3$. The total surface area on which the gas particles act is therefore $6L^2$. The pressure exerted by the collisions of the gas particles with the surface can then be found by adding the force contribution of every particle and dividing by the interior surface area of the volume, $P = N m \bar{v}^2/(3V)$. The total translational kinetic energy of the gas is defined as $K_\text{t} = \tfrac{1}{2} N m \bar{v}^2$, providing the result $PV = \tfrac{2}{3} K_\text{t}$ (1). This is an important, non-trivial result of the kinetic theory because it relates pressure, a macroscopic property, to the translational kinetic energy of the molecules, which is a microscopic property. Temperature and kinetic energy Rewriting the above result for the pressure as $PV = \tfrac{2}{3} K_\text{t}$, we may combine it with the ideal gas law $PV = N k_\text{B} T$, where $k_\text{B}$ is the Boltzmann constant and $T$ is the absolute temperature defined by the ideal gas law, to obtain $K_\text{t} = \tfrac{3}{2} N k_\text{B} T$, which leads to a simplified expression of the average translational kinetic energy per molecule, $\bar{K} = \tfrac{1}{2} m \bar{v}^2 = \tfrac{3}{2} k_\text{B} T$ (2). The translational kinetic energy of the system is $N$ times that of a molecule, namely $K_\text{t} = N\bar{K} = \tfrac{1}{2} N m \bar{v}^2$. The temperature $T$ is related to the translational kinetic energy by the description above, resulting in $T = m\bar{v}^2/(3 k_\text{B})$ (3), which becomes $\bar{v}^2 = 3 k_\text{B} T/m$. Equation (2) is one important result of the kinetic theory: the average molecular kinetic energy is proportional to the ideal gas law's absolute temperature. From equations (1) and (2), we have $PV = \tfrac{2}{3} N \bar{K} = N k_\text{B} T$. Thus, the product of pressure and volume per mole is proportional to the average translational molecular kinetic energy. Equations (1) and (2) are called the "classical results", which could also be derived from statistical mechanics. The equipartition theorem requires that kinetic energy is partitioned equally between all kinetic degrees of freedom, D. A monatomic gas is axially symmetric about each spatial axis, so that D = 3, comprising translational motion along each axis. A diatomic gas is axially symmetric about only one axis, so that D = 5, comprising translational motion along three axes and rotational motion along two axes. A polyatomic gas, like water, is not radially symmetric about any axis, resulting in D = 6, comprising 3 translational and 3 rotational degrees of freedom. Because the equipartition theorem requires that kinetic energy is partitioned equally, the total kinetic energy is $K = \tfrac{D}{2} N k_\text{B} T$. Thus, the energy added to the system per gas particle kinetic degree of freedom is $\tfrac{1}{2} k_\text{B} T$. Therefore, the kinetic energy per kelvin of one mole of monatomic ideal gas (D = 3) is $\tfrac{3}{2} N_\text{A} k_\text{B} = \tfrac{3}{2} R$, where $N_\text{A}$ is the Avogadro constant, and $R$ is the ideal gas constant. Thus, the ratio of the kinetic energy to the absolute temperature of an ideal monatomic gas can be calculated easily: per mole: 12.47 J/K; per molecule: 20.7 yJ/K = 129 μeV/K. At standard temperature (273.15 K), the kinetic energy can also be obtained: per mole: 3406 J; per molecule: 5.65 zJ = 35.2 meV. At higher temperatures (typically thousands of kelvins), vibrational modes become active to provide additional degrees of freedom, creating a temperature-dependence on D and the total molecular energy.
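The figures quoted above follow directly from the physical constants involved. The minimal sketch below (not part of the original article) recomputes them; its only inputs are the CODATA values of the Boltzmann and Avogadro constants.

```python
# Numerical check of the monatomic-gas energy figures quoted above (D = 3).
k_B = 1.380649e-23   # Boltzmann constant, J/K
N_A = 6.02214076e23  # Avogadro constant, 1/mol
R = k_B * N_A        # ideal gas constant, ~8.314 J/(mol K)
T = 273.15           # standard temperature, K

per_mole_per_K = 1.5 * R        # kinetic energy per kelvin per mole
per_molecule_per_K = 1.5 * k_B  # kinetic energy per kelvin per molecule

print(f"(3/2) R   = {per_mole_per_K:.2f} J/(mol K)")            # ~12.47 J/K
print(f"(3/2) k_B = {per_molecule_per_K:.3e} J/K")              # ~2.07e-23 J/K = 20.7 yJ/K
print(f"KE per mole at {T} K     = {per_mole_per_K * T:.0f} J")  # ~3406 J
print(f"KE per molecule at {T} K = {per_molecule_per_K * T:.3e} J")  # ~5.65e-21 J = 5.65 zJ
```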
Quantum statistical mechanics is needed to accurately compute these vibrational contributions. Collisions with container wall For an ideal gas in equilibrium, the rate of collisions with the container wall and velocity distribution of particles hitting the container wall can be calculated based on naive kinetic theory, and the results can be used for analyzing effusive flow rates, which is useful in applications such as the gaseous diffusion method for isotope separation. Assume that in the container, the number density (number per unit volume) is $n = N/V$ and that the particles obey Maxwell's velocity distribution: $f(v) = 4\pi \left(\tfrac{m}{2\pi k_\text{B} T}\right)^{3/2} v^2 e^{-mv^2/2k_\text{B}T}$, normalized so that $\int_0^\infty f(v)\,dv = 1$. Then for a small area $dA$ on the container wall, a particle with speed $v$ at angle $\theta$ from the normal of the area $dA$ will collide with the area within time interval $dt$, if it is within the distance $v\,dt$ from the area $dA$. Therefore, all the particles with speed $v$ at angle $\theta$ from the normal that can reach area $dA$ within time interval $dt$ are contained in the tilted pipe with a height of $v\cos\theta\,dt$ and a volume of $v\cos\theta\,dA\,dt$. The total number of particles that reach area $dA$ within time interval $dt$ also depends on the velocity distribution; all in all, it calculates to be $n\,v\cos\theta\,dA\,dt \times f(v)\,dv \times \tfrac{\sin\theta\,d\theta\,d\varphi}{4\pi}$. Integrating this over all appropriate velocities within the constraint $v > 0$, $0 < \theta < \pi/2$, $0 < \varphi < 2\pi$ yields the number of atomic or molecular collisions with a wall of a container per unit area per unit time, $J = \tfrac{1}{4} n \bar{v} = \tfrac{n}{4}\sqrt{\tfrac{8 k_\text{B} T}{\pi m}}$. This quantity is also known as the "impingement rate" in vacuum physics. Note that to calculate the average speed $\bar{v}$ of the Maxwell's velocity distribution, one has to integrate over $v > 0$, $0 < \theta < \pi$, $0 < \varphi < 2\pi$. The momentum transfer to the container wall from particles hitting the area $dA$ with speed $v$ at angle $\theta$ from the normal, in time interval $dt$ is $2mv\cos\theta \times n\,v\cos\theta\,dA\,dt \times f(v)\,dv \times \tfrac{\sin\theta\,d\theta\,d\varphi}{4\pi}$. Integrating this over all appropriate velocities within the constraint $v > 0$, $0 < \theta < \pi/2$, $0 < \varphi < 2\pi$ yields the pressure (consistent with the ideal gas law), $P = \tfrac{1}{3} n m \overline{v^2} = n k_\text{B} T$. If this small area $A$ is punched to become a small hole, the effusive flow rate will be $\Phi = \tfrac{1}{4} n \bar{v} A = \tfrac{nA}{4}\sqrt{\tfrac{8 k_\text{B} T}{\pi m}}$. Combined with the ideal gas law, this yields $\Phi = \tfrac{PA}{\sqrt{2\pi m k_\text{B} T}}$. The above expression is consistent with Graham's law. To calculate the velocity distribution of particles hitting this small area, we must take into account that all the particles with $(v, \theta, \varphi)$ that hit the area within the time interval $dt$ are contained in the tilted pipe with a height of $v\cos\theta\,dt$ and a volume of $v\cos\theta\,dA\,dt$; therefore, compared to the Maxwell distribution, the velocity distribution will have an extra factor of $v\cos\theta$: $f'(v,\theta,\varphi)\,dv\,d\theta\,d\varphi = \lambda\,v\cos\theta\,f(v)\,dv\,\tfrac{\sin\theta\,d\theta\,d\varphi}{4\pi}$, with the constraint $v > 0$, $0 < \theta < \pi/2$, $0 < \varphi < 2\pi$. The constant $\lambda$ can be determined by the normalization condition to be $\lambda = 4/\bar{v}$, and overall $f'(v,\theta,\varphi)\,dv\,d\theta\,d\varphi = \tfrac{1}{2\pi}\left(\tfrac{m}{k_\text{B}T}\right)^2 v^3 e^{-mv^2/2k_\text{B}T}\cos\theta\sin\theta\,dv\,d\theta\,d\varphi$. Speed of molecules From the kinetic energy formula it can be shown that $v_\text{rms} = \sqrt{3 k_\text{B} T/m}$, where $v$ is in m/s, $T$ is in kelvin, and $m$ is the mass of one molecule of gas in kg. The most probable (or mode) speed $v_\text{p} = \sqrt{2 k_\text{B} T/m}$ is 81.6% of the root-mean-square speed $v_\text{rms}$, and the mean (arithmetic mean, or average) speed $\bar{v} = \sqrt{8 k_\text{B} T/(\pi m)}$ is 92.1% of the rms speed (isotropic distribution of speeds). See: Average, Root-mean-square speed Arithmetic mean Mean Mode (statistics) Mean free path In kinetic theory of gases, the mean free path is the average distance traveled by a molecule between successive collisions with other molecules. Let $\sigma$ be the collision cross section of one molecule colliding with another. As in the previous section, the number density $n$ is defined as the number of molecules per (extensive) volume, or $n = N/V$. The collision cross section per volume or collision cross section density is $n\sigma$, and it is related to the mean free path $\ell$ by $\ell = \tfrac{1}{\sqrt{2}\,n\sigma}$. Notice that the unit of the collision cross section per volume $n\sigma$ is reciprocal of length.
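As a concrete illustration of these formulas, the sketch below evaluates the characteristic speeds, the impingement rate, and the mean free path for nitrogen at 300 K and 1 atm, and checks that the gas is indeed dilute. The molecular mass and kinetic diameter are assumed reference values, not figures from this article.

```python
# Characteristic speeds, wall impingement rate, and mean free path for N2
# at 300 K and 1 atm, using the formulas above.
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
T, p = 300.0, 101325.0  # temperature (K) and pressure (Pa)
m = 4.65e-26            # assumed mass of one N2 molecule, kg
d = 3.64e-10            # assumed kinetic diameter of N2, m

v_p   = math.sqrt(2 * k_B * T / m)               # most probable speed
v_avg = math.sqrt(8 * k_B * T / (math.pi * m))   # mean speed
v_rms = math.sqrt(3 * k_B * T / m)               # root-mean-square speed

n = p / (k_B * T)                                # number density from ideal gas law
impingement = n * v_avg / 4                      # wall collisions per area per time
mfp = 1 / (math.sqrt(2) * n * math.pi * d**2)    # mean free path

print(f"v_p/v_rms = {v_p / v_rms:.3f}, v_avg/v_rms = {v_avg / v_rms:.3f}")  # 0.816, 0.921
print(f"impingement rate ~ {impingement:.2e} m^-2 s^-1")                    # ~3e27
print(f"mean free path ~ {mfp * 1e9:.0f} nm")                               # ~70 nm
print(f"mean free path / diameter ~ {mfp / d:.0f}")                         # ~190: dilute gas
```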
Transport properties The kinetic theory of gases deals not only with gases in thermodynamic equilibrium, but also very importantly with gases not in thermodynamic equilibrium. This means using kinetic theory to consider what are known as "transport properties", such as viscosity, thermal conductivity, mass diffusivity and thermal diffusion. In its most basic form, kinetic gas theory is only applicable to dilute gases. The extension of kinetic gas theory to dense gas mixtures, Revised Enskog Theory, was developed in 1983–1987 by E. G. D. Cohen, J. M. Kincaid and M. López de Haro, building on work by H. van Beijeren and M. H. Ernst. Viscosity and kinetic momentum In books on elementary kinetic theory one can find results for dilute gas modeling that are used in many fields. Derivation of the kinetic model for shear viscosity usually starts by considering a Couette flow where two parallel plates are separated by a gas layer. The upper plate is moving at a constant velocity to the right due to a force F. The lower plate is stationary, and an equal and opposite force must therefore be acting on it to keep it at rest. The molecules in the gas layer have a forward velocity component $u$ which increases uniformly with distance $y$ above the lower plate. The non-equilibrium flow is superimposed on a Maxwell–Boltzmann equilibrium distribution of molecular motions. Inside a dilute gas in a Couette flow setup, let $u_0$ be the forward velocity of the gas at a horizontal flat layer (labeled as $y = 0$); $u$ is along the horizontal direction. The number of molecules arriving at the area $dA$ on one side of the gas layer, with speed $v$ at angle $\theta$ from the normal, in time interval $dt$ is $n\,v\cos\theta\,f(v)\,dv\,\tfrac{\sin\theta\,d\theta\,d\varphi}{4\pi}\,dA\,dt$. These molecules made their last collision at $y = \pm\ell\cos\theta$, where $\ell$ is the mean free path. Each molecule will contribute a forward momentum of $p_x^{\pm} = m\left(u_0 \pm \ell\cos\theta\,\tfrac{du}{dy}\right)$, where the plus sign applies to molecules from above, and the minus sign below. Note that the forward velocity gradient $du/dy$ can be considered to be constant over a distance of mean free path. Integrating over all appropriate velocities within the constraint $v > 0$, $0 < \theta < \pi/2$, $0 < \varphi < 2\pi$ yields the forward momentum transfer per unit time per unit area (also known as shear stress), $\tau^{\pm} = \tfrac{1}{4} n m \bar{v}\left(u_0 \pm \tfrac{2}{3}\ell\,\tfrac{du}{dy}\right)$. The net rate of momentum per unit area that is transported across the imaginary surface is thus $\tau = \tau^{+} - \tau^{-} = \tfrac{1}{3} n m \bar{v}\,\ell\,\tfrac{du}{dy}$. Combining the above kinetic equation with Newton's law of viscosity, $\tau = \eta\,\tfrac{du}{dy}$, gives the equation for shear viscosity, which is usually denoted $\eta_0$ when it is a dilute gas: $\eta_0 = \tfrac{1}{3} n m \bar{v}\,\ell$. Combining this equation with the equation for mean free path gives $\eta_0 = \tfrac{m\bar{v}}{3\sqrt{2}\,\sigma}$. The Maxwell–Boltzmann distribution gives the average (equilibrium) molecular speed as $\bar{v} = \tfrac{2}{\sqrt{\pi}}\,v_\text{p}$, where $v_\text{p} = \sqrt{2 k_\text{B} T/m}$ is the most probable speed. We note that $k_\text{B} N_\text{A} = R$ and $M = m N_\text{A}$, and insert the velocity in the viscosity equation above. This gives the well-known equation (with $\sigma$ subsequently estimated below) for shear viscosity for dilute gases: $\eta_0 = \tfrac{2}{3\sqrt{\pi}}\cdot\tfrac{\sqrt{m k_\text{B} T}}{\sigma} = \tfrac{2}{3\sqrt{\pi}}\cdot\tfrac{\sqrt{MRT}}{N_\text{A}\,\sigma}$, where $M$ is the molar mass. The equation above presupposes that the gas density is low (i.e. the pressure is low). This implies that the transport of momentum through the gas due to the translational motion of molecules is much larger than the transport due to momentum being transferred between molecules during collisions. The transfer of momentum between molecules is explicitly accounted for in Revised Enskog theory, which relaxes the requirement of a gas being dilute. The viscosity equation further presupposes that there is only one type of gas molecule, and that the gas molecules are perfectly elastic, hard-core particles of spherical shape.
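To give a feel for the magnitudes involved, the following sketch applies the dilute-gas viscosity formula to argon at 300 K. The atomic mass and kinetic diameter are assumed values; as expected for so simple a hard-sphere model, the estimate lands within roughly a factor of two of typical measured values.

```python
# Hard-sphere estimate of the dilute-gas shear viscosity,
# eta_0 = (2 / (3 sqrt(pi))) * sqrt(m k_B T) / sigma, for argon at 300 K.
import math

k_B = 1.380649e-23
T = 300.0
m = 6.63e-26                 # assumed mass of one Ar atom, kg
d = 3.4e-10                  # assumed kinetic diameter of Ar, m
sigma = math.pi * d**2       # collision cross section

eta0 = (2 / (3 * math.sqrt(math.pi))) * math.sqrt(m * k_B * T) / sigma
print(f"eta_0(Ar, 300 K) ~ {eta0:.2e} Pa s")
# ~1.7e-5 Pa s; a typical measured value is ~2.3e-5 Pa s, so the
# hard-sphere model gets the right order of magnitude.
```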
This assumption of elastic, hard-core spherical molecules, like billiard balls, implies that the collision cross section of one molecule can be estimated by $\sigma = \pi d^2 = \pi (2r)^2$. The radius $r$ is called the collision cross section radius or kinetic radius, and the diameter $d$ is called the collision cross section diameter or kinetic diameter of a molecule in a monomolecular gas. There is no simple general relation between the collision cross section and the hard-core size of the (fairly spherical) molecule. The relation depends on the shape of the potential energy of the molecule. For a real spherical molecule (i.e. a noble gas atom or a reasonably spherical molecule) the interaction potential is more like the Lennard-Jones potential or Morse potential, which have a negative part that attracts the other molecule from distances longer than the hard core radius. The radius for zero Lennard-Jones potential may then be used as a rough estimate for the kinetic radius. However, using this estimate will typically lead to an erroneous temperature dependency of the viscosity. For such interaction potentials, significantly more accurate results are obtained by numerical evaluation of the required collision integrals. The expression for viscosity obtained from Revised Enskog Theory reduces to the above expression in the limit of infinite dilution; it supplements $\eta_0$ with a term that tends to zero in the limit of infinite dilution, accounting for excluded volume, and a term accounting for the transfer of momentum over a non-zero distance between particles during a collision. Thermal conductivity and heat flux Following a similar logic as above, one can derive the kinetic model for thermal conductivity of a dilute gas: Consider two parallel plates separated by a gas layer. Both plates have uniform temperatures, and are so massive compared to the gas layer that they can be treated as thermal reservoirs. The upper plate has a higher temperature than the lower plate. The molecules in the gas layer have a molecular kinetic energy $\varepsilon$ which increases uniformly with distance $y$ above the lower plate. The non-equilibrium energy flow is superimposed on a Maxwell–Boltzmann equilibrium distribution of molecular motions. Let $\varepsilon_0$ be the molecular kinetic energy of the gas at an imaginary horizontal surface inside the gas layer. The number of molecules arriving at an area $dA$ on one side of the gas layer, with speed $v$ at angle $\theta$ from the normal, in time interval $dt$ is $n\,v\cos\theta\,f(v)\,dv\,\tfrac{\sin\theta\,d\theta\,d\varphi}{4\pi}\,dA\,dt$. These molecules made their last collision at a distance $\ell\cos\theta$ above and below the gas layer, and each will contribute a molecular kinetic energy of $\varepsilon^{\pm} = \varepsilon_0 \pm c_v\,\ell\cos\theta\,\tfrac{dT}{dy}$, where $c_v$ is the heat capacity per molecule. Again, the plus sign applies to molecules from above, and the minus sign below. Note that the temperature gradient $dT/dy$ can be considered to be constant over a distance of mean free path. Integrating over all appropriate velocities within the constraint $v > 0$, $0 < \theta < \pi/2$, $0 < \varphi < 2\pi$ yields the energy transfer per unit time per unit area (also known as heat flux), $q_y^{\pm} = \mp\tfrac{1}{4} n \bar{v}\left(\varepsilon_0 \pm \tfrac{2}{3} c_v \ell\,\tfrac{dT}{dy}\right)$. Note that the energy transfer from above is in the $-y$ direction, and therefore the overall minus sign in the equation.
The net heat flux across the imaginary surface is thus $q = q_y^{+} + q_y^{-} = -\tfrac{1}{3} n \bar{v} c_v \ell\,\tfrac{dT}{dy}$. Combining the above kinetic equation with Fourier's law, $q = -\kappa\,\tfrac{dT}{dy}$, gives the equation for thermal conductivity, which is usually denoted $\kappa_0$ when it is a dilute gas: $\kappa_0 = \tfrac{1}{3} n \bar{v} c_v \ell$. Similarly to viscosity, Revised Enskog Theory yields an expression for thermal conductivity that reduces to the above expression in the limit of infinite dilution; it multiplies $\kappa_0$ by a term that tends to unity in the limit of infinite dilution, accounting for excluded volume, and adds a term accounting for the transfer of energy across a non-zero distance between particles during a collision. Diffusion coefficient and diffusion flux Following a similar logic as above, one can derive the kinetic model for mass diffusivity of a dilute gas: Consider a steady diffusion between two regions of the same gas with perfectly flat and parallel boundaries separated by a layer of the same gas. Both regions have uniform number densities, but the upper region has a higher number density than the lower region. In the steady state, the number density at any point is constant (that is, independent of time). However, the number density $n$ in the layer increases uniformly with distance $y$ above the lower plate. The non-equilibrium molecular flow is superimposed on a Maxwell–Boltzmann equilibrium distribution of molecular motions. Let $n_0$ be the number density of the gas at an imaginary horizontal surface inside the layer. The number of molecules arriving at an area $dA$ on one side of the gas layer, with speed $v$ at angle $\theta$ from the normal, in time interval $dt$ is $n^{\mp}\,v\cos\theta\,f(v)\,dv\,\tfrac{\sin\theta\,d\theta\,d\varphi}{4\pi}\,dA\,dt$. These molecules made their last collision at a distance $\ell\cos\theta$ above and below the gas layer, where the local number density is $n^{\pm} = n_0 \pm \ell\cos\theta\,\tfrac{dn}{dy}$. Again, the plus sign applies to molecules from above, and the minus sign below. Note that the number density gradient $dn/dy$ can be considered to be constant over a distance of mean free path. Integrating over all appropriate velocities within the constraint $v > 0$, $0 < \theta < \pi/2$, $0 < \varphi < 2\pi$ yields the molecular transfer per unit time per unit area (also known as diffusion flux), $J_y^{\pm} = \mp\tfrac{1}{4}\bar{v}\left(n_0 \pm \tfrac{2}{3}\ell\,\tfrac{dn}{dy}\right)$. Note that the molecular transfer from above is in the $-y$ direction, and therefore the overall minus sign in the equation. The net diffusion flux across the imaginary surface is thus $J = J_y^{+} + J_y^{-} = -\tfrac{1}{3}\bar{v}\,\ell\,\tfrac{dn}{dy}$. Combining the above kinetic equation with Fick's first law of diffusion, $J = -D\,\tfrac{dn}{dy}$, gives the equation for mass diffusivity, which is usually denoted $D_0$ when it is a dilute gas: $D_0 = \tfrac{1}{3}\bar{v}\,\ell$. The corresponding expression obtained from Revised Enskog Theory multiplies $D_0$ by a factor that tends to unity in the limit of infinite dilution, accounting for excluded volume and the variation of chemical potentials with density. Detailed balance Fluctuation and dissipation The kinetic theory of gases entails that due to the microscopic reversibility of the gas particles' detailed dynamics, the system must obey the principle of detailed balance. Specifically, the fluctuation-dissipation theorem applies to the Brownian motion (or diffusion) and the drag force, which leads to the Einstein–Smoluchowski equation $D = \mu\,k_\text{B} T$, where $D$ is the mass diffusivity, $\mu$ is the "mobility", or the ratio of the particle's terminal drift velocity to an applied force, $\mu = v_\text{d}/F$, $k_\text{B}$ is the Boltzmann constant, and $T$ is the absolute temperature. Note that the mobility $\mu$ can be calculated based on the viscosity of the gas; therefore, the Einstein–Smoluchowski equation also provides a relation between the mass diffusivity and the viscosity of the gas.
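The elementary results $\kappa_0 = \tfrac{1}{3} n \bar{v} c_v \ell$ and $D_0 = \tfrac{1}{3}\bar{v}\,\ell$, together with the Einstein–Smoluchowski relation, can be exercised numerically. The sketch below does so for argon at 300 K and 1 atm; the atomic mass and kinetic diameter are assumed values, and the outputs should be read as order-of-magnitude estimates only.

```python
# Elementary kinetic estimates of thermal conductivity and diffusivity
# for argon at 300 K, 1 atm, plus the Einstein-Smoluchowski mobility.
import math

k_B = 1.380649e-23
T, p = 300.0, 101325.0
m = 6.63e-26                     # assumed Ar atomic mass, kg
d = 3.4e-10                      # assumed Ar kinetic diameter, m

n = p / (k_B * T)                              # number density
v_avg = math.sqrt(8 * k_B * T / (math.pi * m)) # mean speed
mfp = 1 / (math.sqrt(2) * n * math.pi * d**2)  # mean free path
c_v = 1.5 * k_B                  # heat capacity per atom, monatomic gas (D = 3)

kappa0 = n * c_v * v_avg * mfp / 3   # thermal conductivity, W/(m K)
D0 = v_avg * mfp / 3                 # self-diffusion coefficient, m^2/s
mu = D0 / (k_B * T)                  # mobility from D = mu k_B T

print(f"kappa_0 ~ {kappa0:.2e} W/(m K)")  # ~5e-3; measured ~1.8e-2, same order
print(f"D_0     ~ {D0:.2e} m^2/s")        # ~1e-5 m^2/s
print(f"mobility mu = D_0/(k_B T) = {mu:.2e} s/kg")
```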
Onsager reciprocal relations The mathematical similarities between the expressions for shear viscosity, thermal conductivity and diffusion coefficient of the ideal (dilute) gas are not a coincidence; they are a direct result of the Onsager reciprocal relations (i.e. the detailed balance of the reversible dynamics of the particles), when applied to the convection (matter flow due to temperature gradient, and heat flow due to pressure gradient) and advection (matter flow due to the velocity of particles, and momentum transfer due to pressure gradient) of the ideal (dilute) gas.
Physical sciences
Thermodynamics
Physics
64212
https://en.wikipedia.org/wiki/Potassium%20nitrate
Potassium nitrate
Potassium nitrate is a chemical compound with a sharp, salty, bitter taste and the chemical formula KNO3. It is a potassium salt of nitric acid. This salt consists of potassium cations (K+) and nitrate anions (NO3−), and is therefore an alkali metal nitrate. It occurs in nature as a mineral, niter (or nitre outside the US). It is a source of nitrogen, and nitrogen was named after niter. Potassium nitrate is one of several nitrogen-containing compounds collectively referred to as saltpetre (or saltpeter in the US). Major uses of potassium nitrate are in fertilizers, tree stump removal, rocket propellants and fireworks. It is one of the major constituents of traditional gunpowder (black powder). In processed meats, potassium nitrate reacts with hemoglobin and myoglobin to generate the characteristic red color of cured meat; nitrate-cured processed meat has been linked to cancer risk (see below). Etymology Nitre, or potassium nitrate, because of its early and global use and production, has many names. As for nitrate, Egyptian and Hebrew words for it had the consonants n-t-r, suggesting cognation with the Greek nitron, which was Latinised to nitrum or nitrium. Thence Old French had niter and Middle English nitre. By the 15th century, Europeans referred to it as saltpetre, specifically Indian saltpetre (Chilean saltpetre is sodium nitrate), and later as nitrate of potash, as the chemistry of the compound was more fully understood. The Arabs called it "Chinese snow" as well as bārūd, a term of uncertain origin that later came to mean gunpowder. It was called "Chinese salt" by the Iranians/Persians, or "salt from Chinese salt marshes". The Tiangong Kaiwu, published in 1637 in the late Ming dynasty, detailed the production of gunpowder and other useful products from nature. Historical production From mineral sources In Mauryan India, saltpeter manufacturers formed the Nuniya and Labana castes. Saltpeter finds mention in Kautilya's Arthashastra (compiled 300 BC – 300 AD), which mentions using its poisonous smoke as a weapon of war, although its use for propulsion did not appear until medieval times. A purification process for potassium nitrate was outlined in 1270 by the chemist and engineer Hasan al-Rammah of Syria in his book al-Furusiyya wa al-Manasib al-Harbiyya (The Book of Military Horsemanship and Ingenious War Devices). In this book, al-Rammah describes first the purification of barud (crude saltpeter mineral) by boiling it with minimal water and using only the hot solution, then the use of potassium carbonate (in the form of wood ashes) to remove calcium and magnesium by precipitation of their carbonates from this solution, leaving a solution of purified potassium nitrate, which could then be dried. This was used for the manufacture of gunpowder and explosive devices. The terminology used by al-Rammah indicated the gunpowder he wrote about originated in China. At least as far back as 1845, nitratite deposits were exploited in Chile and California. From caves Major natural sources of potassium nitrate were the deposits crystallizing from cave walls and the accumulations of bat guano in caves. Extraction is accomplished by immersing the guano in water for a day, filtering, and harvesting the crystals in the filtered water. Traditionally, guano was the source used in Laos for the manufacture of gunpowder for Bang Fai rockets. Calcium nitrate, or lime saltpetre, formed on the walls of stables from the urine of barnyard animals. Nitraries Potassium nitrate was produced in a nitrary or "saltpetre works".
The process involved burial of excrement (human or animal) in a field beside the nitrary, watering it, and waiting until leaching allowed saltpeter to migrate to the surface by efflorescence. Operators then gathered the resulting powder and transported it to be concentrated by boiling in the boiler plant. According to "De Alchimia" (in three manuscripts of Michael Scot, 1180–1236), during the thirteenth century (and beyond) the only supplies of saltpeter across Christian Europe were "Montepellusanus" and a source "found in Spain in Aragon in a certain mountain near the sea". In 1561, Elizabeth I, Queen of England and Ireland, who was at war with Philip II of Spain, became unable to import saltpeter (of which the Kingdom of England had no home production), and had to pay "300 pounds gold" to the German captain Gerrard Honrik for the manual "Instructions for making saltpeter to growe" (the secret of the "Feuerwerkbuch": the nitraries). Nitre bed A nitre bed is a similar process used to produce nitrate from excrement. Unlike the leaching-based process of the nitrary, however, one mixes the excrement with soil and waits for soil microbes to convert amino-nitrogen into nitrates by nitrification. The nitrates are extracted from the soil with water and then purified into saltpeter by adding wood ash. The process was discovered in the early 15th century and was very widely used until the Chilean mineral deposits were found. The Confederate side of the American Civil War had a significant shortage of saltpeter. As a result, the Nitre and Mining Bureau was set up to encourage local production, including by nitre beds and by providing excrement to government nitraries. On November 13, 1862, the government advertised in the Charleston Daily Courier for 20 or 30 "able bodied Negro men" to work in the new nitre beds at Ashley Ferry, S.C. The nitre beds were large rectangles of rotted manure and straw, moistened weekly with urine, "dung water", and liquid from privies, cesspools and drains, and turned over regularly. The National Archives published payroll records that account for more than 29,000 people compelled to such labor in the state of Virginia. The South was so desperate for saltpeter for gunpowder that one Alabama official reportedly placed a newspaper ad asking that the contents of chamber pots be saved for collection. In South Carolina, in April 1864, the Confederate government forced 31 enslaved people to work at the Ashley Ferry Nitre Works, outside Charleston. Perhaps the most exhaustive discussion of niter-bed production is the 1862 LeConte text. He was writing with the express purpose of increasing production in the Confederate States to support their needs during the American Civil War. Since he was calling for the assistance of rural farming communities, the descriptions and instructions are both simple and explicit. He details the "French method", along with several variations, as well as a "Swiss method". N.B. Many references have been made to a method using only straw and urine, but there is no such method in this work. French method Turgot and Lavoisier created the Régie des Poudres et Salpêtres a few years before the French Revolution. Niter-beds were prepared by mixing manure with either mortar or wood ashes, common earth and organic materials such as straw, to give porosity to a large compost pile.
The heap was usually under a cover from the rain, kept moist with urine, turned often to accelerate the decomposition, then finally leached with water after approximately one year, to remove the soluble calcium nitrate, which was then converted to potassium nitrate by filtering through potash. Swiss method Joseph LeConte describes a process using only urine and not dung, referring to it as the Swiss method. Urine is collected directly, in a sandpit under a stable. The sand itself is dug out and leached for nitrates, which are then converted to potassium nitrate using potash, as above. From nitric acid From 1903 until the World War I era, potassium nitrate for black powder and fertilizer was produced on an industrial scale from nitric acid produced using the Birkeland–Eyde process, which used an electric arc to oxidize nitrogen from the air. During World War I the newly industrialized Haber process (1913) was combined with the Ostwald process after 1915, allowing Germany to produce nitric acid for the war after being cut off from its supplies of mineral sodium nitrates from Chile (see nitratite). Modern production Potassium nitrate can be made by combining ammonium nitrate and potassium hydroxide: NH4NO3 + KOH → KNO3 + NH3 + H2O. An alternative way of producing potassium nitrate without a by-product of ammonia is to combine ammonium nitrate, found in instant ice packs, and potassium chloride, easily obtained as a sodium-free salt substitute: NH4NO3 + KCl → KNO3 + NH4Cl. Potassium nitrate can also be produced by neutralizing nitric acid with potassium hydroxide; this reaction is highly exothermic: KOH + HNO3 → KNO3 + H2O. On an industrial scale it is prepared by the double displacement reaction between sodium nitrate and potassium chloride: NaNO3 + KCl → KNO3 + NaCl. Properties Potassium nitrate has an orthorhombic crystal structure at room temperature, which transforms to a trigonal system on heating; on cooling, another trigonal phase forms over an intermediate temperature range before the room-temperature structure is recovered. Sodium nitrate is isomorphous with calcite, the most stable form of calcium carbonate, whereas room-temperature potassium nitrate is isomorphous with aragonite, a slightly less stable polymorph of calcium carbonate. The difference is attributed to the similarity in size between the nitrate (NO3−) and carbonate (CO32−) ions and the fact that the potassium ion (K+) is larger than the sodium (Na+) and calcium (Ca2+) ions. In the room-temperature structure of potassium nitrate, each potassium ion is surrounded by 6 nitrate ions. In turn, each nitrate ion is surrounded by 6 potassium ions. Potassium nitrate is moderately soluble in water, but its solubility increases with temperature. The aqueous solution is almost neutral, exhibiting pH 6.2 for a 10% solution of commercial powder. It is not very hygroscopic, absorbing about 0.03% water in 80% relative humidity over 50 days. It is insoluble in alcohol and is not poisonous; it can react explosively with reducing agents, but it is not explosive on its own. Thermal decomposition At elevated temperatures, potassium nitrate reaches a temperature-dependent equilibrium with potassium nitrite: 2 KNO3 ⇌ 2 KNO2 + O2. Uses Potassium nitrate has a wide variety of uses, largely as a source of nitrate. Nitric acid production Historically, nitric acid was produced by combining sulfuric acid with nitrates such as saltpeter. In modern times this is reversed: nitrates are produced from nitric acid produced via the Ostwald process. Oxidizer The most famous use of potassium nitrate is probably as the oxidizer in blackpowder. From the most ancient times until the late 1880s, blackpowder provided the explosive power for all the world's firearms.
After that time, small arms and large artillery increasingly began to depend on cordite, a smokeless powder. Blackpowder remains in use today in black powder rocket motors, but also in combination with other fuels like sugars in "rocket candy" (a popular amateur rocket propellant). It is also used in fireworks such as smoke bombs. It is also added to cigarettes to maintain an even burn of the tobacco and is used to ensure complete combustion of paper cartridges for cap and ball revolvers. It can also be heated to several hundred degrees to be used for niter bluing, which is less durable than other forms of protective oxidation, but allows for specific and often beautiful coloration of steel parts, such as screws, pins, and other small parts of firearms. Meat processing Potassium nitrate has been a common ingredient of salted meat since antiquity or the Middle Ages. The widespread adoption of nitrate use is more recent and is linked to the development of large-scale meat processing. The use of potassium nitrate has been mostly discontinued because it gives slow and inconsistent results compared with sodium nitrite preparations such as "Prague powder" or pink "curing salt". Even so, potassium nitrate is still used in some food applications, such as salami, dry-cured ham, charcuterie, and (in some countries) in the brine used to make corned beef (sometimes together with sodium nitrite). In the Shetland Islands (UK) it is used in the curing of mutton to make reestit mutton, a local delicacy. When used as a food additive in the European Union, the compound is referred to as E252; it is also approved for use as a food additive in the United States and Australia and New Zealand (where it is listed under its INS number 252). Possible cancer risk Since October 2015, the WHO has classified processed meat as a Group 1 carcinogen (based on epidemiological studies, convincingly carcinogenic to humans). In April 2023 the French Court of Appeals of Limoges confirmed that the food-watch NGO Yuka was legally legitimate in describing the nitrite and nitrate curing additives E249 to E252 (potassium nitrate is E252) as a "cancer risk", and thus rejected an appeal by the French industry against the organisation. Fertilizer Potassium nitrate is used in fertilizers as a source of nitrogen and potassium, two of the macronutrients for plants. When used by itself, it has an NPK rating of 13-0-44. Pharmacology It is used in some toothpastes for sensitive teeth; it has been used for this purpose since 1980, although the efficacy is not strongly supported by the literature. It was used historically to treat asthma, and has been included in some toothpastes marketed to relieve asthma symptoms. In Thailand it is used as the main ingredient in kidney tablets to relieve the symptoms of cystitis, pyelitis and urethritis. It was once used as a hypotensive to combat high blood pressure. Other uses Used as an electrolyte in a salt bridge. Active ingredient of condensed aerosol fire suppression systems: when burned, it reacts with the free radicals of a fire's flame, producing potassium carbonate. Works as an aluminium cleaner. Component (usually about 98%) of some tree stump removal products; it accelerates the natural decomposition of the stump by supplying nitrogen for the fungi attacking the wood of the stump. Used in heat treatment of metals as a medium-temperature molten salt bath, usually in combination with sodium nitrite; a similar bath is used to produce a durable blue/black finish typically seen on firearms. Its oxidizing quality, water solubility, and low cost make it an ideal short-term rust inhibitor.
In glass toughening, a molten potassium nitrate bath is used to increase glass strength and scratch-resistance. To induce flowering of mango trees in the Philippines. Thermal storage medium in power generation systems. Sodium and potassium nitrate salts are stored in a molten state with the solar energy collected by the heliostats at the Gemasolar Thermosolar Plant. Ternary salts, with the addition of calcium nitrate or lithium nitrate, have been found to improve the heat storage capacity in the molten salts. As a source of potassium ions for exchange with sodium ions in chemically strengthened glass. As an oxidizer in the model rocket fuel called rocket candy. As a constituent in homemade smoke bombs. In folklore and popular culture Potassium nitrate was once thought to induce impotence, and is still rumored to be in institutional food (such as military fare). There is no scientific evidence for such properties. In Bank Shot, El (Joanna Cassidy) propositions Walter Ballantine (George C. Scott), who tells her that he has been fed saltpeter in prison. In One Flew Over the Cuckoo's Nest, Randle is asked by the nurses to take his medications, but not knowing what they are, he mentions he does not want anyone to "slip me saltpeter". He then proceeds to imitate the motions of masturbation. In 1776, John Adams asks his wife Abigail to make saltpeter for the Continental Army. She, eventually, is able to do so in exchange for pins for sewing. In the Star Trek episode "Arena", Captain Kirk injures a Gorn using a rudimentary cannon that he constructs using potassium nitrate as a key ingredient of gunpowder. In 21 Jump Street, Jenko, played by Channing Tatum, gives a rhyming presentation about potassium nitrate for his chemistry class. In Eating Raoul, Paul hires a dominatrix to impersonate a nurse and trick Raoul into consuming saltpeter in a ploy to reduce his sexual appetite for his wife. In The Simpsons episode "El Viaje Misterioso de Nuestro Jomer (The Mysterious Voyage of Our Homer)", Mr. Burns is seen pouring saltpeter into his chili entry, titled Old Elihu's Yale-Style Saltpeter Chili. In the Sharpe novel series by Bernard Cornwell, numerous mentions are made of an advantageous supply of saltpeter from India being a crucial component of British military supremacy in the Napoleonic Wars. In Sharpe's Havoc, the French Captain Argenton laments that France needs to scrape its supply from cesspits. In the Dr. Stone anime and manga series, the struggle for control over a natural saltpeter source from guano features prominently in the plot. In farming lore from the Corn Belt of the 1800s, drought-killed corn in manured fields could accumulate saltpeter to the extent that upon opening the stalk for examination it would "fall as a fine powder upon the table". In the Slovenian short story Martin Krpan from Vrh pri Sveti Trojici, the titular character and Slovene folk hero Martin Krpan illegally smuggles "English salt" for a living. The exact nature of "English salt" is a matter of debate, but it may have been a euphemism for potassium nitrate (saltpeter) due to its role in manufacturing gunpowder. In Dexter: Original Sin's first episode, Dexter's first victim uses potassium nitrate to kill her victims.
Physical sciences
Salts
null
64219
https://en.wikipedia.org/wiki/Bernoulli%27s%20principle
Bernoulli's principle
Bernoulli's principle is a key concept in fluid dynamics that relates pressure, density, speed and height. Bernoulli's principle states that an increase in the speed of a parcel of fluid occurs simultaneously with a decrease in either the pressure or the height above a datum. The principle is named after the Swiss mathematician and physicist Daniel Bernoulli, who published it in his book Hydrodynamica in 1738. Although Bernoulli deduced that pressure decreases when the flow speed increases, it was Leonhard Euler in 1752 who derived Bernoulli's equation in its usual form. Bernoulli's principle can be derived from the principle of conservation of energy. This states that, in a steady flow, the sum of all forms of energy in a fluid is the same at all points that are free of viscous forces. This requires that the sum of kinetic energy, potential energy and internal energy remains constant. Thus an increase in the speed of the fluid, implying an increase in its kinetic energy, occurs with a simultaneous decrease in (the sum of) its potential energy (including the static pressure) and internal energy. If the fluid is flowing out of a reservoir, the sum of all forms of energy is the same because in a reservoir the energy per unit volume (the sum of the pressure $p$ and the gravitational potential $\rho g h$) is the same everywhere. Bernoulli's principle can also be derived directly from Isaac Newton's second law of motion. When fluid is flowing horizontally from a region of high pressure to a region of low pressure, there is more pressure behind than in front. This gives a net force on the volume, accelerating it along the streamline. Fluid particles are subject only to pressure and their own weight. If a fluid is flowing horizontally and along a section of a streamline, where the speed increases it can only be because the fluid on that section has moved from a region of higher pressure to a region of lower pressure; and if its speed decreases, it can only be because it has moved from a region of lower pressure to a region of higher pressure. Consequently, within a fluid flowing horizontally, the highest speed occurs where the pressure is lowest, and the lowest speed occurs where the pressure is highest. Bernoulli's principle is only applicable for isentropic flows: when the effects of irreversible processes (like turbulence) and non-adiabatic processes (e.g. thermal radiation) are small and can be neglected. However, the principle can be applied to various types of flow within these bounds, resulting in various forms of Bernoulli's equation. The simple form of Bernoulli's equation is valid for incompressible flows (e.g. most liquid flows and gases moving at low Mach number). More advanced forms may be applied to compressible flows at higher Mach numbers. Incompressible flow equation In most flows of liquids, and of gases at low Mach number, the density of a fluid parcel can be considered to be constant, regardless of pressure variations in the flow. Therefore, the fluid can be considered to be incompressible, and these flows are called incompressible flows. Bernoulli performed his experiments on liquids, so his equation in its original form is valid only for incompressible flow.
A common form of Bernoulli's equation is: $\tfrac{v^2}{2} + gz + \tfrac{p}{\rho} = \text{constant}$ (1), where: $v$ is the fluid flow speed at a point, $g$ is the acceleration due to gravity, $z$ is the elevation of the point above a reference plane (with the positive $z$-direction pointing upward, opposite to the gravitational acceleration), $p$ is the static pressure at the chosen point, and $\rho$ is the density of the fluid at all points in the fluid. Bernoulli's equation and the Bernoulli constant are applicable throughout any region of flow where the energy per unit mass is uniform. Because the energy per unit mass of liquid in a well-mixed reservoir is uniform throughout, Bernoulli's equation can be used to analyze the fluid flow everywhere in that reservoir (including pipes or flow fields that the reservoir feeds) except where viscous forces dominate and erode the energy per unit mass. The following assumptions must be met for this Bernoulli equation to apply: the flow must be steady, that is, the flow parameters (velocity, density, etc.) at any point cannot change with time; the flow must be incompressible (even though pressure varies, the density must remain constant along a streamline); friction by viscous forces must be negligible. For conservative force fields (not limited to the gravitational field), Bernoulli's equation can be generalized as: $\tfrac{v^2}{2} + \Psi + \tfrac{p}{\rho} = \text{constant}$, where $\Psi$ is the force potential at the point considered. For example, for the Earth's gravity $\Psi = gz$. By multiplying with the fluid density $\rho$, equation (1) can be rewritten as: $\tfrac{1}{2}\rho v^2 + \rho g z + p = \text{constant}$, or: $q + \rho g h = p_0 + \rho g z = \text{constant}$, where $q = \tfrac{1}{2}\rho v^2$ is dynamic pressure, $h = z + \tfrac{p}{\rho g}$ is the piezometric head or hydraulic head (the sum of the elevation $z$ and the pressure head) and $p_0 = p + q$ is the stagnation pressure (the sum of the static pressure $p$ and dynamic pressure $q$). The constant in the Bernoulli equation can be normalized. A common approach is in terms of total head or energy head $H$: $H = z + \tfrac{p}{\rho g} + \tfrac{v^2}{2g} = h + \tfrac{v^2}{2g}$. The above equations suggest there is a flow speed at which pressure is zero, and at even higher speeds the pressure is negative. Most often, gases and liquids are not capable of negative absolute pressure, or even zero pressure, so clearly Bernoulli's equation ceases to be valid before zero pressure is reached. In liquids, when the pressure becomes too low, cavitation occurs. The above equations use a linear relationship between flow speed squared and pressure. At higher flow speeds in gases, or for sound waves in liquid, the changes in mass density become significant so that the assumption of constant density is invalid. Simplified form In many applications of Bernoulli's equation, the change in the $\rho g z$ term is so small compared with the other terms that it can be ignored. For example, in the case of aircraft in flight, the change in height $z$ is so small the $\rho g z$ term can be omitted. This allows the above equation to be presented in the following simplified form: $p + q = p_0$, where $p_0$ is called total pressure, and $q$ is dynamic pressure. Many authors refer to the pressure $p$ as static pressure to distinguish it from total pressure $p_0$ and dynamic pressure $q$. In Aerodynamics, L.J. Clancy writes: "To distinguish it from the total and dynamic pressures, the actual pressure of the fluid, which is associated not with its motion but with its state, is often referred to as the static pressure, but where the term pressure alone is used it refers to this static pressure." The simplified form of Bernoulli's equation can be summarized in the following memorable word equation: static pressure + dynamic pressure = total pressure. Every point in a steadily flowing fluid, regardless of the fluid speed at that point, has its own unique static pressure $p$ and dynamic pressure $q$. Their sum $p + q$ is defined to be the total pressure $p_0$.
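As a minimal worked example of the simplified form $p + q = p_0$, the sketch below recovers an airspeed from a hypothetical pitot-static pressure pair, assuming incompressible sea-level air; the pressure values are illustrative, not measured data.

```python
# Recover flow speed from a pitot-static measurement using p + q = p_0.
import math

rho = 1.225            # assumed sea-level air density, kg/m^3
p_total = 102300.0     # stagnation (pitot) pressure, Pa (illustrative)
p_static = 101325.0    # static-port pressure, Pa (illustrative)

q = p_total - p_static          # dynamic pressure, q = rho v^2 / 2
v = math.sqrt(2 * q / rho)      # flow speed from Bernoulli
print(f"dynamic pressure q = {q:.0f} Pa -> airspeed v = {v:.1f} m/s")  # ~40 m/s
```

At this speed the Mach number is well below 0.3, so the incompressible form is a reasonable assumption.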
The significance of Bernoulli's principle can now be summarized as "total pressure is constant in any region free of viscous forces". If the fluid flow is brought to rest at some point, this point is called a stagnation point, and at this point the static pressure is equal to the stagnation pressure. If the fluid flow is irrotational, the total pressure is uniform and Bernoulli's principle can be summarized as "total pressure is constant everywhere in the fluid flow". It is reasonable to assume that irrotational flow exists in any situation where a large body of fluid is flowing past a solid body. Examples are aircraft in flight and ships moving in open bodies of water. However, Bernoulli's principle importantly does not apply in the boundary layer such as in flow through long pipes. Unsteady potential flow The Bernoulli equation for unsteady potential flow is used in the theory of ocean surface waves and acoustics. For an irrotational flow, the flow velocity can be described as the gradient $\nabla\varphi$ of a velocity potential $\varphi$. In that case, and for a constant density $\rho$, the momentum equations of the Euler equations can be integrated to: $\tfrac{\partial\varphi}{\partial t} + \tfrac{1}{2}v^2 + \tfrac{p}{\rho} + gz = f(t)$, which is a Bernoulli equation valid also for unsteady (or time dependent) flows. Here $\partial\varphi/\partial t$ denotes the partial derivative of the velocity potential $\varphi$ with respect to time $t$, and $v = |\nabla\varphi|$ is the flow speed. The function $f(t)$ depends only on time and not on position in the fluid. As a result, the Bernoulli equation at some moment $t$ applies in the whole fluid domain. This is also true for the special case of a steady irrotational flow, in which case $f$ and $\partial\varphi/\partial t$ are constants so equation (1) can be applied in every point of the fluid domain. Further $f(t)$ can be made equal to zero by incorporating it into the velocity potential using the transformation: $\Phi = \varphi - \int_{t_0}^{t} f(\tau)\,d\tau$, resulting in: $\tfrac{\partial\Phi}{\partial t} + \tfrac{1}{2}v^2 + \tfrac{p}{\rho} + gz = 0$. Note that the relation of the potential to the flow velocity is unaffected by this transformation: $\nabla\Phi = \nabla\varphi$. The Bernoulli equation for unsteady potential flow also appears to play a central role in Luke's variational principle, a variational description of free-surface flows using the Lagrangian mechanics. Compressible flow equation Bernoulli developed his principle from observations on liquids, and Bernoulli's equation is valid for ideal fluids: those that are incompressible, irrotational, inviscid, and subjected to conservative forces. It is sometimes valid for the flow of gases: provided that there is no transfer of kinetic or potential energy from the gas flow to the compression or expansion of the gas. If both the gas pressure and volume change simultaneously, then work will be done on or by the gas. In this case, Bernoulli's equation (in its incompressible flow form) cannot be assumed to be valid. However, if the gas process is entirely isobaric, or isochoric, then no work is done on or by the gas (so the simple energy balance is not upset). According to the gas law, an isobaric or isochoric process is ordinarily the only way to ensure constant density in a gas. Also the gas density will be proportional to the ratio of pressure and absolute temperature; however, this ratio will vary upon compression or expansion, no matter what non-zero quantity of heat is added or removed. The only exception is if the net heat transfer is zero, as in a complete thermodynamic cycle or in an individual isentropic (frictionless adiabatic) process, and even then this reversible process must be reversed, to restore the gas to the original pressure and specific volume, and thus density. Only then is the original, unmodified Bernoulli equation applicable.
In this case the equation can be used if the flow speed of the gas is sufficiently below the speed of sound, such that the variation in density of the gas (due to this effect) along each streamline can be ignored. Adiabatic flow at less than Mach 0.3 is generally considered to be slow enough. It is possible to use the fundamental principles of physics to develop similar equations applicable to compressible fluids. There are numerous equations, each tailored for a particular application, but all are analogous to Bernoulli's equation and all rely on nothing more than the fundamental principles of physics such as Newton's laws of motion or the first law of thermodynamics. Compressible flow in fluid dynamics For a compressible fluid, with a barotropic equation of state, and under the action of conservative forces, $\tfrac{v^2}{2} + \int_{p_1}^{p} \tfrac{dp'}{\rho(p')} + \Psi = \text{constant}$ (along a streamline), where: $p$ is the pressure, $\rho$ is the density ($\rho(p)$ indicates that it is a function of pressure), $v$ is the flow speed, and $\Psi$ is the potential associated with the conservative force field, often the gravitational potential. In engineering situations, elevations are generally small compared to the size of the Earth, and the time scales of fluid flow are small enough to consider the equation of state as adiabatic. In this case, the above equation for an ideal gas becomes: $\tfrac{v^2}{2} + gz + \left(\tfrac{\gamma}{\gamma - 1}\right)\tfrac{p}{\rho} = \text{constant}$ (along a streamline), where, in addition to the terms listed above: $\gamma$ is the ratio of the specific heats of the fluid, $g$ is the acceleration due to gravity, and $z$ is the elevation of the point above a reference plane. In many applications of compressible flow, changes in elevation are negligible compared to the other terms, so the $gz$ term can be omitted. A very useful form of the equation is then: $\tfrac{v^2}{2} + \left(\tfrac{\gamma}{\gamma - 1}\right)\tfrac{p}{\rho} = \left(\tfrac{\gamma}{\gamma - 1}\right)\tfrac{p_0}{\rho_0}$, where: $p_0$ is the total pressure and $\rho_0$ is the total density. Compressible flow in thermodynamics The most general form of the equation, suitable for use in thermodynamics in case of (quasi) steady flow, is: $\tfrac{v^2}{2} + \Psi + w = b$. Here $w$ is the enthalpy per unit mass (also known as specific enthalpy), which is also often written as $h$ (not to be confused with "head" or "height"). Note that $w = e + \tfrac{p}{\rho}$, where $e$ is the thermodynamic energy per unit mass, also known as the specific internal energy. So, for constant internal energy the equation reduces to the incompressible-flow form. The constant on the right-hand side is often called the Bernoulli constant and denoted $b$. For steady inviscid adiabatic flow with no additional sources or sinks of energy, $b$ is constant along any given streamline. More generally, when $b$ may vary along streamlines, it still proves a useful parameter, related to the "head" of the fluid (see below). When the change in $\Psi$ can be ignored, a very useful form of this equation is: $\tfrac{v^2}{2} + w = w_0$, where $w_0$ is total enthalpy. For a calorically perfect gas such as an ideal gas, the enthalpy is directly proportional to the temperature, and this leads to the concept of the total (or stagnation) temperature. When shock waves are present, in a reference frame in which the shock is stationary and the flow is steady, many of the parameters in the Bernoulli equation suffer abrupt changes in passing through the shock. The Bernoulli parameter remains unaffected. An exception to this rule is radiative shocks, which violate the assumptions leading to the Bernoulli equation, namely the lack of additional sinks or sources of energy. Unsteady potential flow For a compressible fluid, with a barotropic equation of state, the unsteady momentum conservation equation reads $\tfrac{\partial \vec{v}}{\partial t} + (\vec{v}\cdot\nabla)\vec{v} = -\nabla\Psi - \nabla\!\left(\int_{p_1}^{p} \tfrac{dp'}{\rho(p')}\right)$. With the irrotational assumption, namely, the flow velocity can be described as the gradient $\nabla\varphi$ of a velocity potential $\varphi$.
The unsteady momentum conservation equation then becomes $\tfrac{\partial\nabla\varphi}{\partial t} + \nabla\!\left(\tfrac{|\nabla\varphi|^2}{2}\right) = -\nabla\Psi - \nabla\!\left(\int_{p_1}^{p}\tfrac{dp'}{\rho(p')}\right)$, which leads to $\tfrac{\partial\varphi}{\partial t} + \tfrac{|\nabla\varphi|^2}{2} + \Psi + \int_{p_1}^{p}\tfrac{dp'}{\rho(p')} = \text{constant}$. In this case, the above equation for isentropic flow becomes: $\tfrac{\partial\varphi}{\partial t} + \tfrac{|\nabla\varphi|^2}{2} + \Psi + \left(\tfrac{\gamma}{\gamma-1}\right)\tfrac{p}{\rho} = \text{constant}$. Applications In modern everyday life there are many observations that can be successfully explained by application of Bernoulli's principle, even though no real fluid is entirely inviscid, and a small viscosity often has a large effect on the flow. Bernoulli's principle can be used to calculate the lift force on an airfoil, if the behaviour of the fluid flow in the vicinity of the foil is known. For example, if the air flowing past the top surface of an aircraft wing is moving faster than the air flowing past the bottom surface, then Bernoulli's principle implies that the pressure on the surfaces of the wing will be lower above than below. This pressure difference results in an upwards lifting force. Whenever the distribution of speed past the top and bottom surfaces of a wing is known, the lift forces can be calculated (to a good approximation) using Bernoulli's equations, which were established by Bernoulli over a century before the first man-made wings were used for the purpose of flight. The carburetor used in many reciprocating engines contains a venturi to create a region of low pressure to draw fuel into the carburetor and mix it thoroughly with the incoming air. The low pressure in the venturi can be explained by Bernoulli's principle: in the narrow throat the air is moving at its fastest speed and therefore it is at its lowest pressure. A basic carburetor uses this difference in pressure between the throat and the local air pressure in the float bowl to force the fuel to flow. An injector on a steam locomotive or a static boiler works on the same principle. The pitot tube and static port on an aircraft are used to determine the airspeed of the aircraft. These two devices are connected to the airspeed indicator, which determines the dynamic pressure of the airflow past the aircraft. Bernoulli's principle is used to calibrate the airspeed indicator so that it displays the indicated airspeed appropriate to the dynamic pressure. A De Laval nozzle utilizes Bernoulli's principle to create a force by turning pressure energy generated by the combustion of propellants into velocity. This then generates thrust by way of Newton's third law of motion. The flow speed of a fluid can be measured using a device such as a Venturi meter or an orifice plate, which can be placed into a pipeline to reduce the diameter of the flow. For a horizontal device, the continuity equation shows that for an incompressible fluid, the reduction in diameter will cause an increase in the fluid flow speed. Subsequently, Bernoulli's principle then shows that there must be a decrease in the pressure in the reduced diameter region. This phenomenon is known as the Venturi effect. The maximum possible drain rate for a tank with a hole or tap at the base can be calculated directly from Bernoulli's equation and is found to be proportional to the square root of the height of the fluid in the tank. This is Torricelli's law, which is compatible with Bernoulli's principle (a short numerical sketch follows these examples). Increased viscosity lowers this drain rate; this is reflected in the discharge coefficient, which is a function of the Reynolds number and the shape of the orifice. The Bernoulli grip relies on this principle to create a non-contact adhesive force between a surface and the gripper.
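As promised above, here is a minimal sketch of Torricelli's law, with an assumed discharge coefficient standing in for the viscous losses mentioned in the text; the tank height, hole area, and coefficient are illustrative assumptions.

```python
# Torricelli's law from Bernoulli's equation: outflow speed v = sqrt(2 g h)
# for a tank draining through a small hole at its base.
import math

g = 9.81       # gravitational acceleration, m/s^2
h = 2.0        # height of fluid above the hole, m (illustrative)
C_d = 0.6      # assumed discharge coefficient for a sharp-edged orifice
A = 1e-4       # hole area, m^2 (1 cm^2, illustrative)

v_ideal = math.sqrt(2 * g * h)   # ideal (inviscid) outflow speed
Q = C_d * A * v_ideal            # volumetric drain rate with losses, m^3/s
print(f"ideal speed = {v_ideal:.2f} m/s, drain rate = {Q * 1000:.3f} L/s")
```

Note the square-root dependence on the fluid height $h$, exactly as stated above; the discharge coefficient only rescales the ideal rate.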
During a cricket match, bowlers continually polish one side of the ball. After some time, one side is quite rough and the other is still smooth. Hence, when the ball is bowled and passes through air, the speed on one side of the ball is faster than on the other, and this results in a pressure difference between the sides; this leads to the ball rotating ("swinging") while travelling through the air, giving advantage to the bowlers. Misconceptions Airfoil lift One of the most common erroneous explanations of aerodynamic lift asserts that the air must traverse the upper and lower surfaces of a wing in the same amount of time, implying that since the upper surface presents a longer path the air must be moving over the top of the wing faster than over the bottom. Bernoulli's principle is then cited to conclude that the pressure on top of the wing must be lower than on the bottom. Equal transit time applies to the flow around a body generating no lift, but there is no physical principle that requires equal transit time in cases of bodies generating lift. In fact, theory predicts – and experiments confirm – that the air traverses the top surface of a body experiencing lift in a shorter time than it traverses the bottom surface; the explanation based on equal transit time is false. While the equal-time explanation is false, it is not the Bernoulli principle that is false, because this principle is well established; Bernoulli's equation is used correctly in common mathematical treatments of aerodynamic lift. Common classroom demonstrations There are several common classroom demonstrations that are sometimes incorrectly explained using Bernoulli's principle. One involves holding a piece of paper horizontally so that it droops downward and then blowing over the top of it. As the demonstrator blows over the paper, the paper rises. It is then asserted that this is because "faster moving air has lower pressure". One problem with this explanation can be seen by blowing along the bottom of the paper: if the deflection were caused simply by faster moving air, then the paper should deflect downward; but the paper deflects upward regardless of whether the faster moving air is on the top or the bottom. Another problem is that when the air leaves the demonstrator's mouth it has the same pressure as the surrounding air; the air does not have lower pressure just because it is moving: in the demonstration, the static pressure of the air leaving the demonstrator's mouth is equal to the pressure of the surrounding air. A third problem is that it is false to make a connection between the flow on the two sides of the paper using Bernoulli's equation, since the air above and below are different flow fields and Bernoulli's principle only applies within a flow field. As the wording of the principle can change its implications, stating the principle correctly is important. What Bernoulli's principle actually says is that within a flow of constant energy, when fluid flows through a region of lower pressure it speeds up, and vice versa. Thus, Bernoulli's principle concerns itself with changes in speed and changes in pressure within a flow field. It cannot be used to compare different flow fields. A correct explanation of why the paper rises would observe that the plume follows the curve of the paper and that a curved streamline will develop a pressure gradient perpendicular to the direction of flow, with the lower pressure on the inside of the curve.
Bernoulli's principle predicts that the decrease in pressure is associated with an increase in speed; in other words, as the air passes over the paper, it speeds up and moves faster than it was moving when it left the demonstrator's mouth. But this is not apparent from the demonstration. Other common classroom demonstrations, such as blowing between two suspended spheres, inflating a large bag, or suspending a ball in an airstream, are sometimes explained in a similarly misleading manner by saying "faster moving air has lower pressure".
Physical sciences
Fluid mechanics
null
64343
https://en.wikipedia.org/wiki/Moir%C3%A9%20pattern
Moiré pattern
In mathematics, physics, and art, moiré patterns or moiré fringes are large-scale interference patterns that can be produced when a partially opaque ruled pattern with transparent gaps is overlaid on another similar pattern. For the moiré interference pattern to appear, the two patterns must not be completely identical, but rather displaced, rotated, or have slightly different pitch. Moiré patterns appear in many situations. In printing, the printed pattern of dots can interfere with the image. In television and digital photography, a pattern on an object being photographed can interfere with the shape of the light sensors to generate unwanted artifacts. They are also sometimes created deliberately; in micrometers, they are used to amplify the effects of very small movements. In physics, its manifestation is wave interference like that seen in the double-slit experiment and the beat phenomenon in acoustics. Etymology The term originates from moire (moiré in its French adjectival form), a type of textile, traditionally made of silk but now also made of cotton or synthetic fiber, with a rippled or "watered" appearance. Moire, or "watered textile", is made by pressing two layers of the textile when wet. The similar but imperfect spacing of the threads creates a characteristic pattern which remains after the fabric dries. In French, the noun moire is in use from the 17th century, for "watered silk". It was a loan of the English mohair (attested 1610). In French usage, the noun gave rise to the verb moirer, "to produce a watered textile by weaving or pressing", by the 18th century. The adjective moiré formed from this verb is in use from at least 1823. Pattern formation Moiré patterns are often an artifact of images produced by various digital imaging and computer graphics techniques, for example when scanning a halftone picture or ray tracing a checkered plane (the latter being a special case of aliasing, due to undersampling a fine regular pattern). This can be overcome in texture mapping through the use of mipmapping and anisotropic filtering. The drawing on the upper right shows a moiré pattern. The lines could represent fibers in moiré silk, or lines drawn on paper or on a computer screen. The nonlinear interaction of the optical patterns of lines creates a real and visible pattern of roughly parallel dark and light bands, the moiré pattern, superimposed on the lines. The moiré effect also occurs between overlapping transparent objects. For example, an invisible phase mask is made of a transparent polymer with a wavy thickness profile. As light shines through two overlaid masks of similar phase patterns, a broad moiré pattern occurs on a screen some distance away. This phase moiré effect and the classical moiré effect from opaque lines are two ends of a continuous spectrum in optics, which is called the universal moiré effect. The phase moiré effect is the basis for a type of broadband interferometer in x-ray and particle wave applications. It also provides a way to reveal hidden patterns in invisible layers. Line moiré Line moiré is one type of moiré pattern; a pattern that appears when superposing two transparent layers containing correlated opaque patterns. Line moiré is the case when the superposed patterns comprise straight or curved lines. When moving the layer patterns, the moiré patterns transform or move at a faster speed. This effect is called optical moiré speedup. More complex line moiré patterns are created if the lines are curved or not exactly parallel.
Shape moiré Shape moiré is one type of moiré pattern demonstrating the phenomenon of moiré magnification. 1D shape moiré is the particular simplified case of 2D shape moiré. One-dimensional patterns may appear when superimposing an opaque layer containing tiny horizontal transparent lines on top of a layer containing a complex shape which is periodically repeating along the vertical axis. Moiré patterns revealing complex shapes, or sequences of symbols embedded in one of the layers (in the form of periodically repeated compressed shapes), are created with shape moiré, otherwise called band moiré patterns. One of the most important properties of shape moiré is its ability to magnify tiny shapes along either one or both axes, that is, stretching. A common 2D example of moiré magnification occurs when viewing a chain-link fence through a second chain-link fence of identical design. The fine structure of the design is visible even at great distances. Calculations Moiré of parallel patterns Geometrical approach Consider two patterns made of parallel and equidistant lines, e.g., vertical lines. The step of the first pattern is p, the step of the second is p + δp, with 0 < δp ≪ p. If the lines of the patterns are superimposed at the left of the figure, the shift between the lines increases when going to the right. After a given number of lines, the patterns are opposed: the lines of the second pattern are between the lines of the first pattern. If we look from a far distance, we have the feeling of pale zones when the lines are superimposed (there is white between the lines), and of dark zones when the lines are "opposed". The middle of the first dark zone appears where the shift is equal to p/2. The nth line of the second pattern is shifted by n·δp compared to the nth line of the first network. The middle of the first dark zone thus corresponds to n·δp = p/2, that is, n = p/(2δp). The distance d between the middle of a pale zone and a dark zone is d = n·p = p²/(2δp); the distance between the middle of two dark zones, which is also the distance between two pale zones, is 2d = p²/δp. From this formula, we can see that: the bigger the step, the bigger the distance between the pale and dark zones; the bigger the discrepancy δp, the closer the dark and pale zones; a great spacing between dark and pale zones means that the patterns have very close steps. The principle of the moiré is similar to the Vernier scale.
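A short Python sketch of the geometric result above (the numbers are illustrative, not from the original text): the fringe spacing 2d = p²/δp grows as the two steps approach each other, which is the magnifying effect the text describes.

```python
def moire_spacing(p, delta_p):
    """Distance between successive pale (or dark) zones for two line
    patterns of steps p and p + delta_p: 2d = p**2 / delta_p."""
    return p**2 / delta_p

# Two gratings with steps of 1.00 mm and 1.05 mm:
print(moire_spacing(1.0, 0.05))   # 20.0 mm between pale zones

# Halving the discrepancy doubles the fringe spacing (the magnification effect):
print(moire_spacing(1.0, 0.025))  # 40.0 mm
```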
Mathematical function approach The essence of the moiré effect is the (mainly visual) perception of a distinctly different third pattern which is caused by inexact superimposition of two similar patterns. The mathematical representation of these patterns is not trivially obtained and can seem somewhat arbitrary. In this section we shall give a mathematical example of two parallel patterns whose superimposition forms a moiré pattern, and show one way (of many possible ways) these patterns and the moiré effect can be rendered mathematically. The visibility of these patterns is dependent on the medium or substrate in which they appear, and these may be opaque (as for example on paper) or transparent (as for example in plastic film). For purposes of discussion we shall assume the two primary patterns are each printed in greyscale ink on a white sheet, where the opacity (e.g., shade of grey) of the "printed" part is given by a value between 0 (white) and 1 (black) inclusive, with 1/2 representing neutral grey. Any value less than 0 or greater than 1 using this grey scale is essentially "unprintable". We shall also choose to represent the opacity of the pattern resulting from printing one pattern atop the other at a given point on the paper as the average (i.e. the arithmetic mean) of each pattern's opacity at that position, which is half their sum, and, as calculated, does not exceed 1. (This choice is not unique. Any other method to combine the functions that satisfies keeping the resultant function value within the bounds [0,1] will also serve; arithmetic averaging has the virtue of simplicity—with hopefully minimal damage to one's concepts of the printmaking process.) We now consider the "printing" superimposition of two almost similar, sinusoidally varying, grey-scale patterns to show how they produce a moiré effect in first printing one pattern on the paper, and then printing the other pattern over the first, keeping their coordinate axes in register. We represent the grey intensity in each pattern by a positive opacity function of distance along a fixed direction (say, the x-coordinate) in the paper plane, in the form f(x) = (1 + sin(kx))/2, where the presence of 1 keeps the function positive definite, and the division by 2 prevents function values greater than 1. The quantity k represents the periodic variation (i.e., spatial frequency) of the pattern's grey intensity, measured as the number of intensity cycles per unit distance. Since the sine function is cyclic over argument changes of 2π, the distance increment per intensity cycle (the wavelength) obtains when kλ = 2π, or λ = 2π/k. Consider now two such patterns, where one has a slightly different periodic variation from the other: f₁(x) = (1 + sin(k₁x))/2 and f₂(x) = (1 + sin(k₂x))/2, such that k₁ ≈ k₂. The average of these two functions, representing the superimposed printed image, evaluates as follows (see reverse identities here: Prosthaphaeresis): f₃(x) = (f₁(x) + f₂(x))/2 = (1 + sin(k_A x)·cos(k_B x))/2, where it is easily shown that k_A = (k₁ + k₂)/2 and k_B = (k₁ − k₂)/2. This function average, f₃, clearly lies in the range [0,1]. Since the periodic variation k_A is the average of, and therefore close to, k₁ and k₂, the moiré effect is distinctively demonstrated by the sinusoidal envelope "beat" function cos(k_B x), whose periodic variation k_B is half the difference of the periodic variations k₁ and k₂ (and evidently much lower in frequency). Other one-dimensional moiré effects include the classic beat frequency tone which is heard when two pure notes of almost identical pitch are sounded simultaneously. This is an acoustic version of the moiré effect in the one dimension of time: the original two notes are still present—but the listener's perception is of two pitches that are the average of, and half the difference of, the frequencies of the two notes. Aliasing in sampling of time-varying signals also belongs to this moiré paradigm.
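The prosthaphaeresis identity used above is easy to check numerically. The Python sketch below (an illustration with arbitrarily chosen frequencies, not values from the original text) verifies that the printed average equals the envelope form and reports the beat wavelength 2π/|k_B|.

```python
import math

k1, k2 = 2.0, 2.2                      # two close spatial frequencies
kA, kB = (k1 + k2) / 2, (k1 - k2) / 2  # average and half-difference

for x in (0.0, 0.7, 3.1, 10.0):
    f1 = (1 + math.sin(k1 * x)) / 2
    f2 = (1 + math.sin(k2 * x)) / 2
    f3 = (f1 + f2) / 2                 # the superimposed printed image
    envelope_form = (1 + math.sin(kA * x) * math.cos(kB * x)) / 2
    assert abs(f3 - envelope_form) < 1e-12  # the two expressions agree

print("identity verified; beat wavelength:", 2 * math.pi / abs(kB))
```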
Rotated patterns Consider two patterns with the same step p, but with the second pattern rotated by an angle α. Seen from afar, we can also see darker and paler lines: the pale lines correspond to the lines of nodes, that is, lines passing through the intersections of the two patterns. If we consider a cell of the lattice formed, we can see that it is a rhombus with the four sides equal to d = p/sin α (we have a right triangle whose hypotenuse is d and whose side opposite to the angle α is p). The pale lines correspond to the small diagonal of the rhombus. As the diagonals are the bisectors of the neighbouring sides, we can see that the pale line makes an angle equal to α/2 with the perpendicular of each pattern's lines. Additionally, the spacing between two pale lines is D, half of the long diagonal. The long diagonal 2D is the hypotenuse of a right triangle whose legs are d(1 + cos α) and p. The Pythagorean theorem gives: (2D)² = d²(1 + cos α)² + p², that is: (2D)² = (p²/sin²α)(1 + cos α)² + p² = p² · 2(1 + cos α)/sin²α = p²/sin²(α/2), thus D = p/(2 sin(α/2)). When α is very small (α ≪ 1), the following small-angle approximations can be made: sin α ≈ α and cos α ≈ 1, thus D ≈ p/α. We can see that the smaller α is, the farther apart the pale lines; when both patterns are parallel (α = 0), the spacing between the pale lines is infinite (there is no pale line). There are thus two ways to determine α: by the orientation of the pale lines and by their spacing, α ≈ p/D. If we choose to measure the angle, the final error is proportional to the measurement error. If we choose to measure the spacing, the final error is proportional to the inverse of the spacing. Thus, for small angles, it is best to measure the spacing. Implications and applications Printing full-color images In graphic arts and prepress, the usual technology for printing full-color images involves the superimposition of halftone screens. These are regular rectangular dot patterns—often four of them, printed in cyan, yellow, magenta, and black. Some kind of moiré pattern is inevitable, but in favorable circumstances the pattern is "tight"; that is, the spatial frequency of the moiré is so high that it is not noticeable. In the graphic arts, the term moiré means an excessively visible moiré pattern. Part of the prepress art consists of selecting screen angles and halftone frequencies which minimize moiré. The visibility of moiré is not entirely predictable. The same set of screens may produce good results with some images, but visible moiré with others. Television screens and photographs Moiré patterns are commonly seen on television screens when a person is wearing a shirt or jacket of a particular weave or pattern, such as a houndstooth jacket. This is due to interlaced scanning in televisions and non-film cameras, referred to as interline twitter. As the person moves about, the moiré pattern is quite noticeable. Because of this, newscasters and other professionals who regularly appear on TV are instructed to avoid clothing which could cause the effect. Photographs of a TV screen taken with a digital camera often exhibit moiré patterns. Since both the TV screen and the digital camera use a scanning technique to produce or to capture pictures with horizontal scan lines, the conflicting sets of lines cause the moiré patterns. To avoid the effect, the digital camera can be aimed at an angle of 30 degrees to the TV screen. Marine navigation The moiré effect is used in shoreside beacons called "Inogon leading marks" or "Inogon lights", manufactured by Inogon Licens AB, Sweden, to designate the safest path of travel for ships heading to locks, marinas, ports, etc., or to indicate underwater hazards (such as pipelines or cables). The moiré effect creates arrows that point towards an imaginary line marking the hazard or line of safe passage; as navigators pass over the line, the arrows on the beacon appear to become vertical bands before changing back to arrows pointing in the reverse direction. An example can be found in the UK on the eastern shore of Southampton Water, opposite the Fawley oil refinery. Similar moiré effect beacons can be used to guide mariners to the centre point of an oncoming bridge; when the vessel is aligned with the centreline, vertical lines are visible. Inogon lights are deployed at airports to help pilots on the ground keep to the centreline while docking on stand.
Strain measurement In manufacturing industries, these patterns are used for studying microscopic strain in materials: by deforming a grid with respect to a reference grid and measuring the moiré pattern, the stress levels and patterns can be deduced. This technique is attractive because the scale of the moiré pattern is much larger than the deflection that causes it, making measurement easier. The moiré effect can be used in strain measurement: the operator just has to draw a pattern on the object, and superimpose the reference pattern on the deformed pattern on the deformed object. A similar effect can be obtained by the superposition of a holographic image of the object on the object itself: the hologram is the reference, and the differences from the object are the deformations, which appear as pale and dark lines. Image processing Some image scanner computer programs provide an optional filter, called a "descreen" filter, to remove moiré pattern artifacts which would otherwise be produced when scanning printed halftone images to produce digital images. Banknotes Many banknotes exploit the tendency of digital scanners to produce moiré patterns by including fine circular or wavy designs that are likely to exhibit a moiré pattern when scanned and printed. Microscopy In super-resolution microscopy, the moiré pattern can be used to obtain images with a resolution higher than the diffraction limit, using a technique known as structured illumination microscopy. In scanning tunneling microscopy, moiré fringes appear if surface atomic layers have a different crystal structure than the bulk crystal. This can for example be due to surface reconstruction of the crystal, or when a thin layer of a second crystal is on the surface, e.g. single-layer or double-layer graphene, a van der Waals heterostructure of graphene and hBN, or bismuth and antimony nanostructures. In transmission electron microscopy (TEM), translational moiré fringes can be seen as parallel contrast lines formed in phase-contrast TEM imaging by the interference of diffracting crystal lattice planes that are overlapping, and which might have different spacing and/or orientation. Most of the moiré contrast observations reported in the literature are obtained using high-resolution phase contrast imaging in TEM. However, if probe aberration-corrected high-angle annular dark field scanning transmission electron microscopy (HAADF-STEM) imaging is used, a more direct interpretation of the crystal structure in terms of atom types and positions is obtained. Materials science and condensed matter physics In condensed matter physics, the moiré phenomenon is commonly discussed for two-dimensional materials. The effect occurs when there is a mismatch between the lattice parameter or angle of the 2D layer and that of the underlying substrate, or another 2D layer, such as in 2D material heterostructures. The phenomenon is exploited as a means of engineering the electronic structure or optical properties of materials, which some call moiré materials. The often significant changes in electronic properties when twisting two atomic layers, and the prospect of electronic applications, have led to the name twistronics for this field. A prominent example is twisted bi-layer graphene, which forms a moiré pattern and at a particular magic angle exhibits superconductivity and other important electronic properties.
In materials science, known examples exhibiting moiré contrast are thin films or nanoparticles of MX-type carbides and nitrides (M = Ti, Nb; X = C, N) overlapping with an austenitic matrix. Both phases, MX and the matrix, have a face-centered cubic crystal structure and a cube-on-cube orientation relationship. However, they have a significant lattice misfit of about 20 to 24% (depending on the chemical composition of the alloy), which produces a moiré effect.
Physical sciences
Optics
Physics
64493
https://en.wikipedia.org/wiki/Percentage
Percentage
In mathematics, a percentage is a number or ratio expressed as a fraction of 100. It is often denoted using the percent sign (%), although the abbreviations pct., pct, and sometimes pc are also used. A percentage is a dimensionless number (pure number), primarily used for expressing proportions, but percent is nonetheless a unit of measurement in its orthography and usage. Examples For example, 45% (read as "forty-five percent") is equal to the fraction 45/100, the ratio 45:55 (or 45:100 when comparing to the total rather than the other portion), or 0.45. Percentages are often used to express a proportionate part of a total. (Similarly, one can also express a number as a fraction of 1,000, using the term "per mille" or the symbol "‰".) Example 1 If 50% of the total number of students in the class are male, that means that 50 out of every 100 students are male. If there are 500 students, then 250 of them are male. Example 2 An increase of $0.15 on a price of $2.50 is an increase by a fraction of 0.15/2.50 = 0.06. Expressed as a percentage, this is a 6% increase. While many percentage values are between 0 and 100, there is no mathematical restriction and percentages may take on other values. For example, it is common to refer to 111% or −35%, especially for percent changes and comparisons. History In Ancient Rome, long before the existence of the decimal system, computations were often made in fractions in the multiples of 1/100. For example, Augustus levied a tax of 1/100 on goods sold at auction, known as centesima rerum venalium. Computation with these fractions was equivalent to computing percentages. As denominations of money grew in the Middle Ages, computations with a denominator of 100 became increasingly standard, such that from the late 15th century to the early 16th century, it became common for arithmetic texts to include such computations. Many of these texts applied these methods to profit and loss, interest rates, and the Rule of Three. By the 17th century, it was standard to quote interest rates in hundredths. Percent sign The term "percent" is derived from the Latin per centum, meaning "hundred" or "by the hundred". The sign for "percent" evolved by gradual contraction of the Italian term per cento, meaning "for a hundred". The "per" was often abbreviated as "p." and eventually disappeared entirely. The "cento" was contracted to two circles separated by a horizontal line, from which the modern "%" symbol is derived. Calculations The percent value is computed by multiplying the numeric value of the ratio by 100. For example, to find 50 apples as a percentage of 1,250 apples, one first computes the ratio 50/1,250 = 0.04, and then multiplies by 100 to obtain 4%. The percent value can also be found by multiplying first instead of later, so in this example, the 50 would be multiplied by 100 to give 5,000, and this result would be divided by 1,250 to give 4%. To calculate a percentage of a percentage, convert both percentages to fractions of 100, or to decimals, and multiply them. For example, 50% of 40% is: (50/100) × (40/100) = 0.20 = 20%. It is not correct to divide by 100 and use the percent sign at the same time; it would literally imply division by 10,000. For example, 25% = 25/100 = 0.25, not 25%/100, which actually is (25/100)/100 = 0.0025. A term such as (100/100)% would also be incorrect, since it would be read as 1 percent, even if the intent was to say 100%. Whenever communicating about a percentage, it is important to specify what it is relative to (i.e., what is the total that corresponds to 100%). The following problem illustrates this point.
Suppose that in a certain college 60% of all students are female, and 10% of all students are computer science majors. If 5% of female students are computer science majors, what percentage of computer science majors are female? We are asked to compute the ratio of female computer science majors to all computer science majors. We know that 60% of all students are female, and among these 5% are computer science majors, so we conclude that (60/100) × (5/100) = 3/100, or 3%, of all students are female computer science majors. Dividing this by the 10% of all students that are computer science majors, we arrive at the answer: 3%/10% = 30/100, or 30%, of all computer science majors are female. This example is closely related to the concept of conditional probability. Because of the commutative property of multiplication, reversing expressions does not change the result; for example, 50% of 20 is 10, and 20% of 50 is 10. Variants of the percentage calculation The calculation of percentages is carried out and taught in different ways depending on the prerequisites and requirements. The usual formulas can be obtained with proportions, which saves one from having to memorize them. In so-called mental arithmetic, the intermediary question is usually asked: what does 100% (or 1%) correspond to? Example: 42 kg is 7%; how much is (corresponds to) 100%? Given are W (the percentage value) and p% (the percentage rate); we are looking for G (the basic value). The proportion W/G = p/100 gives G = W · 100/p = 42 kg · 100/7 = 600 kg. Percentage increase and decrease Due to inconsistent usage, it is not always clear from the context what a percentage is relative to. When speaking of a "10% rise" or a "10% fall" in a quantity, the usual interpretation is that this is relative to the initial value of that quantity. For example, if an item is initially priced at $200 and the price rises 10% (an increase of $20), the new price will be $220. Note that this final price is 110% of the initial price (100% + 10% = 110%). Some other examples of percent changes: An increase of 100% in a quantity means that the final amount is 200% of the initial amount (100% of initial + 100% of increase = 200% of initial). In other words, the quantity has doubled. An increase of 800% means the final amount is 9 times the original (100% + 800% = 900% = 9 times as large). A decrease of 60% means the final amount is 40% of the original (100% − 60% = 40%). A decrease of 100% means the final amount is zero (100% − 100% = 0%). In general, a change of x percent in a quantity results in a final amount that is 100 + x percent of the original amount (equivalently, (1 + 0.01x) times the original amount).
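A minimal Python sketch of the calculations above (the function names are illustrative, not part of the original text):

```python
def percent_of(part, whole):
    """Express part as a percentage of whole."""
    return 100 * part / whole

def apply_change(amount, x):
    """A change of x percent gives (1 + 0.01*x) times the original amount."""
    return amount * (1 + 0.01 * x)

print(percent_of(50, 1250))    # 4.0  -> 50 apples is 4% of 1,250 apples
print(percent_of(3, 10))       # 30.0 -> the college problem: 3% / 10% = 30%
print(apply_change(200, 10))   # 220.0 -> a 10% rise on $200
print(apply_change(200, -60))  # 80.0  -> a 60% decrease leaves 40% of $200
```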
Compounding percentages Percent changes applied sequentially do not add up in the usual way. For example, if the 10% increase in price considered earlier (on the $200 item, raising its price to $220) is followed by a 10% decrease in the price (a decrease of $22), then the final price will be $198, not the original price of $200. The reason for this apparent discrepancy is that the two percent changes (+10% and −10%) are measured relative to different initial values ($200 and $220, respectively), and thus do not "cancel out". In general, if an increase of x percent is followed by a decrease of x percent, and the initial amount was p, the final amount is p(1 + 0.01x)(1 − 0.01x) = p(1 − (0.01x)²); hence the net change is an overall decrease by x percent of x percent (the square of the original percent change when expressed as a decimal number). Thus, in the above example, after an increase and decrease of x = 10 percent, the final amount, $198, was 10% of 10%, or 1%, less than the initial amount of $200. The net change is the same for a decrease of x percent followed by an increase of x percent; the final amount is p(1 − 0.01x)(1 + 0.01x) = p(1 − (0.01x)²). This can be expanded for a case where one does not have the same percent change. If the initial amount p leads to a percent change x, and the second percent change is y, then the final amount is p(1 + 0.01x)(1 + 0.01y). To change the above example, after an increase of x = 10 percent and a decrease of y = 5 percent, the final amount, $209, is 4.5% more than the initial amount of $200. As shown above, percent changes can be applied in any order and have the same effect. In the case of interest rates, a very common but ambiguous way to say that an interest rate rose from 10% per annum to 15% per annum, for example, is to say that the interest rate increased by 5%, which could theoretically mean that it increased from 10% per annum to 10.5% per annum. It is clearer to say that the interest rate increased by 5 percentage points (pp). The same confusion between the different concepts of percent(age) and percentage points can potentially cause a major misunderstanding when journalists report about election results, for example, expressing both new results and differences with earlier results as percentages. For example, if a party obtains 41% of the vote and this is said to be a 2.5% increase, does that mean the earlier result was 40% (since 41 = 40 × (1 + 2.5/100)) or 38.5% (since 41 = 38.5 + 2.5)? In financial markets, it is common to refer to an increase of one percentage point (e.g. from 3% per annum to 4% per annum) as an increase of "100 basis points".
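The compounding behaviour is easy to verify numerically; the helper below is an illustrative sketch, not a standard library function.

```python
def apply_percent_changes(amount, *changes):
    """Apply successive percent changes: +10 means a 10% rise, -10 a 10% fall."""
    for x in changes:
        amount *= 1 + 0.01 * x
    return amount

print(apply_percent_changes(200, 10, -10))  # 198.0 -> +10% then -10% is a 1% net loss
print(apply_percent_changes(200, -10, 10))  # 198.0 -> the order does not matter
print(apply_percent_changes(200, 10, -5))   # 209.0 -> +10% then -5% is +4.5% overall
```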
Word and symbol In British English, percent is usually written as two words (per cent), although percentage and percentile are written as one word. In American English, percent is the most common variant (but per mille is written as two words). In the early 20th century, there was a dotted abbreviation form "per cent.", as opposed to "per cent". The form "per cent." is still in use in the highly formal language found in certain documents like commercial loan agreements (particularly those subject to, or inspired by, common law), as well as in the Hansard transcripts of British Parliamentary proceedings. The term has been attributed to Latin per centum. The symbol for percent (%) evolved from a symbol abbreviating the Italian per cento. In some other languages, the form procent or prosent is used instead. Some languages use both a word derived from percent and an expression in that language meaning the same thing, e.g. Romanian procent and la sută (thus, 10% can be read or sometimes written ten for [each] hundred, similarly with the English one out of ten). Other abbreviations are rarer, but sometimes seen. Grammar and style guides often differ as to how percentages are to be written. For instance, it is commonly suggested that the word percent (or per cent) be spelled out in all texts, as in "1 percent" and not "1%". Other guides prefer the word to be written out in humanistic texts, but the symbol to be used in scientific texts. Most guides agree that they always be written with a numeral, as in "5 percent" and not "five percent", the only exception being at the beginning of a sentence: "Ten percent of all writers love style guides." Decimals are also to be used instead of fractions, as in "3.5 percent of the gain" and not "3½ percent of the gain". However, the titles of bonds issued by governments and other issuers use the fractional form, e.g. a title such as "3½% Unsecured Loan Stock 2032 Series 2". (When interest rates are very low, the number 0 is included if the interest rate is less than 1%, e.g. a "0½%" issue rather than a "½%" issue.) It is also widely accepted to use the percent symbol (%) in tabular and graphic material. In line with common English practice, style guides—such as The Chicago Manual of Style—generally state that the number and percent sign are written without any space in between. However, the International System of Units and the ISO 31-0 standard require a space. Other uses The word "percentage" is often a misnomer in the context of sports statistics, when the referenced number is expressed as a decimal proportion, not a percentage: "The Phoenix Suns' Shaquille O'Neal led the NBA with a .609 field goal percentage (FG%) during the 2008–09 season." (O'Neal made 60.9% of his shots, not 0.609%.) Likewise, the winning percentage of a team, the fraction of matches that the club has won, is also usually expressed as a decimal proportion; a team that has a .500 winning percentage has won 50% of their matches. The practice is probably related to the similar way that batting averages are quoted. As "percent" it is used to describe the grade or slope, the steepness of a road or railway, the formula for which is 100 × rise/run, which could also be expressed as the tangent of the angle of inclination times 100. This is the ratio of distances a vehicle would advance vertically and horizontally, respectively, when going up- or downhill, expressed in percent. Percentage is also used to express composition of a mixture by mass percent and mole percent. Related units: percentage point, a difference of 1 part in 100; per mille (‰), 1 part in 1,000; basis point (bp), a difference of 1 part in 10,000; permyriad (‱), 1 part in 10,000; per cent mille (pcm), 1 part in 100,000; centiturn. Practical applications: baker percentage; volume percent.
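As a small numeric check of the grade formula in the "Other uses" paragraph above (the values chosen are illustrative):

```python
import math

def grade_percent(rise, run):
    """Grade (slope) of an incline in percent: 100 * rise / run."""
    return 100 * rise / run

def grade_from_angle(theta_degrees):
    """Equivalent formulation: 100 * tan(theta)."""
    return 100 * math.tan(math.radians(theta_degrees))

print(grade_percent(8, 100))              # 8.0 -> an 8% grade: 8 m rise per 100 m run
print(round(grade_from_angle(4.574), 1))  # 8.0 -> about a 4.6 degree incline
```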
Mathematics
Basics
null
64592
https://en.wikipedia.org/wiki/Hallway
Hallway
A hallway (also passage, passageway, corridor or hall) is an interior space in a building that is used to connect other rooms. Hallways are generally long and narrow. Hallways must be sufficiently wide to ensure that buildings can be evacuated during a fire and to allow people in wheelchairs to navigate them. The minimum width of a hallway is governed by building codes, which in the United States also set minimum widths for hallways in residences. Hallways are wider in higher-traffic settings, such as schools and hospitals. In 1597, John Thorpe was the first recorded architect to replace multiple connected rooms with rooms along a corridor, each accessed by a separate door.
Technology
Architectural elements
null
64656
https://en.wikipedia.org/wiki/Hue
Hue
In color theory, hue is one of the main properties (called color appearance parameters) of a color, defined technically in the CIECAM02 model as "the degree to which a stimulus can be described as similar to or different from stimuli that are described as red, orange, yellow, green, blue, violet," within certain theories of color vision. Hue can typically be represented quantitatively by a single number, often corresponding to an angular position around a central or neutral point or axis on a color space coordinate diagram (such as a chromaticity diagram) or color wheel, or by its dominant wavelength or by that of its complementary color. The other color appearance parameters are colorfulness, saturation (also known as intensity or chroma), lightness, and brightness. Usually, colors with the same hue are distinguished with adjectives referring to their lightness or colorfulness, for example: "light blue", "pastel blue", "vivid blue", and "cobalt blue". Exceptions include brown, which is a dark orange. In painting, a hue is a pure pigment—one without tint or shade (added white or black pigment, respectively). The human brain first processes hues in areas in the extended V4 called globs. Deriving a hue The concept of a color system with a hue was explored as early as 1830 with Philipp Otto Runge's color sphere. The Munsell color system from the 1930s was a great step forward, as it was realized that perceptual uniformity means the color space can no longer be a sphere. As a convention, the hue for red is set to 0° for most color spaces with a hue. Opponent color spaces In opponent color spaces in which two of the axes are perceptually orthogonal to lightness, such as the CIE 1976 (L*, a*, b*) (CIELAB) and 1976 (L*, u*, v*) (CIELUV) color spaces, hue may be computed together with chroma by converting these coordinates from rectangular form to polar form. Hue is the angular component of the polar representation, while chroma is the radial component. Specifically, in CIELAB the hue angle is h_ab = atan2(b*, a*), while, analogously, in CIELUV it is h_uv = atan2(v*, u*), where atan2 is a two-argument inverse tangent. Defining hue in terms of RGB Preucil describes a color hexagon, similar to a trilinear plot described by Evans, Hanson, and Brewer, which may be used to compute hue from RGB. To place red at 0°, green at 120°, and blue at 240°, hue = atan2(√3 · (G − B), 2R − G − B). Equivalently, one may solve tan(hue) = √3 · (G − B)/(2R − G − B). Preucil used a polar plot, which he termed a color circle. Using R, G, and B, one may compute hue angle using the following scheme: determine which of the six possible orderings of R, G, and B prevails, then apply the formula given in the table below. In each case the formula contains the fraction F = (M − L)/(H − L), where H is the highest of R, G, and B; L is the lowest; and M is the mid one between the other two. This fraction is referred to as the "Preucil hue error" and was used in the computation of mask strength in photomechanical color reproduction. R ≥ G ≥ B: hue = 60° · F; G ≥ R ≥ B: hue = 60° · (2 − F); G ≥ B ≥ R: hue = 60° · (2 + F); B ≥ G ≥ R: hue = 60° · (4 − F); B ≥ R ≥ G: hue = 60° · (4 + F); R ≥ B ≥ G: hue = 60° · (6 − F). (At the boundaries between orderings, the adjacent formulas agree.) Hue angles computed for the Preucil circle agree with the hue angle computed for the Preucil hexagon at integer multiples of 30° (red, yellow, green, cyan, blue, magenta, and the colors midway between contiguous pairs) and differ by approximately 1.2° at odd integer multiples of 15° (based on the circle formula), the maximal divergence between the two. The process of converting an RGB color into an HSL or HSV color space is usually based on a 6-piece piecewise mapping, treating the HSV cone as a hexacone, or the HSL double cone as a double hexacone. The formulae used are those in the table above.
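The atan2-based hexagon formula above can be checked with a few lines of Python (an illustrative sketch; the function name is an assumption, not a standard API):

```python
import math

def hue_degrees(r, g, b):
    """Hue angle from RGB via atan2, placing red at 0, green at 120, blue at 240."""
    h = math.degrees(math.atan2(math.sqrt(3) * (g - b), 2 * r - g - b))
    return h % 360

print(hue_degrees(1, 0, 0))  # 0.0   (red)
print(hue_degrees(1, 1, 0))  # 60.0  (yellow)
print(hue_degrees(0, 1, 0))  # 120.0 (green)
print(hue_degrees(0, 0, 1))  # 240.0 (blue)
```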
One might notice that the colors around the HSL/HSV hue "circle" do not all appear to be of the same brightness. This is a known issue of this RGB-based derivation of hue. Usage in art Manufacturers of pigments use the word hue, for example, "cadmium yellow (hue)", to indicate that the original pigmentation ingredient, often toxic, has been replaced by safer (or cheaper) alternatives whilst retaining the hue of the original. Replacements are often used for chromium, cadmium and alizarin. Hue vs. dominant wavelength Dominant wavelength (or sometimes equivalent wavelength) is a physical analog to the perceptual attribute hue. On a chromaticity diagram, a line is drawn from a white point through the coordinates of the color in question, until it intersects the spectral locus. The wavelength at which the line intersects the spectrum locus is identified as the color's dominant wavelength if the point is on the same side of the white point as the spectral locus, and as the color's complementary wavelength if the point is on the opposite side. Hue difference notation There are two main ways in which hue difference is quantified. The first is the simple difference between the two hue angles; the symbol for this expression of hue difference is Δh_ab in CIELAB and Δh_uv in CIELUV. The other is computed as the residual total color difference after lightness and chroma differences have been accounted for; its symbol is ΔH*_ab in CIELAB and ΔH*_uv in CIELUV. Names and other notations There exists some correspondence, more or less precise, between hue values and color terms (names). One approach in color science is to use traditional color terms but try to give them more precise definitions. See spectral color terms for names of highly saturated colors with the hue from ≈ 0° (red) up to ≈ 275° (violet), and the table of highly-saturated purple colors on the line of purples for color terms of the remaining part of the color wheel. An alternative approach is to use a systematic notation. It can be a standard angle notation for a certain color model such as HSL/HSV mentioned above, CIELUV, or CIECAM02. Alphanumeric notations such as those of the Munsell color system, NCS, and the Pantone Matching System are also used.
Physical sciences
Basics
Physics
64669
https://en.wikipedia.org/wiki/De%20Morgan%27s%20laws
De Morgan's laws
In propositional logic and Boolean algebra, De Morgan's laws, also known as De Morgan's theorem, are a pair of transformation rules that are both valid rules of inference. They are named after Augustus De Morgan, a 19th-century British mathematician. The rules allow the expression of conjunctions and disjunctions purely in terms of each other via negation. The rules can be expressed in English as: The negation of "A and B" is the same as "not A or not B". The negation of "A or B" is the same as "not A and not B". or The complement of the union of two sets is the same as the intersection of their complements. The complement of the intersection of two sets is the same as the union of their complements. or not (A or B) = (not A) and (not B), not (A and B) = (not A) or (not B), where "A or B" is an "inclusive or" meaning at least one of A or B rather than an "exclusive or" that means exactly one of A or B. Another, equivalent form of De Morgan's laws appears in the substitution form given below. Applications of the rules include simplification of logical expressions in computer programs and digital circuit designs. De Morgan's laws are an example of a more general concept of mathematical duality. Formal notation The negation of conjunction rule may be written in sequent notation: ¬(P ∧ Q) ⊢ (¬P ∨ ¬Q) and (¬P ∨ ¬Q) ⊢ ¬(P ∧ Q). The negation of disjunction rule may be written as: ¬(P ∨ Q) ⊢ (¬P ∧ ¬Q) and (¬P ∧ ¬Q) ⊢ ¬(P ∨ Q). In rule form: negation of conjunction, from ¬(P ∧ Q) infer ¬P ∨ ¬Q; and negation of disjunction, from ¬(P ∨ Q) infer ¬P ∧ ¬Q; and expressed as truth-functional tautologies or theorems of propositional logic: ¬(P ∧ Q) ↔ (¬P ∨ ¬Q) and ¬(P ∨ Q) ↔ (¬P ∧ ¬Q), where P and Q are propositions expressed in some formal system. The generalized De Morgan's laws provide an equivalence for negating a conjunction or disjunction involving multiple terms. For a set of propositions P₁, P₂, ..., Pₙ, the generalized De Morgan's laws are as follows: ¬(P₁ ∧ P₂ ∧ ... ∧ Pₙ) ↔ (¬P₁ ∨ ¬P₂ ∨ ... ∨ ¬Pₙ) and ¬(P₁ ∨ P₂ ∨ ... ∨ Pₙ) ↔ (¬P₁ ∧ ¬P₂ ∧ ... ∧ ¬Pₙ). These laws generalize De Morgan's original laws for negating conjunctions and disjunctions. Substitution form De Morgan's laws are normally shown in the compact form above, with the negation of the output on the left and negation of the inputs on the right. A clearer form for substitution can be stated as: (P ∧ Q) ↔ ¬(¬P ∨ ¬Q) and (P ∨ Q) ↔ ¬(¬P ∧ ¬Q). This emphasizes the need to invert both the inputs and the output, as well as change the operator when doing a substitution. Set theory In set theory, it is often stated as "union and intersection interchange under complementation", which can be formally expressed as: (A ∪ B)∁ = A∁ ∩ B∁ and (A ∩ B)∁ = A∁ ∪ B∁, where: A∁ is the complement of A, ∩ is the intersection operator (AND), and ∪ is the union operator (OR). Unions and intersections of any number of sets The generalized form is (⋂ over i in I of A_i)∁ = ⋃ over i in I of (A_i)∁ and (⋃ over i in I of A_i)∁ = ⋂ over i in I of (A_i)∁, where I is some, possibly countably or uncountably infinite, indexing set. In set notation, De Morgan's laws can be remembered using the mnemonic "break the line, change the sign". Boolean algebra In Boolean algebra, similarly, this law can be formally expressed as: ¬(A ∧ B) = ¬A ∨ ¬B and ¬(A ∨ B) = ¬A ∧ ¬B, where: ¬A is the negation of A, ∧ is the logical conjunction operator (AND), and ∨ is the logical disjunction operator (OR), which can be generalized to ¬(A₁ ∧ ... ∧ Aₙ) = ¬A₁ ∨ ... ∨ ¬Aₙ and ¬(A₁ ∨ ... ∨ Aₙ) = ¬A₁ ∧ ... ∧ ¬Aₙ. Engineering In electrical and computer engineering, De Morgan's laws are commonly written as: (A · B)′ = A′ + B′ and (A + B)′ = A′ · B′, where: · is the logical AND, + is the logical OR, and the prime (′) denotes the logical NOT of what precedes it.
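Because the laws are truth-functional tautologies, they can be verified exhaustively. The following Python sketch (an illustration, not part of the original text; the small universe is an arbitrary choice) checks both the propositional and the set-theoretic forms:

```python
from itertools import product

# Propositional form: check both laws over every truth assignment.
for p, q in product([False, True], repeat=2):
    assert (not (p and q)) == ((not p) or (not q))  # negation of conjunction
    assert (not (p or q)) == ((not p) and (not q))  # negation of disjunction

# Set-theoretic form: complements taken relative to a small universe U.
U = set(range(10))
A, B = {1, 2, 3}, {3, 4, 5}
assert U - (A | B) == (U - A) & (U - B)  # complement of union = intersection of complements
assert U - (A & B) == (U - A) | (U - B)  # complement of intersection = union of complements
print("De Morgan's laws verified")
```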
Text searching De Morgan's laws commonly apply to text searching using Boolean operators AND, OR, and NOT. Consider a set of documents containing the words "cats" and "dogs". De Morgan's laws hold that these two searches will return the same set of documents: Search A: NOT (cats OR dogs); Search B: (NOT cats) AND (NOT dogs). The corpus of documents containing "cats" or "dogs" can be represented by four documents: Document 1: Contains only the word "cats". Document 2: Contains only "dogs". Document 3: Contains both "cats" and "dogs". Document 4: Contains neither "cats" nor "dogs". To evaluate Search A, clearly the search "(cats OR dogs)" will hit on Documents 1, 2, and 3. So the negation of that search (which is Search A) will hit everything else, which is Document 4. Evaluating Search B, the search "(NOT cats)" will hit on documents that do not contain "cats", which is Documents 2 and 4. Similarly the search "(NOT dogs)" will hit on Documents 1 and 4. Applying the AND operator to these two searches (which is Search B) will hit on the documents that are common to these two searches, which is Document 4. A similar evaluation can be applied to show that the following two searches will both return Documents 1, 2, and 4: Search C: NOT (cats AND dogs); Search D: (NOT cats) OR (NOT dogs).
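The four-document corpus above can be modelled directly with sets; the following Python sketch (variable names are illustrative) confirms that Searches A and B, and Searches C and D, return the same documents:

```python
docs = {
    1: {"cats"},           # Document 1: only "cats"
    2: {"dogs"},           # Document 2: only "dogs"
    3: {"cats", "dogs"},   # Document 3: both words
    4: set(),              # Document 4: neither word
}
all_ids = set(docs)
cats = {i for i, words in docs.items() if "cats" in words}
dogs = {i for i, words in docs.items() if "dogs" in words}

search_a = all_ids - (cats | dogs)              # NOT (cats OR dogs)
search_b = (all_ids - cats) & (all_ids - dogs)  # (NOT cats) AND (NOT dogs)
assert search_a == search_b == {4}

search_c = all_ids - (cats & dogs)              # NOT (cats AND dogs)
search_d = (all_ids - cats) | (all_ids - dogs)  # (NOT cats) OR (NOT dogs)
assert search_c == search_d == {1, 2, 4}
print("searches agree:", search_a, search_c)
```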
History The laws are named after Augustus De Morgan (1806–1871), who introduced a formal version of the laws to classical propositional logic. De Morgan's formulation was influenced by the algebraization of logic undertaken by George Boole, which later cemented De Morgan's claim to the find. Nevertheless, a similar observation was made by Aristotle, and was known to Greek and Medieval logicians. For example, in the 14th century, William of Ockham wrote down the words that would result by reading the laws out. Jean Buridan, in his Summulae de Dialectica, also describes rules of conversion that follow the lines of De Morgan's laws. Still, De Morgan is given credit for stating the laws in the terms of modern formal logic, and incorporating them into the language of logic. De Morgan's laws can be proved easily, and may even seem trivial. Nonetheless, these laws are helpful in making valid inferences in proofs and deductive arguments. Proof for Boolean algebra De Morgan's theorem may be applied to the negation of a disjunction or the negation of a conjunction in all or part of a formula. Negation of a disjunction In the case of its application to a disjunction, consider the following claim: "it is false that either of A or B is true", which is written as: ¬(A ∨ B). In that it has been established that neither A nor B is true, then it must follow that both A is not true and B is not true, which may be written directly as: ¬A ∧ ¬B. If either A or B were true, then the disjunction of A and B would be true, making its negation false. Presented in English, this follows the logic that "since two things are both false, it is also false that either of them is true". Working in the opposite direction, the second expression asserts that A is false and B is false (or equivalently that "not A" and "not B" are true). Knowing this, a disjunction of A and B must be false also. The negation of said disjunction must thus be true, and the result is identical to the first claim. Negation of a conjunction The application of De Morgan's theorem to a conjunction is very similar to its application to a disjunction both in form and rationale. Consider the following claim: "it is false that A and B are both true", which is written as: ¬(A ∧ B). In order for this claim to be true, either or both of A or B must be false, for if they both were true, then the conjunction of A and B would be true, making its negation false. Thus, one (at least) or more of A and B must be false (or equivalently, one or more of "not A" and "not B" must be true). This may be written directly as: ¬A ∨ ¬B. Presented in English, this follows the logic that "since it is false that two things are both true, at least one of them must be false". Working in the opposite direction again, the second expression asserts that at least one of "not A" and "not B" must be true, or equivalently that at least one of A and B must be false. Since at least one of them must be false, then their conjunction would likewise be false. Negating said conjunction thus results in a true expression, and this expression is identical to the first claim. Proof for set theory Here we use A∁ to denote the complement of A, as in the set theory section above. The proof that (A ∪ B)∁ = A∁ ∩ B∁ is completed in 2 steps by proving both (A ∪ B)∁ ⊆ A∁ ∩ B∁ and A∁ ∩ B∁ ⊆ (A ∪ B)∁. Part 1 Let x ∈ (A ∪ B)∁. Then x ∉ A ∪ B. Because A ∪ B contains every element of A and every element of B, it must be the case that x ∉ A and x ∉ B. If x ∉ A, then x ∈ A∁. Similarly, if x ∉ B, then x ∈ B∁. Thus, x ∈ A∁ ∩ B∁; that is, (A ∪ B)∁ ⊆ A∁ ∩ B∁. Part 2 To prove the reverse direction, let x ∈ A∁ ∩ B∁, and for contradiction assume x ∈ A ∪ B. Under that assumption, it must be the case that x ∈ A or x ∈ B, so it follows that x ∉ A∁ or x ∉ B∁, and thus x ∉ A∁ ∩ B∁. However, that is in contradiction to the hypothesis that x ∈ A∁ ∩ B∁; therefore, the assumption must not be the case, meaning that x ∉ A ∪ B. Hence, x ∈ (A ∪ B)∁; that is, A∁ ∩ B∁ ⊆ (A ∪ B)∁. Conclusion If (A ∪ B)∁ ⊆ A∁ ∩ B∁ and A∁ ∩ B∁ ⊆ (A ∪ B)∁, then (A ∪ B)∁ = A∁ ∩ B∁; this concludes the proof of De Morgan's law. The other De Morgan's law, (A ∩ B)∁ = A∁ ∪ B∁, is proven similarly. Generalising De Morgan duality In extensions of classical propositional logic, the duality still holds (that is, to any logical operator one can always find its dual), since in the presence of the identities governing negation, one may always introduce an operator that is the De Morgan dual of another. This leads to an important property of logics based on classical logic, namely the existence of negation normal forms: any formula is equivalent to another formula where negations only occur applied to the non-logical atoms of the formula. The existence of negation normal forms drives many applications, for example in digital circuit design, where it is used to manipulate the types of logic gates, and in formal logic, where it is needed to find the conjunctive normal form and disjunctive normal form of a formula. Computer programmers use them to simplify or properly negate complicated logical conditions. They are also often useful in computations in elementary probability theory. Let one define the dual of any propositional operator P(p, q, ...) depending on elementary propositions p, q, ... to be the operator Pᵈ defined by Pᵈ(p, q, ...) = ¬P(¬p, ¬q, ...). Extension to predicate and modal logic This duality can be generalised to quantifiers, so for example the universal quantifier and existential quantifier are duals: ¬∀x P(x) ↔ ∃x ¬P(x) and ¬∃x P(x) ↔ ∀x ¬P(x). To relate these quantifier dualities to the De Morgan laws, consider a domain of discourse D (with some small number of entities) to which properties P are ascribed universally and existentially, such as D = {a, b, c}. Then the universal quantifier is expressed equivalently by a conjunction of individual statements, ∀x P(x) ↔ P(a) ∧ P(b) ∧ P(c), and the existential quantifier by a disjunction of individual statements, ∃x P(x) ↔ P(a) ∨ P(b) ∨ P(c). But, using De Morgan's laws, ¬(P(a) ∧ P(b) ∧ P(c)) ↔ (¬P(a) ∨ ¬P(b) ∨ ¬P(c)) and ¬(P(a) ∨ P(b) ∨ P(c)) ↔ (¬P(a) ∧ ¬P(b) ∧ ¬P(c)), verifying the quantifier dualities in the model. Then, the quantifier dualities can be extended further to modal logic, relating the box ("necessarily") and diamond ("possibly") operators: ¬□P ↔ ◇¬P and ¬◇P ↔ □¬P. In its application to the alethic modalities of possibility and necessity, Aristotle observed this case, and in the case of normal modal logic, the relationship of these modal operators to the quantification can be understood by setting up models using Kripke semantics.
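Over a small finite domain such as D = {a, b, c}, the quantifier dualities reduce to the propositional laws and can be checked mechanically, as in this illustrative Python sketch (the sample predicate is an arbitrary choice):

```python
D = ["a", "b", "c"]                      # a small domain of discourse
P = {"a": True, "b": False, "c": True}   # a sample predicate on D

# not forall x. P(x)  <=>  exists x. not P(x)
assert (not all(P[x] for x in D)) == any(not P[x] for x in D)

# not exists x. P(x)  <=>  forall x. not P(x)
assert (not any(P[x] for x in D)) == all(not P[x] for x in D)
print("quantifier dualities hold on D")
```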
In intuitionistic logic Three out of the four implications of De Morgan's laws hold in intuitionistic logic. Specifically, we have ¬(P ∨ Q) ↔ (¬P ∧ ¬Q) and (¬P ∨ ¬Q) → ¬(P ∧ Q). The converse of the last implication does not hold in pure intuitionistic logic. That is, the failure of the joint proposition P ∧ Q cannot necessarily be resolved to the failure of either of the two conjuncts. For example, from knowing it not to be the case that both Alice and Bob showed up to their date, it does not follow who did not show up. The latter principle is equivalent to the principle of the weak excluded middle, ¬P ∨ ¬¬P. This weak form can be used as a foundation for an intermediate logic. For a refined version of the failing law concerning existential statements, see the lesser limited principle of omniscience LLPO, which however is different from WLPO. The validity of the other three De Morgan's laws remains true if negation ¬P is replaced by implication P → C for some arbitrary constant predicate C, meaning that the above laws are still true in minimal logic. Similarly to the above, the quantifier laws ¬∃x P(x) ↔ ∀x ¬P(x) and ∃x ¬P(x) → ¬∀x P(x) are tautologies even in minimal logic with negation replaced with implying a fixed C, while the converse of the last law does not have to be true in general. Further, a weakened form of the remaining law still holds, but its inversion implies the excluded middle, P ∨ ¬P. In computer engineering De Morgan's laws are widely used in computer engineering and digital logic for the purpose of simplifying circuit designs. In modern programming languages, due to the optimisation of compilers and interpreters, the performance differences between the logically equivalent ways of writing a condition are negligible or completely absent.
Mathematics
Mathematical logic
null
64919
https://en.wikipedia.org/wiki/Environmental%20science
Environmental science
Environmental science is an interdisciplinary academic field that integrates physics, biology, meteorology, mathematics and geography (including ecology, chemistry, plant science, zoology, mineralogy, oceanography, limnology, soil science, geology and physical geography, and atmospheric science) in the study of the environment and the solution of environmental problems. Environmental science emerged from the fields of natural history and medicine during the Enlightenment. Today it provides an integrated, quantitative, and interdisciplinary approach to the study of environmental systems. Environmental studies incorporates more of the social sciences for understanding human relationships, perceptions and policies towards the environment. Environmental engineering focuses on design and technology for improving environmental quality in every aspect. Environmental scientists seek to understand the earth's physical, chemical, biological, and geological processes, and to use that knowledge to understand how issues such as alternative energy systems, pollution control and mitigation, natural resource management, and the effects of global warming and climate change influence and affect the natural systems and processes of earth. Environmental issues almost always include an interaction of physical, chemical, and biological processes. Environmental scientists bring a systems approach to the analysis of environmental problems. Key elements of an effective environmental scientist include the ability to relate space and time relationships as well as to perform quantitative analysis. Environmental science came alive as a substantive, active field of scientific investigation in the 1960s and 1970s, driven by (a) the need for a multi-disciplinary approach to analyze complex environmental problems, (b) the arrival of substantive environmental laws requiring specific environmental protocols of investigation and (c) the growing public awareness of a need for action in addressing environmental problems. Events that spurred this development included the publication of Rachel Carson's landmark environmental book Silent Spring, along with major environmental issues becoming very public, such as the 1969 Santa Barbara oil spill and the Cuyahoga River of Cleveland, Ohio, "catching fire" (also in 1969); these events helped increase the visibility of environmental issues and create this new field of study. Terminology In common usage, "environmental science" and "ecology" are often used interchangeably, but technically, ecology refers only to the study of organisms and their interactions with each other as well as how they interrelate with the environment. Ecology could be considered a subset of environmental science, which also could involve purely chemical or public health issues (for example) that ecologists would be unlikely to study. In practice, there are considerable similarities between the work of ecologists and other environmental scientists. There is substantial overlap between ecology and environmental science with the disciplines of fisheries, forestry, and wildlife. History Ancient civilizations Historical concern for environmental issues is well documented in archives around the world. Ancient civilizations were mainly concerned with what is now known as environmental science insofar as it related to agriculture and natural resources. Scholars believe that early interest in the environment began around 6000 BCE, when ancient civilizations in Israel and Jordan collapsed due to deforestation.
As a result, in 2700 BCE the first legislation limiting deforestation was established in Mesopotamia. Two hundred years later, in 2500 BCE, a community residing in the Indus River Valley observed the nearby river system in order to improve sanitation. This involved manipulating the flow of water to account for public health. In the Western Hemisphere, numerous ancient Central American city-states collapsed around 1500 BCE due to soil erosion from intensive agriculture. Those remaining from these civilizations paid greater attention to the impact of farming practices on the sustainability of the land and its stable food production. Furthermore, in 1450 BCE the Minoan civilization on the Greek island of Crete declined due to deforestation and the resulting environmental degradation of natural resources. Pliny the Elder addressed some of the environmental concerns of ancient civilizations in the text Naturalis Historia, written between 77 and 79 CE, which provided an overview of many related subsets of the discipline. Although warfare and disease were of primary concern in ancient society, environmental issues played a crucial role in the survival and power of different civilizations. As more communities recognized the importance of the natural world to their long-term success, an interest in studying the environment came into existence. Beginnings of environmental science 18th century In 1735, the concept of binomial nomenclature was introduced by Carolus Linnaeus as a way to classify all living organisms, influenced by earlier works of Aristotle. His text, Systema Naturae, represents one of the earliest culminations of knowledge on the subject, providing a means to identify different species based partially on how they interact with their environment. 19th century In the 1820s, scientists were studying the properties of gases, particularly those in the Earth's atmosphere and their interactions with heat from the Sun. Later that century, studies suggested that the Earth had experienced an Ice Age and that warming of the Earth was partially due to what are now known as greenhouse gases (GHG). The greenhouse effect was introduced, although climate science was not yet recognized as an important topic in environmental science due to minimal industrialization and lower rates of greenhouse gas emissions at the time. 20th century In the 1900s, the discipline of environmental science as it is known today began to take shape. The century is marked by significant research, literature, and international cooperation in the field. In the early 20th century, criticism from dissenters downplayed the effects of global warming. At this time, few researchers were studying the dangers of fossil fuels. After a 1.3 degrees Celsius temperature anomaly was found in the Atlantic Ocean in the 1940s, however, scientists renewed their studies of gaseous heat trapping from the greenhouse effect (although only carbon dioxide and water vapor were known to be greenhouse gases then). Nuclear development following the Second World War allowed environmental scientists to intensively study the effects of carbon and make advancements in the field. Further knowledge of changes in climate over time came to light through archaeological evidence and, particularly, ice core sampling. Environmental science was brought to the forefront of society in 1962, when Rachel Carson published an influential piece of environmental literature, Silent Spring.
Carson's writing led the American public to pursue environmental safeguards, such as bans on harmful chemicals like the insecticide DDT. Another important work, The Tragedy of the Commons, was published by Garrett Hardin in 1968 in response to accelerating natural degradation. In 1969, environmental science once again became a household term after two striking disasters: Ohio's Cuyahoga River caught fire due to the amount of pollution in its waters and a Santa Barbara oil spill endangered thousands of marine animals, both receiving prolific media coverage. Consequently, the United States passed an abundance of legislation, including the Clean Water Act and the Great Lakes Water Quality Agreement. The following year, in 1970, the first ever Earth Day was celebrated worldwide and the United States Environmental Protection Agency (EPA) was formed, legitimizing the study of environmental science in government policy. Two years later, following the 1972 United Nations Conference on the Human Environment in Stockholm, Sweden, the United Nations created the United Nations Environment Programme (UNEP) to address global environmental degradation.

Much of the interest in environmental science throughout the 1970s and the 1980s was characterized by major disasters and social movements. In 1978, hundreds of people were relocated from Love Canal, New York after carcinogenic pollutants were found to be buried underground near residential areas. The next year, in 1979, the nuclear power plant on Three Mile Island in Pennsylvania suffered a meltdown and raised concerns about the dangers of radioactive waste and the safety of nuclear energy. In response to landfills and toxic waste often disposed of near their homes, the official Environmental Justice Movement was started by a Black community in North Carolina in 1982. Two years later, toxic methyl isocyanate gas was released from a pesticide plant disaster in Bhopal, India, harming hundreds of thousands of people living near the disaster site, the effects of which are still felt today. In a groundbreaking discovery in 1985, a British team of researchers studying Antarctica found evidence of a hole in the ozone layer, inspiring global agreements banning the use of chlorofluorocarbons (CFCs), which were previously used in nearly all aerosols and refrigerants. Notably, in 1986, the meltdown at the Chernobyl nuclear power plant in Ukraine released radioactive material into the environment, leading to international studies on the ramifications of environmental disasters. Over the next couple of years, the Brundtland Commission (formally the World Commission on Environment and Development) published a report titled Our Common Future, the Montreal Protocol was signed, and the Intergovernmental Panel on Climate Change (IPCC) was formed as international communication focused on finding solutions for climate change and degradation. In the late 1980s, the tanker Exxon Valdez spilled large quantities of crude oil off the coast of Alaska; Exxon was fined for the spill, and the resulting cleanup involved the work of environmental scientists. After hundreds of oil wells were set alight in combat in 1991, warfare between Iraq and Kuwait polluted the surrounding atmosphere to just below the air quality threshold that scientists believed was life-threatening.

21st century

Many niche disciplines of environmental science have emerged over the years, although climatology is one of the best known. Since the 2000s, environmental scientists have focused on modeling the effects of climate change and encouraging global cooperation to minimize potential damages.
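The kind of climate modeling referred to here can be illustrated, in its most stripped-down form, by a zero-dimensional energy-balance model. The sketch below is purely illustrative; the albedo and emissivity values are rough textbook-style assumptions, not outputs of any real climate model.

```python
# A minimal, illustrative zero-dimensional energy-balance model of Earth's
# surface temperature. Parameter values are rough assumptions for illustration.
SOLAR_CONSTANT = 1361.0      # incoming solar irradiance, W/m^2
STEFAN_BOLTZMANN = 5.670e-8  # W/m^2/K^4

def equilibrium_temperature(albedo: float, emissivity: float) -> float:
    """Solve S(1 - a)/4 = e * sigma * T^4 for T (kelvin)."""
    absorbed = SOLAR_CONSTANT * (1.0 - albedo) / 4.0
    return (absorbed / (emissivity * STEFAN_BOLTZMANN)) ** 0.25

print(equilibrium_temperature(albedo=0.3, emissivity=1.0))   # ~255 K, no greenhouse effect
print(equilibrium_temperature(albedo=0.3, emissivity=0.62))  # ~288 K, with an effective greenhouse effect
```

Lowering the effective emissivity below 1 is a crude stand-in for greenhouse gases trapping outgoing infrared radiation, which is why the second call lands near Earth's observed average surface temperature of roughly 288 K.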
In 2002, the Society for the Environment and the Institute of Air Quality Management were founded to share knowledge and develop solutions around the world. Later, in 2008, the United Kingdom became the first country to pass legislation (the Climate Change Act) that aims to reduce carbon dioxide output to a specified threshold. In 2016 the Paris Agreement, the successor to the Kyoto Protocol, entered into force; it sets concrete goals to reduce greenhouse gas emissions and aims to restrict the rise in Earth's temperature to a 2 degrees Celsius maximum. The agreement is one of the most expansive international efforts to limit the effects of global warming to date. Most environmental disasters in this time period involve crude oil pollution or the effects of rising temperatures. In 2010, BP was responsible for the largest American oil spill in the Gulf of Mexico, known as the Deepwater Horizon spill, which killed eleven of the company's workers and released large amounts of crude oil into the water. Furthermore, throughout this century, much of the world has been ravaged by widespread wildfires and water scarcity, prompting regulations on the sustainable use of natural resources as determined by environmental scientists.

The 21st century is marked by significant technological advancements. New technology in environmental science has transformed how researchers gather information about various topics in the field. Research in engines, fuel efficiency, and decreasing emissions from vehicles since the times of the Industrial Revolution has reduced the amount of carbon and other pollutants released into the atmosphere. Furthermore, investment in researching and developing clean energy (such as wind, solar, hydroelectric, and geothermal power) has significantly increased in recent years, indicating the beginnings of divestment from fossil fuel use. Geographic information systems (GIS) are used to observe sources of air or water pollution through satellites and digital imagery analysis. This technology allows for advanced farming techniques like precision agriculture, as well as monitoring water usage in order to set market prices. In the field of water quality, developed strains of natural and manmade bacteria contribute to bioremediation, the treatment of wastewaters for future use. This method is more eco-friendly and cheaper than manual cleanup or treatment of wastewaters. Most notably, the expansion of computer technology has allowed for large-scale data collection, advanced analysis, historical archives, public awareness of environmental issues, and international scientific communication. The ability to crowdsource on the Internet, for example, pools knowledge from researchers around the world to create increased opportunity for scientific progress. With crowdsourcing, data is released to the public for personal analyses which can later be shared as new information is found. Another technological development, blockchain, is used to monitor and regulate global fisheries. By tracking the path of fish through global markets, environmental scientists can observe whether certain species are being overharvested to the point of extinction. Additionally, remote sensing allows for the detection of features of the environment without physical intervention. The resulting digital imagery is used to create increasingly accurate models of environmental processes, climate change, and much more.
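As a concrete illustration of the spectral image analysis described here, one of the most widely used indices in environmental remote sensing is the normalized difference vegetation index (NDVI), which compares red and near-infrared reflectance to estimate vegetation health. The sketch below uses made-up reflectance values rather than real satellite bands.

```python
import numpy as np

# Minimal NDVI computation over toy reflectance rasters. A real workflow would
# read these bands from satellite imagery; the 2x2 arrays here are invented
# values for illustration only.
red = np.array([[0.10, 0.30], [0.25, 0.05]])   # red-band reflectance
nir = np.array([[0.60, 0.35], [0.30, 0.55]])   # near-infrared reflectance

# NDVI = (NIR - Red) / (NIR + Red); values near +1 suggest dense, healthy
# vegetation, values near 0 bare soil, and negative values water or clouds.
ndvi = (nir - red) / (nir + red)
print(ndvi)
```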
Advancements in remote sensing technology are particularly useful in locating the nonpoint sources of pollution and analyzing ecosystem health through image analysis across the electromagnetic spectrum. Lastly, thermal imaging technology is used in wildlife management to catch and discourage poachers and other illegal wildlife traffickers from killing endangered animals, proving useful for conservation efforts. Artificial intelligence has also been used to predict the movement of animal populations and protect the habitats of wildlife.

Components

Atmospheric sciences

Atmospheric sciences focus on the Earth's atmosphere, with an emphasis upon its interrelation to other systems. Atmospheric sciences can include studies of meteorology, greenhouse gas phenomena, atmospheric dispersion modeling of airborne contaminants, sound propagation phenomena related to noise pollution, and even light pollution. Taking the example of the global warming phenomenon, physicists create computer models of atmospheric circulation and infrared radiation transmission, chemists examine the inventory of atmospheric chemicals and their reactions, biologists analyze the plant and animal contributions to carbon dioxide fluxes, and specialists such as meteorologists and oceanographers add additional breadth in understanding the atmospheric dynamics.

Ecology

As defined by the Ecological Society of America, "Ecology is the study of the relationships between living organisms, including humans, and their physical environment; it seeks to understand the vital connections between plants and animals and the world around them." Ecologists might investigate the relationship between a population of organisms and some physical characteristic of their environment, such as concentration of a chemical; or they might investigate the interaction between two populations of different organisms through some symbiotic or competitive relationship. For example, an interdisciplinary analysis of an ecological system which is being impacted by one or more stressors might include several related environmental science fields. In an estuarine setting where a proposed industrial development could impact certain species by water and air pollution, biologists would describe the flora and fauna, chemists would analyze the transport of water pollutants to the marsh, physicists would calculate air pollution emissions and geologists would assist in understanding the marsh soils and bay muds.

Environmental chemistry

Environmental chemistry is the study of chemical alterations in the environment. Principal areas of study include soil contamination and water pollution. The topics of analysis include chemical degradation in the environment, multi-phase transport of chemicals (for example, evaporation of a solvent-containing lake to yield solvent as an air pollutant), and chemical effects upon biota. As an example study, consider the case of a leaking solvent tank which has entered the habitat soil of an endangered species of amphibian. As a method to resolve or understand the extent of soil contamination and subsurface transport of solvent, a computer model would be implemented. Chemists would then characterize the molecular bonding of the solvent to the specific soil type, and biologists would study the impacts upon soil arthropods, plants, and ultimately pond-dwelling organisms that are the food of the endangered amphibian.
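A computer model of the kind described in this example might, in a deliberately simplified form, track the solvent plume as it advects with groundwater and degrades over time. The sketch below is a minimal illustration; all parameter values (velocity, decay rate, source concentration) are invented rather than measured.

```python
# Toy one-dimensional transport model for a dissolved solvent plume:
# concentration moves downgradient with the groundwater (advection) and is
# lost at a first-order rate (degradation and sorption lumped together).
# All parameter values are assumptions for illustration.
def plume_after(days: int, cells: int = 10) -> list[float]:
    velocity = 0.2      # fraction of a cell the plume advances per day
    decay = 0.01        # first-order loss per day
    conc = [0.0] * cells
    conc[0] = 100.0     # source concentration at the leaking tank (mg/L)
    for _ in range(days):
        # upwind advection: each cell passes a share downgradient
        new = [0.0] * cells
        for i in range(cells):
            new[i] += conc[i] * (1.0 - velocity)
            if i + 1 < cells:
                new[i + 1] += conc[i] * velocity
        conc = [c * (1.0 - decay) for c in new]
        conc[0] = 100.0  # the ongoing leak keeps the source cell replenished
    return conc

print([round(c, 1) for c in plume_after(days=60)])
```

Even a toy model like this shows the qualitative behaviour the field team cares about: how far downgradient the plume front has reached after a given time, and how much degradation attenuates it along the way.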
Geosciences

Geosciences include environmental geology, environmental soil science, volcanic phenomena and evolution of the Earth's crust. In some classification systems this can also include hydrology, including oceanography. As an example study of soil erosion, calculations of surface runoff would be made by soil scientists. Fluvial geomorphologists would assist in examining sediment transport in overland flow. Physicists would contribute by assessing the changes in light transmission in the receiving waters. Biologists would analyze subsequent impacts to aquatic flora and fauna from increases in water turbidity.

Regulations driving the studies

In the United States the National Environmental Policy Act (NEPA) of 1969 set forth requirements for analysis of federal government actions (such as highway construction projects and land management decisions) in terms of specific environmental criteria. Numerous state laws have echoed these mandates, applying the principles to local-scale actions. The upshot has been an explosion of documentation and study of environmental consequences before the fact of development actions. One can examine the specifics of environmental science by reading examples of Environmental Impact Statements prepared under NEPA, such as: wastewater treatment expansion options discharging into the San Diego/Tijuana Estuary, expansion of the San Francisco International Airport, development of the Houston Metro transportation system, expansion of the metropolitan Boston MBTA transit system, and construction of Interstate 66 through Arlington, Virginia.

In England and Wales the Environment Agency (EA), formed in 1996, is a public body for protecting and improving the environment; it enforces the regulations listed on the communities and local government site (formerly the Office of the Deputy Prime Minister). The agency was set up under the Environment Act 1995 as an independent body and works closely with the UK Government to enforce the regulations.
Ageing
Ageing (or aging in American English) is the process of becoming older. The term refers mainly to humans, many other animals, and fungi, whereas, for example, bacteria, perennial plants and some simple animals are potentially biologically immortal. In a broader sense, ageing can refer to single cells within an organism which have ceased dividing, or to the population of a species. In humans, ageing represents the accumulation of changes in a human being over time and can encompass physical, psychological, and social changes. Reaction time, for example, may slow with age, while memories and general knowledge typically increase. Ageing is associated with increased risk of cancer, Alzheimer's disease, diabetes, cardiovascular disease, increased mental health risks, and many more conditions. Of the roughly 150,000 people who die each day across the globe, about two-thirds die from age-related causes. Certain lifestyle choices and socioeconomic conditions have been linked to ageing. Current ageing theories are assigned to the damage concept, whereby the accumulation of damage (such as DNA oxidation) may cause biological systems to fail, or to the programmed ageing concept, whereby internal processes (epigenetic maintenance such as DNA methylation) may inherently cause ageing. Programmed ageing should not be confused with programmed cell death (apoptosis).

Ageing versus immortality

Human beings and members of other species, especially animals, age and die. Fungi, too, can age. In contrast, many species can be considered potentially immortal: for example, bacteria fission to produce daughter cells, strawberry plants grow runners to produce clones of themselves, and animals in the genus Hydra have a regenerative ability by which they avoid dying of old age. Early life forms on Earth, starting at least 3.7 billion years ago, were single-celled organisms. Such organisms (prokaryotes, protozoans, algae) multiply by fission into daughter cells; thus single-celled organisms have been thought not to age and to be potentially immortal under favorable conditions. However, evidence has been reported that ageing leading to death occurs in the single-celled bacterium Escherichia coli, an organism that reproduces by morphologically symmetrical division. Evidence of ageing has also been reported for the bacterium Caulobacter crescentus and the single-celled yeast Saccharomyces cerevisiae. Ageing and mortality of the individual organism became more evident with the evolution of eukaryotic sexual reproduction, which occurred with the emergence of the fungal/animal kingdoms approximately a billion years ago, and with the evolution of seed-producing plants 320 million years ago. The sexual organism could henceforth pass on some of its genetic material to produce new individuals and could itself become disposable with respect to the survival of its species. This classic biological idea has however been perturbed recently by the discovery that the bacterium E. coli may split into distinguishable daughter cells, which opens the theoretical possibility of "age classes" among bacteria. Even within humans and other mortal species, there are cells with the potential for immortality: cancer cells which have lost the ability to die when maintained in a cell culture such as the HeLa cell line, and specific stem cells such as germ cells (producing ova and spermatozoa). In artificial cloning, adult cells can be rejuvenated to embryonic status and then used to grow a new tissue or animal without ageing.
Normal human cells, however, die after about 50 cell divisions in laboratory culture (the Hayflick limit, discovered by Leonard Hayflick in 1961).

Symptoms

A number of characteristic ageing symptoms are experienced by a majority of humans, or by a significant proportion of them, during their lifetimes. Teenagers lose the young child's ability to hear high-frequency sounds above 20 kHz. Wrinkles develop mainly due to photoageing, particularly affecting sun-exposed areas such as the face. After peaking from the late teens to the late 20s, female fertility declines. After age 30, the mass of the human body decreases until about age 70 and then shows damping oscillations. People over 35 years of age are at increasing risk of losing strength in the ciliary muscle of the eyes, which leads to difficulty focusing on close objects, or presbyopia. Most people experience presbyopia by age 45–50. The cause is lens hardening by decreasing levels of alpha-crystallin, a process which may be sped up by higher temperatures. Around age 55, hair turns grey. Pattern hair loss by the age of 55 affects about 30–50% of males and a quarter of females. Menopause typically occurs between 44 and 58 years of age. In the 60–64 age cohort, the incidence of osteoarthritis rises to 53%. Only 20%, however, report disabling osteoarthritis at this age. Almost half of people older than 75 have hearing loss (presbycusis), inhibiting spoken communication. Many vertebrates such as fish, birds and amphibians do not develop presbycusis in old age, as they are able to regenerate their cochlear sensory cells; mammals, including humans, have genetically lost this ability. By age 80, more than half of all Americans either have a cataract or have had cataract surgery. Frailty, a syndrome of decreased strength, physical activity, physical performance and energy, affects 25% of those over 85. Muscles have a reduced capacity to respond to exercise or injury, and loss of muscle mass and strength (sarcopenia) is common. Maximum oxygen use and maximum heart rate decline. Hand strength and mobility decrease. Atherosclerosis is classified as an ageing disease. It leads to cardiovascular disease (for example, stroke and heart attacks), which, globally, is the most common cause of death. Vessel ageing causes vascular remodelling and loss of arterial elasticity, and as a result causes stiffness of the vasculature. Evidence suggests that age-related risk of death plateaus after the age of 105. The maximum human lifespan is suggested to be 115 years. The oldest reliably recorded human was Jeanne Calment, who died in 1997 at 122. Dementia becomes more common with age. About 3% of people between the ages of 65 and 74, 19% of those between 75 and 84, and nearly half of those over 85 years old have dementia. The spectrum ranges from mild cognitive impairment to the neurodegenerative diseases of Alzheimer's disease, cerebrovascular disease, Parkinson's disease and Lou Gehrig's disease. Furthermore, many types of memory decline with ageing, but not semantic memory or general knowledge such as vocabulary definitions. These typically increase or remain steady until late adulthood (see Ageing brain). Intelligence declines with age, though the rate varies depending on the type and may, in fact, remain steady throughout most of the human lifespan, dropping suddenly only as people near the end of their lives. Individual variations in the rate of cognitive decline may therefore be explained in terms of people having different lengths of life.
There are changes to the brain: after 20 years of age, there is a 10% reduction each decade in the total length of the brain's myelinated axons. Age can result in visual impairment, whereby non-verbal communication is reduced, which can lead to isolation and possible depression. Older adults, however, may not experience depression as much as younger adults, and were paradoxically found to have improved mood despite declining physical health. Macular degeneration causes vision loss and increases with age, affecting nearly 12% of those above the age of 80. This degeneration is caused by systemic changes in the circulation of waste products and by the growth of abnormal vessels around the retina. Other visual diseases that often appear with age are cataracts and glaucoma. A cataract occurs when the lens of the eye becomes cloudy, making vision blurry; it eventually causes blindness if untreated. Cataracts develop over time and are seen most often in those who are older; they can be treated through surgery. Glaucoma is another common visual disease that appears in older adults. Glaucoma is caused by damage to the optic nerve, causing vision loss. Glaucoma usually develops over time, but there are variants of glaucoma, and some have a sudden onset. There are a few procedures for glaucoma, but there is no cure for the damage once it has occurred. Prevention is the best measure in the case of glaucoma. In addition to physical symptoms, ageing can also cause a number of mental health issues as older adults deal with challenges such as the death of loved ones, retirement and loss of purpose, as well as their own health issues. Some warning signs are: changes in mood or energy, changes in sleep or eating habits, pain, sadness, unhealthy coping mechanisms such as smoking, suicidal ideation, and others. Older adults are more prone to social isolation as well, which can further increase the risk of physical and mental conditions such as anxiety, depression, and cognitive decline. A distinction can be made between "proximal ageing" (age-based effects that come about because of factors in the recent past) and "distal ageing" (age-based differences that can be traced to a cause in a person's early life, such as childhood poliomyelitis). Ageing is among the greatest known risk factors for most human diseases. Of the roughly 150,000 people who die each day across the globe, about two-thirds (100,000 per day) die from age-related causes. In industrialized nations, the proportion is higher, reaching 90%.

Biological basis

In the 21st century, researchers are only beginning to investigate the biological basis of ageing even in relatively simple and short-lived organisms, such as yeast. Little is known of mammalian ageing, in part due to the much longer lives of even small mammals, such as the mouse (around 3 years). A model organism for the study of ageing is the nematode C. elegans, whose short lifespan of 2–3 weeks enables genetic manipulations such as suppression of gene activity with RNA interference, among other techniques. Most known mutations and RNA interference targets that extend lifespan were first discovered in C. elegans. The factors proposed to influence biological ageing fall into two main categories, programmed and error-related. Programmed factors follow a biological timetable that might be a continuation of inherent mechanisms that regulate childhood growth and development.
This regulation would depend on changes in gene expression that affect the systems responsible for maintenance, repair and defense responses. Factors causing errors or damage include internal and environmental events that induce cumulative deterioration in one or more organs.

Molecular and cellular hallmarks of ageing

One 2013 review assessed ageing through the lens of the damage theory, proposing nine metabolic "hallmarks" of ageing in various organisms, but especially mammals:

genomic instability (mutations accumulated in nuclear DNA, in mtDNA, and in the nuclear lamina)
telomere attrition (the authors note that artificial telomerase confers non-cancerous immortality to otherwise mortal cells)
epigenetic alterations (including DNA methylation patterns, post-translational modification of histones, and chromatin remodelling); ageing and disease are related to a misregulation of gene expression through impaired methylation patterns, from hypomethylation to hypermethylation
loss of proteostasis (protein folding and proteolysis)
deregulated nutrient sensing (relating to the Growth hormone/Insulin-like growth factor 1 signalling pathway, which is the most conserved ageing-controlling pathway in evolution; among its targets are the FOXO3/Sirtuin transcription factors and the mTOR complexes, probably responsive to caloric restriction)
mitochondrial dysfunction (the authors point out, however, that a causal link between ageing and increased mitochondrial production of reactive oxygen species is no longer supported by recent research)
cellular senescence (accumulation of no-longer-dividing cells in certain tissues, a process induced especially by p16INK4a/Rb and p19ARF/p53 to stop cancerous cells from proliferating)
stem cell exhaustion (in the authors' view caused by damage factors such as those listed above)
altered intercellular communication (encompassing especially inflammation, but possibly also other intercellular interactions)

Related phenomena include:

inflammageing, a chronic inflammatory phenotype in the elderly in the absence of viral infection, due to over-activation and a decrease in the precision of the innate immune system
dysbiosis of the gut microbiome (e.g., loss of microbial diversity, expansion of enteropathogens, and altered vitamin B12 biosynthesis), which is correlated with biological age rather than chronological age

Metabolic pathways involved in ageing

There are three main metabolic pathways which can influence the rate of ageing, discussed below:

the FOXO3/Sirtuin pathway, probably responsive to caloric restriction
the Growth hormone/Insulin-like growth factor 1 signalling pathway
the activity levels of the electron transport chain in mitochondria and (in plants) in chloroplasts

It is likely that most of these pathways affect ageing separately, because targeting them simultaneously leads to additive increases in lifespan.

Programmed factors

The rate of ageing varies substantially across different species, and this, to a large extent, is genetically based. For example, numerous perennial plants ranging from strawberries and potatoes to willow trees typically produce clones of themselves by vegetative reproduction and are thus potentially immortal, while annual plants such as wheat and watermelons die each year and reproduce by sexual reproduction. In 2008 it was discovered that inactivation of only two genes in the annual plant Arabidopsis thaliana leads to its conversion into a potentially immortal perennial plant.
The oldest animals known so far are 15,000-year-old Antarctic sponges, which can reproduce both sexually and clonally. Clonal immortality apart, there are certain species whose individual lifespans stand out among Earth's life-forms, including the bristlecone pine at 5062 years or 5067 years, invertebrates like the hard clam (known as quahog in New England) at 508 years, the Greenland shark at 400 years, various deep-sea tube worms at over 300 years, fish like the sturgeon and the rockfish, and the sea anemone and lobster. Such organisms are sometimes said to exhibit negligible senescence. The genetic aspect has also been demonstrated in studies of human centenarians.

Evolution of ageing

Life span, like other phenotypes, is selected for in evolution. Traits that benefit early survival and reproduction will be selected for even if they contribute to an earlier death. Such a genetic effect is called the antagonistic pleiotropy effect when referring to a gene (pleiotropy signifying the gene has a double function: enabling reproduction at a young age but costing the organism life expectancy in old age) and is called the disposable soma effect when referring to an entire genetic programme (the organism diverting limited resources from maintenance to reproduction). The biological mechanisms which regulate lifespan probably evolved with the first multicellular organisms more than a billion years ago. However, even single-celled organisms such as yeast have been used as models of ageing, hence ageing has biological roots much earlier than multicellularity.

Damage-related factors

DNA damage theory of ageing: DNA damage is thought to be the common basis of both cancer and ageing, and it has been argued that intrinsic causes of DNA damage are the most important causes of ageing. Genetic damage (aberrant structural alterations of the DNA), mutations (changes in the DNA sequence), and epimutations (methylation of gene promoter regions or alterations of the DNA scaffolding which regulate gene expression) can cause abnormal gene expression. DNA damage causes the cells to stop dividing or induces apoptosis, often affecting stem cell pools and therefore hindering regeneration. However, lifelong studies of mice suggest that most mutations happen during embryonic and childhood development, when cells divide often, as each cell division is a chance for errors in DNA replication. A meta-analysis of 36 studies with 4,676 participants showed an association between age and DNA damage in humans. In the human hematopoietic stem cell compartment, DNA damage accumulates with age. In healthy humans after 50 years of age, chronological age shows a linear association with DNA damage accumulation in blood mononuclear cells. Genome-wide profiles of DNA damage can be used as highly accurate predictors of mammalian age.

Genetic instability: Dogs annually lose approximately 3.3% of the DNA in their heart muscle cells, while humans lose approximately 0.6% of their heart muscle DNA each year. These numbers are close to the ratio of the maximum longevities of the two species (120 years vs. 20 years, a 6/1 ratio). The comparative percentage is also similar between the dog and human for yearly DNA loss in the brain and lymphocytes. As stated by lead author Bernard L. Strehler, "... genetic damage (particularly gene loss) is almost certainly (or probably) the central cause of ageing."

Accumulation of waste: A buildup of waste products in cells presumably interferes with metabolism.
For example, a waste product called lipofuscin is formed by a complex reaction in cells that binds fat to proteins. Lipofuscin may accumulate in the cells as small granules during ageing. The hallmark of ageing yeast cells appears to be overproduction of certain proteins. Autophagy induction can enhance clearance of toxic intracellular waste associated with neurodegenerative diseases and has been comprehensively demonstrated to improve lifespan in yeast, worms, flies, rodents and primates. The situation, however, has been complicated by the identification that autophagy up-regulation can also occur during ageing.

Wear-and-tear theory: The general idea that changes associated with ageing are the result of chance damage that accumulates over time.

Accumulation of errors: The idea that ageing results from chance events that escape proofreading mechanisms, which gradually damage the genetic code.

Heterochromatin loss model of ageing.

Cross-linkage: The idea that ageing results from accumulation of cross-linked compounds that interfere with normal cell function.

Studies of mtDNA mutator mice have shown that increased levels of somatic mtDNA mutations can directly cause a variety of ageing phenotypes. The authors propose that mtDNA mutations lead to respiratory-chain-deficient cells and thence to apoptosis and cell loss. They cast doubt experimentally, however, on the common assumption that mitochondrial mutations and dysfunction lead to increased generation of reactive oxygen species (ROS).

Free-radical theory: Damage by free radicals, or more generally reactive oxygen species or oxidative stress, creates damage that may give rise to the symptoms we recognise as ageing. The effect of calorie restriction may be due to increased formation of free radicals within the mitochondria, causing a secondary induction of increased antioxidant defence capacity.

Mitochondrial theory of ageing: Free radicals produced by mitochondrial activity damage cellular components, leading to ageing.

DNA oxidation and caloric restriction: Caloric restriction reduces 8-OH-dG DNA damage in organs of ageing rats and mice. Thus, reduction of oxidative DNA damage is associated with a slower rate of ageing and increased lifespan. In a 2021 review article, Vijg stated that "Based on an abundance of evidence, DNA damage is now considered as the single most important driver of the degenerative processes that collectively cause ageing."

Research

Diet

The Mediterranean diet is credited with lowering the risk of heart disease and early death. The major contributors to mortality risk reduction appear to be a higher consumption of vegetables, fish, fruits, nuts and monounsaturated fatty acids, such as by consuming olive oil. As of 2021, there is insufficient clinical evidence that calorie restriction or any dietary practice affects the process of ageing.

Exercise

People who participate in moderate to high levels of physical exercise have a lower mortality rate compared to individuals who are not physically active. The majority of the benefits from exercise are achieved with around 3500 metabolic equivalent (MET) minutes per week. For example, climbing stairs for 10 minutes, vacuuming for 15 minutes, gardening for 20 minutes, running for 20 minutes, and walking or bicycling for 25 minutes on a daily basis would together achieve about 3000 MET minutes a week (see the worked tally below). Exercise has also been found to be an effective measure to treat declines in neuromuscular function due to age.
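The MET-minute arithmetic in the example above can be reproduced directly. The MET intensity assigned to each activity below is a rough, compendium-style assumption rather than a figure from the source.

```python
# Rough weekly MET-minute tally for the daily activity pattern described above.
# The MET intensities are approximate assumed values for illustration; the
# source text does not specify them.
activities = {
    # activity: (minutes per day, assumed MET intensity)
    "climbing stairs": (10, 8.0),
    "vacuuming": (15, 3.5),
    "gardening": (20, 4.0),
    "running": (20, 8.0),
    "walking or bicycling": (25, 4.0),
}

weekly_met_minutes = 7 * sum(minutes * met for minutes, met in activities.values())
print(f"{weekly_met_minutes:.0f} MET minutes per week")  # ~3300 with these assumptions
```

With these assumed intensities the weekly total comes out near 3,300 MET minutes, in the same ballpark as the roughly 3,000 cited above; different MET tables would shift the total somewhat.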
A meta-analysis found that resistance training with elastic bands or kettlebells provided significant improvements to grip strength, gait speed, and skeletal muscle mass in patients with sarcopenia. Furthermore, another analysis found that the positive effects of resistance exercise on strength, muscle mass, and motor coordination reduce the risk of falls in the elderly, which is a key factor for living a longer and healthier life. In terms of programming, there is no one-size-fits-all approach. General recommendations for improvements to gait speed, strength, and muscle size for reduced fall risk are resistance training programs with two to three 40–60 minute workouts per week, consisting of 1–2 sets of 5–8 repetitions of 2–3 different exercises for each major muscle group, but individual considerations must be made for differences in health status, motivation, and access to exercise facilities. There is also evidence to suggest that exercise of any type may mitigate the degradation of the neuromuscular junction (NMJ) that occurs with age. Current evidence suggests that aerobic exercise causes the most hypertrophy of the NMJ, although resistance training is still somewhat effective. However, further evidence is necessary to identify optimal training protocols for NMJ function and to further understand how exercise affects the mechanisms that cause NMJ degradation.

Social factors

A meta-analysis showed that loneliness carries a higher mortality risk than smoking.

Society and culture

Different cultures express age in different ways. The age of an adult human is commonly measured in whole years since the day of birth. (The most notable exception, East Asian age reckoning, is becoming less common, particularly in official contexts.) Arbitrary divisions set to mark periods of life may include juvenile (from infancy through childhood, preadolescence, and adolescence), early adulthood, middle adulthood, and late adulthood. Informal terms include "tweens", "teenagers", "twentysomething", "thirtysomething", etc. as well as "denarian", "vicenarian", "tricenarian", "quadragenarian", etc. Most legal systems define a specific age for when an individual is allowed or obliged to do particular activities. These age specifications include voting age, drinking age, age of consent, age of majority, age of criminal responsibility, marriageable age, age of candidacy, and mandatory retirement age. Admission to a movie, for instance, may depend on age according to a motion picture rating system. A bus fare might be discounted for the young or old. Each nation, government, and non-governmental organization has different ways of classifying age. In other words, chronological ageing may be distinguished from "social ageing" (cultural age-expectations of how people should act as they grow older) and "biological ageing" (an organism's physical state as it ages). Ageism cost the United States $63 billion in one year, according to a Yale School of Public Health study. A UNFPA report about ageing in the 21st century highlighted the need to "Develop a new rights-based culture of ageing and a change of mindset and societal attitudes towards ageing and older persons, from welfare recipients to active, contributing members of society".
UNFPA said that this "requires, among others, working towards the development of international human rights instruments and their translation into national laws and regulations and affirmative measures that challenge age discrimination and recognise older people as autonomous subjects". Older people's participation in music contributes to maintaining interpersonal relationships and promoting successful ageing. At the same time, older persons can make contributions to society, including caregiving and volunteering. For example, "A study of Bolivian migrants who [had] moved to Spain found that 69% left their children at home, usually with grandparents. In rural China, grandparents care for 38% of children aged under five whose parents have gone to work in cities."

Economics

Population ageing is the increase in the number and proportion of older people in society. Population ageing has three possible causes: migration, longer life expectancy (decreased death rate) and decreased birth rate. Ageing has a significant impact on society. Young people tend to have fewer legal privileges (if they are below the age of majority), and they are more likely to push for political and social change, to develop and adopt new technologies, and to need education. Older people have different requirements from society and government, and frequently have differing values as well, such as for property and pension rights. In the 21st century, one of the most significant population trends is ageing. Currently, over 11% of the world's population are people aged 60 and older, and the United Nations Population Fund (UNFPA) estimates that by 2050 that number will rise to approximately 22%. Ageing has occurred due to development which has enabled better nutrition, sanitation, health care, education and economic well-being. Consequently, fertility rates have continued to decline and life expectancy has risen. Life expectancy at birth is now over 80 in 33 countries. Ageing is a "global phenomenon" that is occurring fastest in developing countries, including those with large youth populations, and it poses social and economic challenges which can be overcome with "the right set of policies to equip individuals, families and societies to address these challenges and to reap its benefits". As life expectancy rises and birth rates decline in developed countries, the median age rises accordingly. According to the United Nations, this process is taking place in nearly every country in the world. A rising median age can have significant social and economic implications, as the workforce gets progressively older and the number of old workers and retirees grows relative to the number of young workers. Older people generally incur more health-related costs than do younger people in the workplace and can also cost more in worker's compensation and pension liabilities. In most developed countries an older workforce is somewhat inevitable. In the United States, for instance, the Bureau of Labor Statistics estimated that one in four American workers would be 55 or older by 2020. Among the most urgent concerns of older persons worldwide is income security. This poses challenges for governments with ageing populations to ensure that investments in pension systems continue to provide economic independence and reduce poverty in old age. These challenges vary for developing and developed countries.
UNFPA stated that, "Sustainability of these systems is of particular concern, particularly in developed countries, while social protection and old-age pension coverage remain a challenge for developing countries, where a large proportion of the labour force is found in the informal sector." The global economic crisis has increased financial pressure to ensure economic security and access to health care in old age. To elevate this pressure "social protection floors must be implemented in order to guarantee income security and access to essential health and social services for all older persons and provide a safety net that contributes to the postponement of disability and prevention of impoverishment in old age". It has been argued that population ageing has undermined economic development and can lead to lower inflation because elderly individuals care especially strongly about the value of their pensions and savings. Evidence suggests that pensions, while making a difference to the well-being of older persons, also benefit entire families especially in times of crisis when there may be a shortage or loss of employment within households. A study by the Australian Government in 2003 estimated that "women between the ages of 65 and 74 years contribute A$16 billion per year in unpaid caregiving and voluntary work. Similarly, men in the same age group contributed A$10 billion per year." Due to increasing share of the elderly in the population, health care expenditures will continue to grow relative to the economy in coming decades. This has been considered as a negative phenomenon and effective strategies like labour productivity enhancement should be considered to deal with negative consequences of ageing. Sociology In the field of sociology and mental health, ageing is seen in five different views: ageing as maturity, ageing as decline, ageing as a life-cycle event, ageing as generation, and ageing as survival. Positive correlates with ageing often include economics, employment, marriage, children, education, and sense of control, as well as many others. The social science of ageing includes disengagement theory, activity theory, selectivity theory, and continuity theory. Retirement, a common transition faced by the elderly, may have both positive and negative consequences. As cyborgs currently are on the rise some theorists argue there is a need to develop new definitions of ageing and for instance a bio-techno-social definition of ageing has been suggested. There is a current debate as to whether or not the pursuit of longevity and the postponement of senescence are cost-effective health care goals given finite health care resources. Because of the accumulated infirmities of old age, bioethicist Ezekiel Emanuel, opines that the pursuit of longevity via the compression of morbidity hypothesis is a "fantasy" and that human life is not worth living after age 75; longevity then should not be a goal of health care policy. This opinion has been contested by neurosurgeon and medical ethicist Miguel Faria, who states that life can be worthwhile during old age, and that longevity should be pursued in association with the attainment of quality of life. Faria claims that postponement of senescence as well as happiness and wisdom can be attained in old age in a large proportion of those who lead healthy lifestyles and remain intellectually active. Health care demand With age inevitable biological changes occur that increase the risk of illness and disability. 
UNFPA states that: "A life-cycle approach to health care – one that starts early, continues through the reproductive years and lasts into old age – is essential for the physical and emotional well-being of older persons, and, indeed, all people. Public policies and programmes should additionally address the needs of older impoverished people who cannot afford health care." Many societies in Western Europe and Japan have ageing populations. While the effects on society are complex, there is a concern about the impact on health care demand. The large number of suggestions in the literature for specific interventions to cope with the expected increase in demand for long-term care in ageing societies can be organized under four headings: improve system performance; redesign service delivery; support informal caregivers; and shift demographic parameters. However, the annual growth in national health spending is not mainly due to increasing demand from ageing populations, but rather has been driven by rising incomes, costly new medical technology, a shortage of health care workers and informational asymmetries between providers and patients. A number of health problems become more prevalent as people get older. These include mental health problems as well as physical health problems, especially dementia. It has been estimated that population ageing only explains 0.2 percentage points of the annual growth rate in medical spending of 4.3% since 1970. In addition, certain reforms to the Medicare system in the United States decreased elderly spending on home health care by 12.5% per year between 1996 and 2000. Self-perception Beauty standards have evolved over time, and as scientific research in cosmeceuticals, cosmetic products seen to have medicinal benefits like anti-ageing creams, has increased, the industry has also expanded; the kinds of products they produce (such as serums and creams) have gradually gained popularity and become a part of many people's personal care routine. The increase in demand for cosmeceuticals has led scientists to find ingredients for these products in unorthodox places. For example, the secretion of cryptomphalus aspersa (or brown garden snail) has been found to have antioxidant properties, increase skin cell proliferation, and increase extracellular proteins such as collagen and fibronectin (important proteins for cell proliferation). Another substance used to prevent the physical manifestations of ageing is onobotulinumtoxinA, the toxin injected for Botox. In some cultures, old age is celebrated and honoured. In Korea, for example, a special party called hwangap is held to celebrate and congratulate an individual for turning 60 years old. In China, respect for elderly is often the basis for how a community is organized and has been at the foundation of Chinese culture and morality for thousands of years. Older people are respected for their wisdom and most important decisions have traditionally not been made without consulting them. This is a similar case for most Asian countries such as the Philippines, Thailand, Vietnam, Singapore, etc. Positive self-perceptions of ageing are associated with better mental and physical health and well-being. Positive self-perception of health has been correlated with higher well-being and reduced mortality among the elderly. 
Various reasons have been proposed for this association; people who are objectively healthy may naturally rate their health better than that of their ill counterparts, though this link has been observed even in studies which have controlled for socioeconomic status, psychological functioning and health status. This finding is generally stronger for men than women, though this relationship is not universal across all studies and may only be true in some circumstances. As people age, subjective health remains relatively stable, even though objective health worsens. In fact, perceived health improves with age when objective health is controlled for in the equation. This phenomenon is known as the "paradox of ageing". This may be a result of social comparison; for instance, the older people get, the more they may consider themselves in better health than their same-aged peers. Elderly people often associate their functional and physical decline with the normal ageing process.

One way to help younger people experience what it feels like to be older is through an ageing suit. There are several different kinds of suits, including the GERT (named as a reference to gerontology), the R70i exoskeleton, and the AGNES (Age Gain Now Empathy Suit) suits. These suits create the feeling of the effects of ageing by adding extra weight and increased pressure at certain points like the wrists, ankles and other joints. In addition, the various suits have different ways to impair vision and hearing to simulate the loss of these senses. To create the loss of feeling in the hands that the elderly experience, special gloves are part of the uniforms. Use of these suits may help to increase the amount of empathy felt for the elderly, and could be considered particularly useful for those who are either learning about ageing or who work with the elderly, such as nurses or care centre staff. Design is another field that could benefit from the empathy these suits may cause. When designers understand what it feels like to have the impairments of old age, they can better design buildings, packaging, or even tools to help with the simple day-to-day tasks that are more difficult with less dexterity. Designing with the elderly in mind may help to reduce the negative feelings that are associated with the loss of abilities that the elderly face.

Healthy ageing

The healthy ageing framework, proposed by the World Health Organization, operationalizes health as functional ability, which results from the interaction of intrinsic capacity and the environment.

Intrinsic capacity

Intrinsic capacity is a construct encompassing people's physical and mental abilities which can be drawn upon during ageing. Intrinsic capacity comprises the domains of cognition, locomotion, vitality/nutrition, psychological capacity, and sensory capacity (visual and hearing). A recent study found four "profiles" or "statuses" of intrinsic capacity among older adults, namely high IC (43% at baseline), low deterioration with impaired locomotion (17%), high deterioration without cognitive impairment (22%) and high deterioration with cognitive impairment (18%). Over half of the study sample remained in the same status at baseline and follow-up (61%). Around one-fourth of participants transitioned from the high IC to the low deterioration status, and only 3% of participants improved their status. Notably, improvement was observed even from the statuses of high deterioration.
Participants in the latent statuses of low and high levels of deterioration had a significantly higher risk of frailty, disability and dementia than their high IC counterparts.

Successful aging

The concept of successful aging can be traced back to the 1950s and was popularized in the 1980s. Traditional definitions of successful aging have emphasized absence of physical and cognitive disabilities. In their 1987 article, Rowe and Kahn characterized successful aging as involving three components: a) freedom from disease and disability, b) high cognitive and physical functioning, and c) social and productive engagement. That study dates back to 1987, however, and the factors associated with successful aging may have changed since. With current knowledge, scientists have begun to focus on the effect of spirituality on successful aging. There are some differences among cultures as to which of these components are the most important; across cultures, social engagement is most often the most highly rated, but the definition of successful aging varies by culture.

Cultural references

The ancient Greek dramatist Euripides (5th century BC) describes the multiple-headed mythological monster Hydra as having a regenerative capacity which makes it immortal, which is the historical background to the name of the biological genus Hydra. The Book of Job (c. 6th century BC) describes the human lifespan as inherently limited and makes a comparison with the innate immortality that a felled tree may have when undergoing vegetative regeneration:
Suicide crisis
A suicide crisis, suicidal crisis or potential suicide is a situation in which a person is attempting to kill themselves or is seriously contemplating or planning to do so. It is considered by public safety authorities, medical practice, and emergency services to be a medical emergency, requiring immediate suicide intervention and emergency medical treatment. Suicidal presentations occur when an individual faces an emotional, physical, or social problem they feel they cannot overcome and considers suicide to be a solution. Clinicians usually attempt to re-frame suicidal crises, point out that suicide is not a solution, and help the individual identify and solve or tolerate the problems.

Nature

Most cases of potential suicide have warning signs. Attempting to kill oneself, talking about or planning suicide, writing a suicide note, talking or thinking frequently about death, exhibiting a death wish by expressing it verbally or by taking potentially deadly risks, or taking steps towards attempting suicide (e.g., obtaining rope and tying it to a ligature point to attempt a hanging, or stockpiling pills for an attempted overdose) are all indicators of a suicide crisis. More subtle clues include preparing for death for no apparent reason (such as putting affairs in order, changing a will, etc.), writing goodbye letters, and visiting or calling family members or friends to say farewell. The person may also start giving away previously valued items (because they "no longer need them"). In other cases, the person who seemed depressed and suicidal may become normal or filled with energy or calmness again; these people particularly need to be watched, because the return to normalcy could be because they have come to terms with whatever act is next (e.g., a plan to attempt suicide and "escape" from their problems). Depression is a major causative factor of suicide, and individuals with depression are considered a high-risk group for suicidal behavior. However, suicidal behaviour is not restricted to patients diagnosed with some form of depression. More than 90% of all suicides are related to a mood disorder, such as bipolar disorder or depression, or to other psychiatric illnesses, such as addiction, PTSD, or schizophrenia. The deeper the depression, the greater the risk, often manifested in feelings or expressions of apathy, helplessness, hopelessness, or worthlessness. Suicide is often committed in response to a cause of depression, such as the end of a romantic relationship, serious illness or injury (like the loss of a limb or blindness), the death of a loved one, financial problems or poverty, guilt or fear of getting caught for something the person did, drug abuse, old age, or concerns with gender identity, among others. In 2006, the WHO conducted a study on suicide around the world. The results in Canada showed that an estimated 80–90% of suicide attempts fail (an estimation, due to the complications of counting attempted suicides), that 90% of attempted suicides investigated led to hospitalization, and that 12% of attempts occurred in hospitals.

Treatments

Ketamine has been tested for treatment-resistant bipolar depression, major depressive disorder, and people in a suicidal crisis in emergency rooms, and is being used this way off-label. The drug is given by a single intravenous infusion at doses less than those used in anesthesia, and preliminary data have indicated it produces a rapid (within 2 hours) and relatively sustained (about 1–2 weeks long) significant reduction in symptoms in some patients.
Initial studies with ketamine have sparked scientific and clinical interest due to its rapid onset, and because it appears to work by blocking NMDA receptors for glutamate, a different mechanism from most modern antidepressants, which operate on other targets. Some studies have shown that lithium medication can reduce suicidal ideation within 48 hours of administration.

Intervention

Intervention is important to stop someone in a suicidal crisis from harming or killing themselves. Every sign of suicide should be taken seriously. Steps to take in order to help defuse the situation or get the person in crisis to safety include:

Stay with the person so they are not alone.
Call 988 (if in the U.S.) or another suicide hotline, or take the person to the nearest hospital facility.
Reach out to a family member or friend about what is going on.

In many countries, police negotiators will be called to respond to situations where a person is at high risk of an immediate suicide crisis. However, offers of help are frequently rejected in these situations, because they have not been directly sought by the person in crisis, who wants to maintain a level of independence. Supporting those in crisis to make independent decisions, and adapting terminology, for example using the phrase ‘sort (x) out’, can aid in minimising resistance to the help being offered. If a friend or loved one is talking about suicide but is not yet in crisis, the following steps should be taken to help them get professional help and feel supported:

Call a suicide hotline number; the U.S. numbers are 988 or 800-273-8255.
Remove dangerous objects, such as guns and knives, from the home.
Offer reassurance and support.
Help the person to seek medical treatment.
Biology and health sciences
Mental disorders
Health
9650286
https://en.wikipedia.org/wiki/Australian%20Lowline
Australian Lowline
The Australian Lowline is a modern Australian breed of small, polled beef cattle. It was the result of a selective breeding experiment using black Aberdeen Angus cattle at the Agricultural Research Centre of the Department of Agriculture of New South Wales at Trangie. It is among the smallest breeds of cattle, but is not a dwarf breed. History In 1929 the Department of Agriculture of New South Wales started an Aberdeen Angus herd at the Agricultural Research Centre at Trangie with stock imported from Canada. Various additions to the herd were made, from Canada, from the United States, from the United Kingdom and from other herds in Australia, until the herd-book was closed in 1964. From about this time, various research projects were conducted at Trangie. In 1974 an investigation began into the correlation between growth rate and profitability, and into whether feed conversion efficiency was higher in large or in small animals fed on grass. In the study, three separate herds were established: one of animals with a high rate of growth in their first year, one of animals that had shown low growth, and one randomly selected as a control group. These were called the High Line, the Low Line and the Control Line respectively. The Low Line herd started with 85 cows and some young bulls, and was closed to additions of other stock from 1974; it eventually numbered more than 400. To exclude possible effects of climate from the study, some stock was reared at Glen Innes in northern New South Wales and at Hamilton, Victoria. The experiment ran for nineteen years, by the end of which the Low Line animals were on average some 30% smaller than the High Line group. When the experiment ended in the early 1990s, the Lowline stock was auctioned off. A breeders' association, the Australian Lowline Cattle Association, was formed in 1992, and the first herd-book was published in 1993; it listed 150 cows and 36 bulls. Australia is the only country which reports Lowline cattle to DAD-IS; the breeders' association has members in Canada, New Zealand, the United Kingdom and the United States. Characteristics The Australian Lowline is among the smallest of cattle breeds, but is not affected by dwarfism. Height is about 60% of that of the normal Aberdeen Angus breed, or about for bulls and for cows. Calves average about at birth, but may weigh as little as . The coat is usually solid black, but may also be solid red; some white colouring in the area of the scrotum or udder is tolerated. The cattle are naturally polled and are quiet-tempered. They adapt well to varying climatic conditions. Cows calve easily and provide plenty of milk to their young. Compared to larger cattle, the Lowline does less damage to pasture land, and does not need such high or strong fencing. Use The Australian Lowline is reared for beef. The meat is well marbled and tasty; carcass yield is high.
Biology and health sciences
Miniature cattle
Animals
7445870
https://en.wikipedia.org/wiki/Coua
Coua
Couas are large, mostly terrestrial birds of the cuckoo family, endemic to the island of Madagascar. Couas are reminiscent of African turacos when walking along tree branches, and they likewise feature brightly coloured bare skin around the eyes. Some resemble coucals in their habit of clambering through jungle while foraging, while the arboreal species move between tree canopies with gliding flight. Four species have been recorded in rainforests while the remaining six are found in the dry forests of western and southern Madagascar. They have large feet, with a reversible third toe like all cuckoos. Their long tibiae suggest a relationship with the Carpococcyx ground-cuckoos of Asia, a genus with similar nestlings. Consequently, they are sometimes united in the subfamily Couinae. Couas build their own nests and lay white eggs. Couas' calls are a short series of evenly-spaced notes, which are sometimes answered by other individuals. Taxonomy The genus Coua was erected by the Swiss naturalist Heinrich Rudolf Schinz in 1821 with Cuculus madagascariensis (a synonym of Cuculus gigas) as the type species. The name is from koa, the Malagasy word for the couas. Species There are ten extant species placed in the genus: Fossils and extinct species Ancient coua, Coua primaeva – prehistoric; Bertha's coua, Coua berthae – known only from Holocene fossil remains; Delalande's coua, or the snail-eating coua, Coua delalandei – extinct (late 19th century)
Biology and health sciences
Cuculiformes and relatives
Animals
11127278
https://en.wikipedia.org/wiki/Pair-instability%20supernova
Pair-instability supernova
A pair-instability supernova is a type of supernova predicted to occur when pair production, the production of free electrons and positrons in the collision between atomic nuclei and energetic gamma rays, temporarily reduces the internal radiation pressure supporting a supermassive star's core against gravitational collapse. This pressure drop leads to a partial collapse, which in turn causes greatly accelerated burning in a runaway thermonuclear explosion, resulting in the star being blown completely apart without leaving a stellar remnant behind. Pair-instability supernovae can only happen in stars with a mass range from around 130 to 250 solar masses and low to moderate metallicity (low abundance of elements other than hydrogen and helium – a situation common in Population III stars). Physics Photon emission Photons given off by a body in thermal equilibrium have a black-body spectrum with an energy density proportional to the fourth power of the temperature, as described by the Stefan–Boltzmann law. Wien's law states that the wavelength of maximum emission from a black body is inversely proportional to its temperature. Equivalently, the frequency, and the energy, of the peak emission is directly proportional to the temperature. Photon pressure in stars In very massive, hot stars with interior temperatures above about (), photons produced in the stellar core are primarily in the form of very high-energy gamma rays. The pressure from these gamma rays fleeing outward from the core helps to hold up the upper layers of the star against the inward pull of gravity. If the level of gamma rays (the energy density) is reduced, then the outer layers of the star will begin to collapse inwards. Gamma rays with sufficiently high energy can interact with nuclei, electrons, or one another. One of those interactions is to form pairs of particles, such as electron-positron pairs, and these pairs can also meet and annihilate each other to create gamma rays again, all in accordance with Albert Einstein's mass-energy equivalence equation E = mc². At the very high density of a large stellar core, pair production and annihilation occur rapidly. Gamma rays, electrons, and positrons are overall held in thermal equilibrium, ensuring the star's core remains stable. By random fluctuation, the sudden heating and compression of the core can generate gamma rays energetic enough to be converted into an avalanche of electron-positron pairs. This reduces the pressure. When the collapse stops, the positrons find electrons and the pressure from gamma rays is driven up again. The population of positrons provides a brief reservoir of new gamma rays as the expanding supernova's core pressure drops. Pair-instability As temperatures and gamma ray energies increase, more and more gamma ray energy is absorbed in creating electron–positron pairs. This reduction in gamma ray energy density reduces the radiation pressure that resists gravitational collapse and supports the outer layers of the star. The star contracts, compressing and heating the core, thereby increasing the rate of energy production. This increases the energy of the gamma rays that are produced, making them more likely to interact, and so increases the rate at which energy is absorbed in further pair production. 
As a result, the stellar core loses its support in a runaway process, in which gamma rays are created at an increasing rate; but more and more of the gamma rays are absorbed to produce electron–positron pairs, and the annihilation of the electron–positron pairs is insufficient to halt further contraction of the core. Finally, the thermal runaway ignites the explosive fusion (detonation) of oxygen and heavier elements. When the temperature reaches the level at which electrons and positrons carry the same energy fraction as gamma rays, pair production cannot increase any further; it is balanced by annihilation. Contraction no longer accelerates, but the core now produces much more energy than prior to collapse, and this results in a supernova: the outer layers of the star are blown away by the sudden large increase of power production in the core. Calculations suggest that so much of the outer layers is lost that the very hot core itself is no longer under sufficient pressure to keep it intact, and it is completely disrupted too. Stellar susceptibility For a star to undergo a pair-instability supernova, the increased creation of positron/electron pairs by gamma ray collisions must reduce outward pressure enough for inward gravitational pressure to overwhelm it. High rotational speed and/or metallicity can prevent this. Stars with these characteristics still contract as their outward pressure drops, but unlike their slower or less metal-rich cousins, these stars continue to exert enough outward pressure to prevent gravitational collapse. Stars formed by collision mergers, having a metallicity Z between 0.02 and 0.001, may end their lives as pair-instability supernovae if their mass is in the appropriate range. Very large high-metallicity stars are probably unstable due to the Eddington limit, and would tend to shed mass during the formation process. Stellar behavior Several sources describe the stellar behavior for large stars in pair-instability conditions. Below 100 solar masses Gamma rays produced by stars of less than 100 or so solar masses are not energetic enough to produce electron-positron pairs. Some of these stars will undergo supernovae of a different type at the end of their lives, but the causative mechanisms do not involve pair-instability. 100 to 130 solar masses These stars are large enough to produce gamma rays with enough energy to create electron-positron pairs, but the resulting net reduction in counter-gravitational pressure is insufficient to cause the core overpressure required for a supernova. Instead, the contraction caused by pair-creation provokes increased thermonuclear activity within the star that counters the inward pressure and returns the star to equilibrium. It is thought that stars of this size undergo a series of these pulses until they shed sufficient mass to drop below 100 solar masses, at which point they are no longer hot enough to support pair-creation. Pulsing of this nature may have been responsible for the variations in brightness experienced by Eta Carinae in 1843, though this explanation is not universally accepted. 130 to 250 solar masses For very high-mass stars, with mass at least 130 and up to perhaps roughly 250 solar masses, a true pair-instability supernova can occur. In these stars, the first time that conditions support the pair-production instability, the situation runs out of control. 
The collapse proceeds to efficiently compress the star's core; the overpressure is sufficient to allow runaway nuclear fusion to burn it in several seconds, creating a thermonuclear explosion. With more thermal energy released than the star's gravitational binding energy, it is completely disrupted; no black hole or other remnant is left behind. This is predicted to contribute to a "mass gap" in the mass distribution of stellar black holes. (This "upper mass gap" is to be distinguished from a suspected "lower mass gap" in the range of a few solar masses.) In addition to the immediate energy release, a large fraction of the star's core is transformed to nickel-56, a radioactive isotope which decays with a half-life of 6.1 days into cobalt-56. Cobalt-56 has a half-life of 77 days and then further decays to the stable isotope iron-56 (see Supernova nucleosynthesis; a short numerical sketch of this decay chain appears below). For the hypernova SN 2006gy, studies indicate that perhaps 40 solar masses of the original star were released as Ni-56, almost the entire mass of the star's core regions. Collision between the exploding star core and gas it ejected earlier, and radioactive decay, release most of the visible light. 250 solar masses or more A different reaction mechanism, photodisintegration, follows the initial pair-instability collapse in stars of at least 250 solar masses. This endothermic (energy-absorbing) reaction absorbs the excess energy from the earlier stages before the runaway fusion can cause a hypernova explosion; the star then collapses completely into a black hole. Appearance Luminosity Pair-instability supernovae are popularly thought to be highly luminous. This is only the case for the most massive progenitors, since the luminosity depends strongly on the ejected mass of radioactive 56Ni. They can have peak luminosities of over 10^37 W, brighter than type Ia supernovae, but at lower masses peak luminosities are less than 10^35 W, comparable to or less than typical type II supernovae. Spectrum The spectra of pair-instability supernovae depend on the nature of the progenitor star. Thus they can appear as type II or type Ib/c supernova spectra. Progenitors with a significant remaining hydrogen envelope will produce a type II supernova, those with no hydrogen but significant helium will produce a type Ib, and those with no hydrogen and virtually no helium will produce a type Ic. Light curves In contrast to the spectra, the light curves are quite different from the common types of supernova. The light curves are highly extended, with peak luminosity occurring months after onset. This is due to the extreme amounts of 56Ni expelled, and the optically dense ejecta, as the star is entirely disrupted. Remnant Pair-instability supernovae completely destroy the progenitor star and do not leave behind a neutron star or black hole. The entire mass of the star is ejected, so a nebular remnant is produced and many solar masses of heavy elements are ejected into interstellar space. Pair-instability supernovae candidates Some supernova candidates for classification as pair-instability supernovae include: SN 2006gy, SN 2007bi, SN 2213-1745, SN 1000+0216, SN 2010mb, OGLE14-073, SN 2016aps, SN 2016iet, and SN 2018ibb.
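As a rough numerical sketch of the 56Ni → 56Co → 56Fe decay chain described above, the following Python snippet evaluates the standard two-step exponential-decay (Bateman) solution. It uses only the half-lives quoted in this article and the roughly 40 solar masses of Ni-56 estimated for SN 2006gy; the function and variable names are purely illustrative and are not taken from any cited source.

import math

# Half-lives quoted above (in days)
T_HALF_NI56 = 6.1    # nickel-56 -> cobalt-56
T_HALF_CO56 = 77.0   # cobalt-56 -> iron-56 (stable)

LAMBDA_NI = math.log(2) / T_HALF_NI56
LAMBDA_CO = math.log(2) / T_HALF_CO56

def decay_chain(m_ni0, t_days):
    """Two-step decay chain starting from pure nickel-56.

    Returns the approximate masses of Ni-56, Co-56 and Fe-56 after t_days,
    in the same units as m_ni0 (the small mass differences between the
    isotopes are neglected)."""
    m_ni = m_ni0 * math.exp(-LAMBDA_NI * t_days)
    m_co = m_ni0 * (LAMBDA_NI / (LAMBDA_CO - LAMBDA_NI)) * (
        math.exp(-LAMBDA_NI * t_days) - math.exp(-LAMBDA_CO * t_days))
    m_fe = m_ni0 - m_ni - m_co
    return m_ni, m_co, m_fe

# Illustrative run with ~40 solar masses of Ni-56 (the SN 2006gy estimate above)
for t in (0, 30, 90, 180, 365):
    ni, co, fe = decay_chain(40.0, t)
    print(f"day {t:3d}: Ni-56 {ni:6.2f}, Co-56 {co:6.2f}, Fe-56 {fe:6.2f} solar masses")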
Physical sciences
Stellar astronomy
Astronomy
2283222
https://en.wikipedia.org/wiki/Endohedral%20fullerene
Endohedral fullerene
Endohedral fullerenes, also called endofullerenes, are fullerenes that have additional atoms, ions, or clusters enclosed within their inner spheres. The first lanthanum–C60 complex, called La@C60, was synthesized in 1985. The @ (at sign) in the name reflects the notion of a small molecule trapped inside a shell. Two types of endohedral complexes exist: endohedral metallofullerenes and non-metal doped fullerenes. Notation In traditional chemical formula notation, a buckminsterfullerene (C60) with an atom (M) was simply represented as MC60, regardless of whether M was inside or outside the fullerene. In order to allow for more detailed discussions with minimal loss of information, a more explicit notation was proposed in 1991, in which the atoms listed to the left of the @ sign are situated inside the network composed of the atoms listed to the right. The example above would then be denoted M@C60 if M were inside the carbon network. A more complex example is K2(K@C59B), which denotes "a 60-atom fullerene cage with one boron atom substituted for a carbon in the geodesic network, a single potassium trapped inside, and two potassium atoms adhering to the outside." The choice of the symbol has been explained by the authors as being concise, readily printed and transmitted electronically (the at sign is included in ASCII, which most modern character encoding schemes are based on), and the visual aspects suggesting the structure of an endohedral fullerene. Endohedral metallofullerenes Doping fullerenes with electropositive metals takes place in an arc reactor or via laser evaporation. The metals can be transition metals like scandium and yttrium, as well as lanthanides like lanthanum and cerium. Also possible are endohedral complexes with elements of the alkaline earth metals like barium and strontium, alkali metals like potassium, and tetravalent metals like uranium, zirconium and hafnium. The synthesis in the arc reactor is, however, unspecific. Besides unfilled fullerenes, endohedral metallofullerenes develop with different cage sizes, like La@C60 or La@C82, and as different isomer cages. Aside from the dominant presence of mono-metal cages, numerous di-metal endohedral complexes and tri-metal carbide fullerenes like Sc3C2@C80 were also isolated. In 1999 a discovery drew wide attention: with the synthesis of Sc3N@C80 by Harry Dorn and coworkers, the inclusion of a molecular fragment in a fullerene cage had succeeded for the first time. This compound can be prepared by arc-vaporization, at temperatures up to 1100 °C, of graphite rods packed with scandium(III) oxide, iron nitride and graphite powder in a K-H generator in a nitrogen atmosphere at 300 Torr. Endohedral metallofullerenes are characterised by the fact that electrons transfer from the metal atom to the fullerene cage and that the metal atom takes a position off-center in the cage. The size of the charge transfer is not always simple to determine. In most cases it is between 2 and 3 charge units; in the case of La2@C80, however, it can be as large as about 6 electrons, as in Sc3N@C80, which is better described as [Sc3N]6+@[C80]6−. These anionic fullerene cages are very stable molecules and do not have the reactivity associated with ordinary empty fullerenes. They are stable in air up to very high temperatures (600 to 850 °C). The lack of reactivity in Diels-Alder reactions is utilised in a method to purify [C80]6− compounds from a complex mixture of empty and partly filled fullerenes of different cage size. 
In this method, Merrifield resin is modified as a cyclopentadienyl resin and used as a solid phase against a mobile phase containing the complex mixture in a column chromatography operation. Only very stable fullerenes such as [Sc3N]6+@[C80]6− pass through the column unreacted. In Ce2@C80 the two metal atoms exhibit a non-bonded interaction. Since all the six-membered rings in C80-Ih are equivalent, the two encapsulated Ce atoms exhibit a three-dimensional random motion. This is evidenced by the presence of only two signals in the 13C-NMR spectrum. It is possible to force the metal atoms to a standstill at the equator, as shown by X-ray crystallography, when the fullerene is exohedrally functionalized by an electron-donating silyl group in a reaction of Ce2@C80 with 1,1,2,2-tetrakis(2,4,6-trimethylphenyl)-1,2-disilirane. Gd@C82(OH)22, an endohedral metallofullerenol, can competitively inhibit the WW domain in the oncogene YAP1 from activating. It was originally developed as an MRI contrast agent. Non-metal doped fullerenes Endohedral complexes He@C60 and Ne@C60 are prepared by pressurizing C60 to ca. 3 bar in a noble-gas atmosphere. Under these conditions, about one out of every 650,000 C60 cages was doped with a helium atom. The formation of endohedral complexes with helium, neon, argon, krypton and xenon, as well as numerous adducts of the He@C60 compound, was also demonstrated at pressures of 3 kbar, with incorporation of up to 0.1% of the noble gases. While noble gases are chemically very inert and commonly exist as individual atoms, this is not the case for nitrogen and phosphorus, and so the formation of the endohedral complexes N@C60, N@C70 and P@C60 is more surprising. The nitrogen atom is in its electronic ground state (4S3/2) and is highly reactive. Nevertheless, N@C60 is sufficiently stable that exohedral derivatization from the mono- to the hexa-adduct of the malonic acid ethyl ester is possible. In these compounds no charge transfer from the nitrogen atom in the center to the carbon atoms of the cage takes place. Therefore, 13C couplings, which are observed very easily with the endohedral metallofullerenes, could be observed in the case of N@C60 only in a high-resolution spectrum, as shoulders of the central line. The central atom in these endohedral complexes is located in the center of the cage. While other atomic traps require complex equipment, e.g. laser cooling or magnetic traps, endohedral fullerenes represent an atomic trap that is stable at room temperature and for an arbitrarily long time. Atomic or ion traps are of great interest since particles are present free from (significant) interaction with their environment, allowing unique quantum mechanical phenomena to be explored. For example, the compression of the atomic wave function as a consequence of the packing in the cage could be observed with ENDOR spectroscopy. The nitrogen atom can be used as a probe to detect the smallest changes in the electronic structure of its environment. Contrary to the endohedral metallofullerenes, these complexes cannot be produced in an arc. Atoms are implanted in the fullerene starting material using gas discharge (nitrogen and phosphorus complexes) or by direct ion implantation. Alternatively, endohedral hydrogen fullerenes can be produced by opening and closing a fullerene by organic chemistry methods. A recent example of endohedral fullerenes includes single molecules of water encapsulated in C60. Noble gas endofullerenes are predicted to exhibit unusual polarizability. 
Thus, the calculated values of the mean polarizability of Ng@C60 do not equal the sum of the polarizabilities of the fullerene cage and the trapped atom, i.e. an exaltation of polarizability occurs. The sign of the Δα polarizability exaltation depends on the number of atoms in a fullerene molecule: for small fullerenes (), it is positive; for the larger ones (), it is negative (depression of polarizability). The following formula, describing the dependence of Δα on n, has been proposed: Δα = α_Ng(2e^(−0.06(n − 20)) − 1). It describes the DFT-calculated mean polarizabilities of Ng@C60 endofullerenes with sufficient accuracy (a short numerical sketch of this relation appears below). The calculated data allow the C60 fullerene to be used as a Faraday cage, which isolates the encapsulated atom from an external electric field. The mentioned relations should be typical for the more complicated endohedral structures (e.g., C60@C240 and giant fullerene-containing "onions"). Molecular endofullerenes Closed fullerenes encapsulating small molecules have been synthesized. Representative are the syntheses of the dihydrogen endofullerene H2@C60, the water endofullerene H2O@C60, the hydrogen fluoride endofullerene HF@C60, and the methane endofullerene CH4@C60. The encapsulated molecules display unusual physical properties, which have been studied by a variety of physical methods. As shown theoretically, compression of molecular endofullerenes (e.g., H2@C60) may lead to dissociation of the encapsulated molecules and reaction of their fragments with the interior of the fullerene cage. Such reactions should result in endohedral fullerene adducts, which are currently unknown.
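A minimal numerical sketch of the proposed exaltation relation quoted above, in Python. The functional form is taken from the formula in the text; the noble-gas polarizability value used below and the function name are purely illustrative placeholders, not values from this article.

import math

def polarizability_exaltation(alpha_ng, n):
    """Proposed exaltation Δα for a noble-gas atom in an n-atom fullerene cage.

    alpha_ng : mean polarizability of the free noble-gas atom (any units)
    n        : number of carbon atoms in the cage
    Returns Δα in the same units as alpha_ng; positive for small cages and
    negative (a depression of polarizability) for large ones, as described
    in the text."""
    return alpha_ng * (2 * math.exp(-0.06 * (n - 20)) - 1)

# Illustrative run with a nominal noble-gas polarizability of 1.0 (arbitrary units)
for n in (20, 60, 240):
    print(n, round(polarizability_exaltation(1.0, n), 3))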
Physical sciences
Supramolecular chemistry
Chemistry
12107594
https://en.wikipedia.org/wiki/Archispirostreptus%20gigas
Archispirostreptus gigas
Archispirostreptus gigas, known as the giant African millipede, shongololo or bongololo, is the largest extant species of millipede, growing up to in length, in circumference. It has approximately 256 legs, although the number of legs changes with each molt, so it can vary between individuals. It is a widespread species in lowland parts of East Africa, from Mozambique to Kenya, but rarely reaches altitudes above . It lives mostly in forests, but can also be found in areas of coastal habitat that contain at least a few trees. It is also native to Southern Arabia, especially Dhofar. In general, giant millipedes have a life expectancy of about 7–10 years. Giant millipedes have two main modes of defence if they feel threatened: curling into a tight spiral, exposing only the hard exoskeleton, and secretion of an irritating liquid from pores on their body. This liquid can be harmful if introduced into the eyes or mouth. Because of this defence, A. gigas is one of the few invertebrates that driver ants are incapable of taking as prey. The chemicals identified in this millipede's defensive secretion are toluquinone and 2-methoxy-3-methylbenzoquinone. Small mites are often observed crawling on their exoskeleton and among their legs. The millipedes have a symbiotic relationship with these mites, in which the mites help clean the millipede's exoskeleton in exchange for food and the protection of their host. A docile species, A. gigas is sometimes seen in the pet trade. However, the U.S. federal government requires anyone bringing giant millipedes into the country to have permits for them.
Biology and health sciences
Myriapoda
Animals
12116844
https://en.wikipedia.org/wiki/Poultry%20farming
Poultry farming
Poultry farming is the form of animal husbandry which raises domesticated birds such as chickens, ducks, turkeys and geese to produce meat or eggs for food. Poultry – mostly chickens – are farmed in great numbers. More than 60 billion chickens are killed for consumption annually. Chickens raised for eggs are known as layers, while chickens raised for meat are called broilers. In the United States, the national organization overseeing poultry production is the Food and Drug Administration (FDA). In the UK, the national organisation is the Department for Environment, Food and Rural Affairs (DEFRA). Intensive and alternative According to the World Watch Institute, 74 percent of the world's poultry meat, and 68 percent of eggs are produced intensively. One alternative to intensive poultry farming is free-range farming using lower stocking densities. Poultry producers routinely use nationally approved medications, such as antibiotics, in feed or drinking water, to treat disease or to prevent disease outbreaks. Some FDA-approved medications are also approved for improved feed utilization. Chicken coop A chicken coop or hen house is a structure where chickens or other fowl are kept safe and secure. There may be nest boxes and perches in the house. There is a long-standing controversy over the basic need for a chicken coop. One philosophy, known as the "fresh air school", holds that chickens are mostly hardy but can be brought low by confinement, poor air quality and darkness, hence the need for a highly ventilated or open-sided coop with conditions more like the outdoors, even in winter. However, others who keep chickens believe they are prone to illness in outdoor weather and need a controlled-environment coop. This has led to two housing designs for chickens: fresh-air houses with wide openings and nothing more than wire mesh between chickens and the weather (even in Northern winters), or closed houses with doors, windows and hatches which can shut off most ventilation. Egg-laying chickens Commercial hens usually begin laying eggs at 16–21 weeks of age, although production gradually declines soon after from approximately 25 weeks of age. This means that in many countries, by approximately 72 weeks of age, flocks are considered economically unviable and are slaughtered after approximately 12 months of egg production, although chickens will naturally live for 6 or more years. In some countries, hens are force moulted to re-invigorate egg-laying. Environmental conditions are often automatically controlled in egg-laying systems. For example, the duration of the light phase is initially increased to prompt the beginning of egg-laying at 16–20 weeks of age and then mimics summer day length which stimulates the hens to continue laying eggs all year round; normally, egg production occurs only in the warmer months. Some commercial breeds of hen can produce over 300 eggs a year. Free-range Free-range poultry farming allows chickens to roam freely for a period of the day, although they are usually confined in sheds at night to protect them from predators or kept indoors if the weather is particularly bad. In the UK, the Department for Environment, Food and Rural Affairs (DEFRA) states that a free-range chicken must have day-time access to open-air runs during at least half of its life. Unlike in the United States, this definition also applies to free-range egg-laying hens, meaning they can still be confined in high stocking densities with limited outdoors access. 
The European Union regulates marketing standards for egg farming which specifies a minimum condition for free-range eggs that "hens have continuous daytime access to open air runs, except in the case of temporary restrictions imposed by veterinary authorities". The RSPCA "Welfare standards for laying hens and pullets" indicates that the stocking rate must not exceed 1,000 birds per hectare (10 m2 per hen) of range available and a minimum area of overhead shade/shelter of 8 m2 per 1,000 hens must be provided. Free-range farming of egg-laying hens is increasing its share of the market. DEFRA figures indicate that 45% of eggs produced in the UK throughout 2010 were free range, 5% were produced in barn systems and 50% from cages. This compares with 41% being free range in 2009. Suitable land requires adequate drainage to minimise worms and coccidial oocysts, suitable protection from prevailing winds, good ventilation, access and protection from predators. Excess heat, cold or damp can have a harmful effect on the animals and their productivity. Free range farmers have less control than farmers using cages in what food their chickens eat, which can lead to unreliable productivity, though supplementary feeding reduces this uncertainty. In some farms, the manure from free range poultry can be used to benefit crops. The benefits of free-range poultry farming for laying hens include opportunities for natural behaviours such as pecking, scratching, foraging and exercise outdoors. Both intensive free-range poultry and "cage-free" farming with hens still being confined in close proximity due to high stocking densities have animal welfare concerns. Cannibalism, feather pecking and vent pecking can be common, prompting some farmers to use beak trimming as a preventative measure, although reducing stocking rates would eliminate these problems. Diseases can be common and the animals are vulnerable to predators. Barn systems have been found to have the worst bird welfare. In South-East Asia, a lack of disease control in free range farming has been associated with outbreaks of avian influenza. Free-run Instead of keeping them in cages, free-run laying hens roam freely within an enclosed barn. This type of housing also provides enrichment for the hens, including nesting boxes and perches that are often located along the floor of the barn. Many believe that this type of housing is better for the bird than any caging system, but it has its disadvantages, too. Due to the increase in activity of the birds, dust levels tend to elevate and the air quality decreases. When air quality drops, so does production as this compromises the health and welfare of both birds and their caretakers. Organic In organic systems in the US, organic management starts with the selection of the livestock and should begin "no later than the second day of life". Organic poultry production requires organic management in nutrition, preventative health care, living conditions, handling/processing, and record keeping. The Soil Association standards used to certify organic flocks in the UK indicate a maximum outdoors stocking density of 1,000 birds per hectare and a maximum of 2,000 hens in each poultry house. In the UK, organic laying hens are not routinely beak-trimmed. Yarding While often confused with free range farming, yarding is actually a separate method by which a hutch and fenced-off area outside are combined when farming poultry. 
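As a small sketch of the stocking-rate arithmetic quoted above, the Python snippet below converts an outdoor stocking rate given in birds per hectare into square metres per bird (a plain unit conversion; the 1,000-birds-per-hectare figure is the RSPCA standard cited in the text, and the function name is illustrative).

SQ_M_PER_HECTARE = 10_000  # one hectare is 10,000 square metres

def area_per_bird(birds_per_hectare):
    """Convert an outdoor stocking rate (birds per hectare) into square metres per bird."""
    return SQ_M_PER_HECTARE / birds_per_hectare

# RSPCA free-range limit quoted above: 1,000 birds per hectare
print(area_per_bird(1000))  # 10.0 square metres per hen, matching the figure in the text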
The distinction is that free-range poultry are either totally unfenced, or the fence is so distant that it has little influence on their freedom of movement. Yarding is a common technique used by small farms in the Northeastern U.S. The birds are released daily from hutches or coops. The hens usually lay eggs either on the floor of the coop or in baskets if provided by the farmer. This husbandry technique can be complicated if used with roosters, mostly because of their aggressive behavior. Battery cage The majority of hens in many countries are housed in battery cages, although the European Union Council Directive 1999/74/EC has banned the conventional battery cage in EU states from January 2012. As of April 1, 2017, no new battery cages are able to be installed in Canada. Farmers must move towards enriched housing or use a cage-free system. In 2016, the Egg Farmers of Canada announced that the country's egg farmers will be transitioning away from conventional hen housing systems (battery cages) and have no conventional caging left by 2036. Batteries are small cages, usually made of metal in modern systems, housing 3 to 8 hens. The walls are made of either solid metal or mesh, and the floor is sloped wire mesh to allow the feces to drop through and eggs to roll onto an egg-collecting conveyor belt. Water is usually provided by overhead nipple systems, and food in a trough along the front of the cage replenished at regular intervals by a mechanical system. Battery cages are arranged in long rows as multiple tiers, often with cages back-to-back (hence the term). Within a single barn, there may be several floors containing battery cages meaning that a single shed may contain many tens of thousands of hens. Light intensity is often kept low (e.g. 10 lux) to reduce feather pecking and vent pecking. Benefits of battery cages include easier care for the birds, floor-laid eggs (which are expensive to collect) are eliminated, eggs are cleaner, capture at the end of lay is expedited, generally less feed is required to produce eggs, broodiness is eliminated, more hens may be housed in a given house floor space, internal parasites are more easily treated, and labor requirements are generally much reduced. In farms using cages for egg production, there are more birds per unit area; this allows for greater productivity and lower food costs. Floor space ranges upwards from 300 cm2 per hen. EU standards in 2003 called for at least 550 cm2 per hen. In the US, the current recommendation by the United Egg Producers is 67 to 86 in2 (430 to 560 cm2) per bird. The space available to battery hens has often been described as less than the size of a piece of A4 paper (623 cm2). Animal welfare scientists have been critical of battery cages because they do not provide hens with sufficient space to stand, walk, flap their wings, perch, or make a nest, and it is widely considered that hens suffer through boredom and frustration through being unable to perform these behaviours. This can lead to a wide range of abnormal behaviours, some of which are injurious to the hens or their cagemates. Furnished cage In 1999, the European Union Council Directive 1999/74/EC banned conventional battery cages for laying hens throughout the European Union from January 1, 2012; they were banned previously in other countries including Switzerland. In response to these bans, development of prototype commercial furnished cage systems began in the 1980s. 
Furnished cages, sometimes called 'enriched' or 'modified' cages, are cages for egg-laying hens which have been designed to allow the hens to perform their "natural behaviors" whilst retaining their economic and husbandry advantages, and also to provide some of the welfare advantages of non-cage systems. Many design features of furnished cages have been incorporated because research in animal welfare science has shown them to be of benefit to the hens. In the UK, the DEFRA "Code for the Welfare of Laying Hens" states that furnished cages should provide at least 750 cm2 of cage area per hen, 600 cm2 of which should be usable; the height of the cage other than that above the usable area should be at least 20 cm at every point, and no cage should have a total area of less than 2000 cm2. In addition, furnished cages should provide a nest, litter such that pecking and scratching are possible, appropriate perches allowing at least 15 cm per hen, a claw-shortening device, and a feed trough which may be used without restriction, providing 12 cm per hen. Furnished (enriched) cages give the hens more space than conventional battery cages, so that each bird may spread its wings without touching another if it wishes. Enrichments such as nest boxes, perches, and dust baths are also provided so that the birds may carry out natural behaviors such as nesting, roosting, and scratching as though they were outdoors. Enrichment of laying hen cages ultimately results in better bone quality, a result of the increased activity of the hens in the additional space provided by the furnished housing system. Although the enriched housing system has advantages such as reduced aggression and cleaner eggs, modern egg-laying breeds often suffer from osteoporosis, which weakens the chicken's skeletal system. During egg production, large amounts of calcium are transferred from the bones to create egg-shell. Although dietary calcium levels are adequate, absorption of dietary calcium is not always sufficient, given the intensity of production, to fully replenish bone calcium. This can lead to increases in bone breakages, particularly when the hens are being removed from cages at the end of laying. Osteoporosis may be prevented by free-range and cage-free housing systems, as they have been shown to have a beneficial impact on the skeletal system of the hens compared with those housed in caged systems. Countries such as Austria, Belgium and Germany are planning to ban furnished cages by 2025, in addition to the already banned conventional cages. Meat-producing chickens – husbandry systems Indoor broilers Meat chickens, commonly called broilers, are floor-raised on litter such as wood shavings, peanut shells, and rice hulls, indoors in climate-controlled housing. Under modern farming methods, meat chickens reared indoors reach slaughter weight at 5 to 9 weeks of age, as they have been selectively bred to do so. In the first week of a broiler's life, it can grow to up to 300 percent of its initial body size. A nine-week-old broiler averages over 9 pounds (4 kg) in body weight. At nine weeks, a hen will average around 7 pounds (3.2 kg) and a rooster around 12 pounds (5.5 kg), for a nine-pound (4 kg) average. Broilers are not raised in cages. They are raised in large, open structures known as grow-out houses. A farmer receives the birds from the hatchery at one day old. 
A grow-out lasts 5 to 9 weeks, depending on how large the processing plant wants the chickens to be. These houses are equipped with mechanical systems to deliver feed and water to the birds. They have ventilation systems and heaters that function as needed. The floor of the house is covered with bedding material consisting of wood chips, rice hulls, or peanut shells. In some cases the birds can be grown over dry litter or compost. Because dry bedding helps maintain flock health, most grow-out houses have enclosed watering systems ("nipple drinkers") which reduce spillage. Keeping birds inside a house protects them from predators such as hawks and foxes. Some houses are equipped with curtain walls, which can be rolled up in good weather to admit natural light and fresh air. Most grow-out houses built in recent years feature "tunnel ventilation," in which a bank of fans draws fresh air through the house. Traditionally, a flock of broilers consists of about 20,000 birds in a grow-out house that measures 400 to 500 feet long and 40 to 50 feet wide, thus providing about eight-tenths of a square foot per bird. The Council for Agricultural Science and Technology (CAST) states that the minimum space is one-half square foot per bird. More modern houses are often larger and contain more birds, but the floor space allotment still meets the needs of the birds. The larger the birds are grown, the fewer chickens are put in each house, to give the bigger birds more space per square foot. Because broilers are relatively young and have not reached sexual maturity, they exhibit very little aggressive conduct. Chicken feed consists primarily of corn and soybean meal with the addition of essential vitamins and minerals. No hormones or steroids are allowed in raising chickens. Issues with indoor husbandry In intensive broiler sheds, the air can become highly polluted with ammonia from the droppings. In this case, a farmer must run more fans to bring in more clean fresh air. If not, this can damage the chickens' eyes and respiratory systems and can cause painful burns on their legs (called hock burns) as well as blisters on their feet. Broilers bred for fast growth have a high rate of developing leg deformities, because their large breast muscles cause distortions of their developing legs and pelvis, often leaving them unable to support their body weight. In cases where the chickens become crippled and can no longer walk, farmers have to go in and pull them out. Because of their difficulty moving, the chickens cannot adjust their environment to avoid heat, cold, or dirt as they would in natural conditions. The added weight and overcrowding also put a strain on their hearts and lungs, possibly leading to ascites. In the UK, up to 19 million broilers die in their sheds from heart failure each year. In a heat wave, if a power failure shuts down the ventilation, 20,000 chickens could die in a short period of time. In a good grow-out, a farmer should sell between 92% and 96% of the flock, with a feed conversion ratio of 1.80 to 2.0. After marketing the birds, the farmer must clean out and prepare for another flock. A farmer should average 4 to 5 grow-outs a year. Indoor with higher welfare In a "higher welfare" system, chickens are kept indoors but with more space (around 14 to 16 birds per square metre). They have a richer environment, for example with natural light or straw bales that encourage foraging and perching. The chickens grow more slowly and live for up to two weeks longer than intensively farmed birds. 
The benefits of higher welfare indoor systems are the reduced growth rate, less crowding and more opportunities for natural behaviour. One example of a higher welfare standard for indoor production is the Better Chicken Commitment. Free-range broilers Free-range broilers are reared under similar conditions to free-range egg-laying hens. The breeds grow more slowly than those used for indoor rearing and usually reach slaughter weight at approximately 8 weeks of age. In the EU, each chicken must have one square metre of outdoor space. The benefits of free-range poultry farming include opportunities for natural behaviours such as pecking, scratching, foraging and exercise outdoors. Because they grow more slowly and have opportunities for exercise, free-range broilers often have better leg and heart health. Organic broilers Organic broiler chickens are reared under similar conditions to free-range broilers but with restrictions on the routine use of in-feed or in-water medications, other food additives and synthetic amino acids. The breeds used are slower-growing, more traditional breeds and typically reach slaughter weight at around 12 weeks of age. They have a larger space allowance outside (at least 2 square metres and sometimes up to 10 square metres per bird). The Soil Association standards indicate a maximum outdoors stocking density of 2,500 birds per hectare and a maximum of 1,000 broilers per poultry house. Dual-purpose chicken A dual-purpose chicken is a type of chicken that may be used in the production of both eggs and meat. In the past, many chicken breeds were selected for both functions. However, since the advent of laying and meat hybrids, industrial chicken breeding has made a sharp distinction between chickens with either function, so that certain characteristics have been promoted to an extreme degree. Partly due to the discussion about male offspring of laying hens, which are not economically viable and are usually gassed or ground alive as day-old chicks, a discussion is currently underway as to whether dual-purpose chickens have a future role on a large or smaller scale. Historically, the distinction between egg and meat production did not exist. It only appeared with the development of industrial farming and breeders' specialization (including the supply of day-old chicks). Modern laying breeds have become unable to provide enough meat to satisfy consumers accustomed to breeds selected for fattening, which in turn are very poor layers and brooders. In addition, the strategies for killing livestock affected by avian influenza or other highly pathogenic diseases, or posing significant epidemiological or eco-epidemiological risks, have led, in a large number of family backyards, to the replacement of old mixed varieties with modern laying or meat-type poultry. Faced with the criticism leveled at industrial farming, in particular the killing of millions of chicks by gassing with CO2 or, in cases denounced by the media, by grinding live chicks or asphyxiating them in plastic bags (when the animals are not simply buried alive or thrown in a dumpster), the concept of dual use is one possible answer, and as such is supported by the Demeter network in Germany. In Switzerland, where two million chicks of hybrid laying breeds are put to death every year (according to Oswald Burch, director of GalloSuisse, on SRF1 radio), these animals, killed almost at birth, are sold as food for animals in zoos or pet stores, or are transformed into biogas. 
Another solution would be to determine the sex of the embryo or fetus in the egg before the incubation phase (when the egg is still fit for consumption) and to remove it from the breeding circuit and direct it to the egg sales circuit; fertilized eggs that have not yet been incubated can be consumed during the first days after laying, notes Ruedi Zweifel, director of the Aviforum foundation, the competence center for Swiss poultry farming. The universities of Leipzig and Dresden are testing ways to achieve this, but have not yet found any that are applicable in real time on an industrial scale. The German company Lohmann is one of the first to have integrated this concept on a large scale, as part of its collaboration with the agricultural association Demeter. It produced its own poultry by crossing lines with the sought-after characteristics, which offers a way to address the problem of killing male chicks. The dual-purpose chicken selected by the Lohmann group, the "Lohmann Dual", is raised in Switzerland by a few breeders, and the Coop network decided to launch a trial with 5,000 birds, knowing that instead of producing up to 300 eggs per year like very good laying hens, it produces only around 250 eggs per year, which are also smaller, according to the journal of the Swiss Poultry Organization. If consumers accept higher prices in exchange for better treatment of the animals, a market sector could be established. Concerning meat, Coop spokesperson Ramon Gander estimated that the demand was there; according to him, "the meat has also convinced tasters". Issues Humane treatment Animal welfare groups have frequently criticized the poultry industry for engaging in practices which they assert to be inhumane. Many animal rights advocates object to killing chickens for food, the "factory farm conditions" under which they are raised, methods of transport, and slaughter. Animal Outlook (formerly Compassion Over Killing) and other groups have repeatedly conducted undercover investigations at chicken farms and slaughterhouses which they allege confirm their claims of cruelty. A common practice among hatcheries for egg-laying hens is the culling of newly hatched male chicks, since they do not lay eggs and do not grow fast enough to be profitable for meat. There are plans to destroy the eggs more ethically before the chicks hatch, using "in-ovo" sex determination. Chickens are often stunned before slaughter using carbon dioxide or electric shock in a water bath. More humane methods that could be used are low atmospheric pressure stunning and inert gas asphyxiation. According to animal charities, carrying chickens by their legs is inhumane. The European Commission advocates for this practice and the UK government is intending to legalize it. Beak trimming Laying hens are routinely beak-trimmed at 1 day of age to reduce the damaging effects of aggression, feather pecking and cannibalism. Scientific studies have shown that beak trimming is likely to cause both acute and chronic pain. Severe beak trimming, or beak trimming of birds at an older age, may cause chronic pain. Following beak trimming of older or adult hens, the nociceptors in the beak stump show abnormal patterns of neural discharge, indicating acute pain. Neuromas, tangled masses of swollen regenerating axon sprouts, are found in the healed stumps of birds beak-trimmed at 5 weeks of age or older and in severely beak-trimmed birds. 
Neuromas have been associated with phantom pain in human amputees and have therefore been linked to chronic pain in beak-trimmed birds. If beak trimming is severe because of improper procedure, or is done in older birds, the neuromas will persist, which suggests that beak-trimmed older birds experience chronic pain, although this has been debated. Beak-trimmed chicks initially peck less than non-trimmed chickens, which animal behaviorist Temple Grandin attributes to guarding against pain. The animal rights activist Peter Singer objects to the procedure because beaks are sensitive, and the usual practice of trimming them without anaesthesia is considered inhumane by some. Some within the chicken industry claim that beak-trimming is not painful, whereas others argue that the procedure causes chronic pain and discomfort and decreases the ability to eat or drink. Antibiotics Antibiotics have been used in poultry farming in mass quantities since 1951, when the Food and Drug Administration (FDA) approved their use. Scientists had found that chickens fed antibiotic residues grew 50 percent faster than controls. The chickens laid more eggs and experienced lower mortality and less illness. Upon this discovery, farmers transitioned from expensive animal proteins to comparatively inexpensive antibiotics and B12. Chickens were now reaching their market weight at a much faster rate and at a lower cost. With a growing population and greater demand on farmers, antibiotics appeared to be an ideal and cost-effective way to increase the output of poultry. Since this discovery, antibiotics have been routinely used in poultry production, but more recently they have become a topic of debate owing to fears of bacterial antibiotic resistance. Arsenic Poultry feed can include roxarsone or nitarsone, arsenical antimicrobial drugs that also promote growth. Roxarsone was used as a broiler starter by about 70% of broiler growers between 1995 and 2000. The drugs have generated controversy because they contain arsenic, which is highly toxic to humans. This arsenic could be transmitted through run-off from the poultry yards. A 2004 study by the U.S. magazine Consumer Reports reported "no detectable arsenic in our samples of muscle" but found "A few of our chicken-liver samples has an amount that according to EPA standards could cause neurological problems in a child who ate 2 ounces of cooked liver per week or in an adult who ate 5.5 ounces per week." The U.S. Food and Drug Administration (FDA), however, is the organization responsible for the regulation of foods in America, and all samples tested were "far less than the ... amount allowed in a food product." Growth hormones Hormone use in poultry production is illegal in the United States. Similarly, no chicken meat for sale in Australia comes from birds fed hormones. Several scientific studies have documented the fact that chickens grow rapidly because they are bred to do so, not because of growth hormones. E. coli According to Consumer Reports, "1.1 million or more Americans [are] sickened each year by undercooked, tainted chicken." A USDA study discovered E. coli (Biotype I) in 99% of supermarket chicken, the result of chicken butchering not being a sterile process. However, the same study also shows that the strain of E. coli found was always a non-lethal form, and no chicken had any of the pathogenic O157:H7 serotype. Many of these chickens, furthermore, had relatively low levels of contamination. 
Feces tend to leak from the carcass until the evisceration stage, and the evisceration stage itself gives an opportunity for the interior of the carcass to receive intestinal bacteria. (The skin of the carcass does as well, but the skin presents a better barrier to bacteria and reaches higher temperatures during cooking.) Before 1950, this was contained largely by not eviscerating the carcass at the time of butchering, deferring this until the time of retail sale or preparation in the home. This gave the intestinal bacteria less opportunity to colonize the edible meat. The development of the "ready-to-cook broiler" in the 1950s added convenience while introducing risk, under the assumption that end-to-end refrigeration and thorough cooking would provide adequate protection. E. coli can be killed by proper cooking times, but there is still some risk associated with it, and its near-ubiquity in commercially farmed chicken is troubling to some. Irradiation has been proposed as a means of sterilizing chicken meat after butchering. The aerobic bacteria found in poultry housing can include not only E. coli, but Staphylococcus, Pseudomonas, Micrococcus and others as well. These contaminants can contribute to dust that often causes issues with the respiratory systems of both the poultry and the humans working in the environment. If bacterial counts in the poultry's drinking water become high, the result can be bacterial diarrhoea, which can lead to blood poisoning should the bacteria spread from the damaged intestines. Salmonella, too, can be a strain on poultry production; how it causes disease has been investigated in some detail. Avian influenza There is also a risk that crowded conditions in chicken farms will allow avian influenza (bird flu) to spread quickly. A United Nations press release states: "Governments, local authorities and international agencies need to take a greatly increased role in combating the role of factory-farming, commerce in live poultry, and wildlife markets which provide ideal conditions for the virus to spread and mutate into a more dangerous form". Dermatitis Several dermatitis conditions are significant in chickens, especially gangrenous dermatitis (GD). GD is caused by Clostridium septicum, Clostridium perfringens type A, Clostridium sordellii, Clostridium novyi, Staphylococcus aureus, Staphylococcus xylosus, Staphylococcus epidermidis, Escherichia coli, Pasteurella multocida, Pseudomonas aeruginosa, Enterococcus faecalis, Proteus spp., Bacillus spp., Erysipelothrix rhusiopathiae, and Gallibacterium anatis var. haemolytica. Beemer et al. (1970) found that Rhodotorula mucilaginosa causes a dermatitis in chickens that is easily confused with GD. Efficiency Farming of chickens on an industrial scale relies largely on high-protein feeds derived from soybeans; in the European Union the soybean dominates the protein supply for animal feed, and the poultry industry is the largest consumer of such feed. Two kilograms of grain must be fed to poultry to produce 1 kg of weight gain, much less than that required for pork or beef. However, for every gram of protein consumed, chickens yield only 0.33 g of edible protein (see the short calculation below). Economic factors Changes in commodity prices for poultry feed have a direct effect on the cost of doing business in the poultry industry. For instance, a significant rise in the price of corn in the United States can put significant economic pressure on large industrial chicken farming operations. 
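A short Python sketch of the feed-efficiency figures quoted above (the 2 kg-per-kg feed conversion and 0.33 g-per-g edible-protein values are those given in the text; the function and variable names are purely illustrative):

def feed_required(weight_gain_kg, feed_conversion_ratio=2.0):
    """Grain needed (kg) for a given live-weight gain at a given feed conversion ratio."""
    return weight_gain_kg * feed_conversion_ratio

def edible_protein_yield(protein_fed_g, yield_fraction=0.33):
    """Edible protein (g) obtained from a quantity of protein fed, per the figure above."""
    return protein_fed_g * yield_fraction

# About 2 kg of grain per 1 kg of live-weight gain, as stated in the text
print(feed_required(1.0))            # 2.0 (kg of grain)
# Each 100 g of protein fed yields roughly 33 g of edible protein
print(edible_protein_yield(100.0))   # 33.0 (g)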
Waste management, manure Poultry production requires regular control of excrement, and in many parts of the world, production operations, especially larger operations, need to comply with environmental regulations and protections. Unlike mammalian excrement, in poultry (and all birds) urine and feces are excreted as a combined manure, and the result is both wetter and higher in concentrated nitrogen. Waste can be managed wet, dry or by some combination. Wet management is particularly used in battery egg-laying operations, where the waste is sluiced out with constantly or occasionally flowing water. Water is also used to clean the floors around nesting sites that are separate from open runs. Dry management particularly refers to dry litter, such as sawdust, that is removed as needed. Dry management can also include open pasture, where manure is absorbed by the existing soil and vegetation but needs to be monitored diligently so as not to overwhelm the ground's capacity and lead to runoff and other pollution problems. Both liquid sluicings and dry litter are used as organic fertilizers, but the wet bulk of liquid manure is harder to ship and is often limited to more local use, while dry litter is easier to distribute in bulk and in commercial packaging. Mortality Mortality is a daily consideration for poultry farmers, and the carcasses must be disposed of in order to limit the spread of disease and the prevalence of pests. There are a variety of methods of disposal, the most common being burial, composting, incineration, and rendering. Environmental concerns surrounding each of these methods deal with nutrient pollution into the surrounding soil and groundwater – because of these concerns, in many countries and US states the practice of burial in pits is heavily regulated or disallowed. Farmers may construct their own facilities for composting, or purchase equipment to begin incineration or storage for rendering. Composting offers a safe and practical use for the organic material, while proper management of a composting site limits odor and the presence of pests. Incineration offers a swifter disposal method, but uses fuel energy and thus brings varying costs. Rendering has the advantage of being handled off site, and the use of freezers can eliminate the spread of pathogens in storage awaiting pickup. Government organizations, like the USDA, may offer financial assistance to farmers looking to begin utilizing environmentally friendly mortality solutions. Predation In North American production, the most common predators are: the coyote; foxes, especially the red fox; the bobcat; mustelids, especially the least weasel and the long-tailed weasel; birds of prey, including hawks (especially the red-tailed, red-shouldered, and Cooper's hawks) and owls (especially the great horned owl); the raccoon; the Virginia opossum; skunks; rodents; snakes, especially the rat snake; and pet dogs and cats. Worker health and safety Poultry workers experience substantially higher rates of illness and injury than manufacturing workers do on average. For 2013, there were an estimated 1.59 cases of occupation-related illness per 100 full-time U.S. meat and poultry workers, compared to 0.36 for manufacturing workers overall. Injuries are associated with repetitive movements, awkward postures, and cold temperatures. High rates of carpal tunnel syndrome and other muscular and skeletal disorders are reported. Disinfectant chemicals and infectious bacteria are causes of respiratory illnesses, allergic reactions, diarrhea, and skin infections. 
Poultry housing has been shown to have adverse effects on the respiratory health of workers, ranging from a cough to chronic bronchitis. Workers are exposed to concentrated airborne particulate matter (PM) and endotoxins (a harmful waste product of bacteria). In a conventional hen house a conveyor belt beneath the cages removes the manure. In a cage-free aviary system the manure coats the ground, resulting in the build-up of dust and bacteria over time. Eggs are often laid on the ground or under cages in aviary housing, requiring workers to get close to the floor during egg collection, which forces dust and bacteria into the air that they then inhale. Oxfam America reports that huge industrialized poultry operations are under such pressure to maximize profits that workers are denied access to toilets. World chicken population The Food and Agriculture Organization of the United Nations estimated that in 2002 there were nearly sixteen billion chickens in the world. In 2008, China led the world with approximately 4.6 billion chickens, followed by the US with over 2 billion, and then Indonesia, Brazil and Mexico. In 2019, China had over 5.14 billion chickens, more than any other country in the world, followed by Indonesia with approximately 3.7 billion. The countries with the next-highest numbers were the US, Brazil, Pakistan, Iran, India, Mexico, Russia and Myanmar, respectively. In 1950, the average American consumed 20 pounds (9 kg) of chicken per year; by 2017 this had risen to 92.2 pounds (41.9 kg). Additionally, in 1980 most chickens were sold whole, but by 2000 almost 90 percent of chickens were sold after being butchered into parts.
Technology
Animal husbandry
null
13690575
https://en.wikipedia.org/wiki/Solar%20power
Solar power
Solar power, also known as solar electricity, is the conversion of energy from sunlight into electricity, either directly using photovoltaics (PV) or indirectly using concentrated solar power. Solar panels use the photovoltaic effect to convert light into an electric current. Concentrated solar power systems use lenses or mirrors and solar tracking systems to focus a large area of sunlight to a hot spot, often to drive a steam turbine. Photovoltaics (PV) were initially used solely as a source of electricity for small and medium-sized applications, from the calculator powered by a single solar cell to remote homes powered by an off-grid rooftop PV system. Commercial concentrated solar power plants were first developed in the 1980s. Since then, as the cost of solar panels has fallen, grid-connected solar PV systems' capacity and production have doubled about every three years. Three-quarters of new generation capacity is solar, with both millions of rooftop installations and gigawatt-scale photovoltaic power stations continuing to be built. In 2023, solar power generated 5.5% (1,631 TWh) of global electricity and over 1% of primary energy, adding twice as much new electricity as coal. Along with onshore wind power, utility-scale solar is the source with the cheapest levelised cost of electricity for new installations in most countries. As of 2023, 33 countries generated more than a tenth of their electricity from solar, with China making up more than half of solar growth. Almost half the solar power installed in 2022 was mounted on rooftops. Much more low-carbon power is needed for electrification and to limit climate change. The International Energy Agency said in 2022 that more effort was needed for grid integration and the mitigation of policy, regulation and financing challenges. Nevertheless, solar may greatly cut the cost of energy. Potential Geography affects solar energy potential because different locations receive different amounts of solar radiation. In general, areas closer to the equator receive higher amounts of solar radiation, with some variation. However, solar panels that can follow the position of the Sun can significantly increase the solar energy potential in areas that are farther from the equator. Daytime cloud cover can reduce the light available for solar cells. Land availability also has a large effect on the available solar energy. Technologies Solar power plants use one of two technologies: Photovoltaic (PV) systems use solar panels, either on rooftops or in ground-mounted solar farms, converting sunlight directly into electric power. Concentrated solar power (CSP) systems use mirrors or lenses to concentrate sunlight to extreme heat to make steam, which is converted into electricity by a turbine. Photovoltaic cells A solar cell, or photovoltaic cell, is a device that converts light into electric current using the photovoltaic effect. The first solar cell was constructed by Charles Fritts in the 1880s. The German industrialist Ernst Werner von Siemens was among those who recognized the importance of this discovery. In 1931, the German engineer Bruno Lange developed a photo cell using silver selenide in place of copper oxide, although the prototype selenium cells converted less than 1% of incident light into electricity. Following the work of Russell Ohl in the 1940s, researchers Gerald Pearson, Calvin Fuller and Daryl Chapin created the silicon solar cell in 1954. These early solar cells cost US$286/watt and reached efficiencies of 4.5–6%.
In 1957, Mohamed M. Atalla developed the process of silicon surface passivation by thermal oxidation at Bell Labs. The surface passivation process has since been critical to solar cell efficiency. Crystalline silicon now accounts for over 90% of the market. The array of a photovoltaic system, or PV system, produces direct current (DC) power which fluctuates with the sunlight's intensity. For practical use this usually requires conversion to alternating current (AC), through the use of inverters. Multiple solar cells are connected inside panels. Panels are wired together to form arrays, then tied to an inverter, which produces power at the desired voltage, and for AC, the desired frequency/phase (a rough output estimate is sketched below). Many residential PV systems are connected to the grid when available, especially in developed countries with large markets. In these grid-connected PV systems, use of energy storage is optional. In certain applications such as satellites, lighthouses, or in developing countries, batteries or additional power generators are often added as back-ups. Such stand-alone power systems permit operations at night and at other times of limited sunlight. In a "vertical agrivoltaics" system, solar cells are oriented vertically on farmland, allowing the land to both grow crops and generate renewable energy. Other configurations include floating solar farms, placing solar canopies over parking lots, and installing solar panels on roofs. Thin-film solar A thin-film solar cell is a second-generation solar cell that is made by depositing one or more thin layers, or thin film (TF), of photovoltaic material on a substrate, such as glass, plastic or metal. Thin-film solar cells are commercially used in several technologies, including cadmium telluride (CdTe), copper indium gallium diselenide (CIGS), and amorphous thin-film silicon (a-Si, TF-Si). Perovskite solar cells Concentrated solar power Concentrated solar power (CSP), also called "concentrated solar thermal", uses lenses or mirrors and tracking systems to concentrate sunlight, then uses the resulting heat to generate electricity from conventional steam-driven turbines. A wide range of concentrating technologies exists: among the best known are the parabolic trough, the compact linear Fresnel reflector, the dish Stirling and the solar power tower. Various techniques are used to track the sun and focus light. In all of these systems a working fluid is heated by the concentrated sunlight and is then used for power generation or energy storage. Thermal storage efficiently allows overnight electricity generation, thus complementing PV. CSP generates a very small share of solar power, and in 2022 the IEA said that CSP should be better paid for its storage. The levelized cost of electricity from CSP is over twice that of PV. However, the very high temperatures of CSP may prove useful to help decarbonize industries (perhaps via hydrogen) which need temperatures higher than electricity can provide. Hybrid systems A hybrid system combines solar with energy storage and/or one or more other forms of generation. Hydro, wind and batteries are commonly combined with solar. The combined generation may enable the system to vary power output with demand, or at least smooth the solar power fluctuation. There is much hydro worldwide, and adding solar panels on or around existing hydro reservoirs is particularly useful, because hydro is usually more flexible than wind and cheaper at scale than batteries, and existing power lines can sometimes be used.
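As a rough illustration of how cells, panels, arrays and inverters combine, the sketch below estimates the AC output of a grid-tied array from irradiance. It is a simplification under stated assumptions: the panel count, rating and inverter efficiency are hypothetical values, and real output is also affected by temperature, soiling and spectrum, as noted later in this article.

```python
# Minimal sketch of PV array output estimation, using hypothetical ratings.
# DC output scales with irradiance relative to standard test conditions
# (STC, 1000 W/m^2); an inverter efficiency factor converts DC to AC power.

STC_IRRADIANCE = 1000.0  # W/m^2, standard test conditions

def array_ac_output(n_panels: int,
                    panel_rating_w: float,
                    irradiance_w_m2: float,
                    inverter_efficiency: float = 0.96) -> float:
    """Approximate AC power (W) of a grid-tied array at a given irradiance."""
    dc_power = n_panels * panel_rating_w * (irradiance_w_m2 / STC_IRRADIANCE)
    return dc_power * inverter_efficiency

if __name__ == "__main__":
    # Example: 20 hypothetical 400 W panels under 750 W/m^2 of sunlight.
    print(round(array_ac_output(20, 400.0, 750.0)))  # -> 5760 (watts AC)
```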
Development and deployment Early days The early development of solar technologies starting in the 1860s was driven by an expectation that coal would soon become scarce, such as experiments by Augustin Mouchot. Charles Fritts installed the world's first rooftop photovoltaic solar array, using 1%-efficient selenium cells, on a New York City roof in 1884. However, development of solar technologies stagnated in the early 20th century in the face of the increasing availability, economy, and utility of coal and petroleum. Bell Telephone Laboratories' 1950s research used silicon wafers with a thin coating of boron. The "Bell Solar Battery" was described as 6% efficient, with a square yard of the panels generating 50 watts. The first satellite with solar panels was launched in 1957. By the 1970s, solar panels were still too expensive for much other than satellites. In 1974 it was estimated that only six private homes in all of North America were entirely heated or cooled by functional solar power systems. However, the 1973 oil embargo and 1979 energy crisis caused a reorganization of energy policies around the world and brought renewed attention to developing solar technologies. Deployment strategies focused on incentive programs such as the Federal Photovoltaic Utilization Program in the US and the Sunshine Program in Japan. Other efforts included the formation of research facilities in the United States (SERI, now NREL), Japan (NEDO), and Germany (Fraunhofer ISE). Between 1970 and 1983 installations of photovoltaic systems grew rapidly. In the United States, President Jimmy Carter set a target of producing 20% of U.S. energy from solar by the year 2000, but his successor, Ronald Reagan, removed the funding for research into renewables. Falling oil prices in the early 1980s moderated the growth of photovoltaics from 1984 to 1996. Mid-1990s to 2010 In the mid-1990s, development of both residential and commercial rooftop solar as well as utility-scale photovoltaic power stations began to accelerate again due to supply issues with oil and natural gas, global warming concerns, and the improving economic position of PV relative to other energy technologies. In the early 2000s, the adoption of feed-in tariffs—a policy mechanism that gives renewables priority on the grid and defines a fixed price for the generated electricity—led to a high level of investment security and to a soaring number of PV deployments in Europe. 2010s For several years, worldwide growth of solar PV was driven by European deployment, but it then shifted to Asia, especially China and Japan, and to a growing number of countries and regions all over the world. The largest manufacturers of solar equipment were based in China. Although concentrated solar power capacity grew more than tenfold, it remained a tiny proportion of the total, because the cost of utility-scale solar PV fell by 85% between 2010 and 2020, while CSP costs fell only 68% in the same timeframe. 2020s Despite the rising cost of materials, such as polysilicon, during the 2021–2022 global energy crisis, utility-scale solar was still the least expensive energy source in many countries due to the rising costs of other energy sources, such as natural gas. In 2022, global solar generation capacity exceeded 1 TW for the first time. However, fossil-fuel subsidies have slowed the growth of solar generation capacity. Current status About half of installed capacity is utility scale.
Forecasts Most new renewable capacity between 2022 and 2027 is forecast to be solar, surpassing coal as the largest source of installed power capacity. Utility-scale solar is forecast to become the largest source of electricity in all regions except sub-Saharan Africa by 2050. According to a 2021 study, the global electricity generation potential of rooftop solar panels is estimated at 27 PWh per year at costs ranging from $40 (Asia) to $240 per MWh (US, Europe). Its practical realization will, however, depend on the availability and cost of scalable electricity storage solutions. Photovoltaic power stations Concentrating solar power stations Commercial concentrating solar power (CSP) plants, also called "solar thermal power stations", were first developed in the 1980s. The 377 MW Ivanpah Solar Power Facility, located in California's Mojave Desert, is the world's largest solar thermal power plant project. Other large CSP plants include the Solnova Solar Power Station (150 MW), the Andasol solar power station (150 MW), and the Extresol Solar Power Station (150 MW), all in Spain. The principal advantage of CSP is the ability to efficiently add thermal storage, allowing the dispatching of electricity over up to a 24-hour period. Since peak electricity demand typically occurs at about 5 pm, many CSP power plants use 3 to 5 hours of thermal storage. Economics Cost per watt The typical cost factors for solar power include the costs of the modules, the frame to hold them, wiring, inverters, labour costs, any land that might be required, the grid connection, maintenance, and the solar insolation the location will receive. Photovoltaic systems use no fuel, and modules typically last 25 to 40 years. Thus, upfront capital and financing costs make up 80% to 90% of the cost of solar power (a back-of-the-envelope illustration follows below), which is a problem for countries where contracts may not be honoured, such as some African countries. Some countries are considering price caps, whereas others prefer contracts for difference. In many countries, solar power is the lowest-cost source of electricity. In Saudi Arabia, a power purchase agreement (PPA) was signed in April 2021 for a new solar power plant in Al-Faisaliah. The project recorded the world's lowest cost for solar PV electricity production, at USD 1.04 cents/kWh. Installation prices The cost of high-power solar modules has greatly decreased over time. In 1982 the cost per kW was approximately US$27,000; by 2006 it had dropped to approximately US$4,000 per kW. A complete PV system cost approximately US$16,000 per kW in 1992, dropping to approximately US$6,000 per kW in 2008. In 2021 in the US, residential solar cost from 2 to 4 dollars/watt (though solar shingles cost much more) and utility solar costs were around $1/watt. Productivity by location The productivity of solar power in a region depends on solar irradiance, which varies through the day and year and is influenced by latitude and climate. PV system output power also depends on ambient temperature, wind speed, solar spectrum, local soiling conditions, and other factors. Onshore wind power tends to be the cheapest source of electricity in Northern Eurasia, Canada, some parts of the United States, and Patagonia in Argentina, whereas in other parts of the world mostly solar power (or less often a combination of wind, solar and other low-carbon energy) is thought to be best.
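Because photovoltaic systems use no fuel, a back-of-the-envelope cost per kilowatt-hour can be obtained by spreading the capital cost over lifetime energy yield, as the cost-per-watt discussion above suggests. The sketch below does exactly that with hypothetical inputs; it omits financing, discounting and maintenance, so it is not a full levelised-cost calculation.

```python
# Back-of-the-envelope cost per kWh for a PV system with no fuel costs.
# All inputs are hypothetical examples; real levelised-cost calculations
# also discount future output and include maintenance and financing.

def simple_cost_per_kwh(cost_per_watt: float,
                        full_sun_hours_per_year: float,
                        lifetime_years: float) -> float:
    """Capital cost divided by lifetime energy yield, in $/kWh."""
    lifetime_kwh_per_watt = full_sun_hours_per_year * lifetime_years / 1000.0
    return cost_per_watt / lifetime_kwh_per_watt

if __name__ == "__main__":
    # Example: a $1/W utility-scale system, 1800 equivalent full-sun
    # hours/year, 30-year lifetime -> roughly 1.9 cents per kWh.
    print(round(simple_cost_per_kwh(1.0, 1800.0, 30.0), 4))  # -> 0.0185
```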
Modelling by Exeter University suggests that by 2030, solar will be least expensive in all countries except for some in north-eastern Europe. The locations with the highest annual solar irradiance lie in the arid tropics and subtropics. Deserts lying in low latitudes usually have few clouds and can receive sunshine for more than ten hours a day. These hot deserts form the Global Sun Belt circling the world. This belt consists of extensive swathes of land in Northern Africa, Southern Africa, Southwest Asia, the Middle East, and Australia, as well as the much smaller deserts of North and South America. Thus solar is (or is predicted to become) the cheapest source of energy in all of Central America, Africa, the Middle East, India, South-east Asia, Australia, and several other regions. Solar irradiance is characterized by different measurements, such as direct normal irradiance and global horizontal irradiance. Self-consumption In cases of self-consumption of solar energy, the payback time is calculated based on how much electricity is not purchased from the grid. However, in many cases the patterns of generation and consumption do not coincide, and some or all of the energy is fed back into the grid. At those times the electricity is sold, and at other times, when energy is taken from the grid, electricity is bought. The relative costs and prices obtained affect the economics. In many markets, the price paid for sold PV electricity is significantly lower than the price of bought electricity, which incentivizes self-consumption. Moreover, separate self-consumption incentives have been used in, e.g., Germany and Italy. Grid-interaction regulation has also included limitations of grid feed-in in some regions of Germany with high amounts of installed PV capacity. By increasing self-consumption, the grid feed-in can be limited without curtailment, which wastes electricity. A good match between generation and consumption is key for high self-consumption. The match can be improved with batteries or controllable electricity consumption. However, batteries are expensive, and profitability may require the provision of other services from them besides increasing self-consumption, for example avoiding power outages. Hot-water storage tanks heated electrically with heat pumps or resistance heaters can provide low-cost storage for self-consumption of solar power. Shiftable loads, such as dishwashers, tumble dryers and washing machines, can provide controllable consumption with only a limited effect on the users, but their effect on self-consumption of solar power may be limited. Energy pricing, incentives and taxes The original political purpose of incentive policies for PV was to facilitate an initial small-scale deployment to begin to grow the industry, even where the cost of PV was significantly above grid parity, to allow the industry to achieve the economies of scale necessary to reach grid parity. Since reaching grid parity, some policies are implemented to promote national energy independence, high-tech job creation and reduction of CO2 emissions. Financial incentives for photovoltaics differ across countries, including Australia, China, Germany, India, Japan, and the United States, and even across states within the US. Net metering In net metering the price of the electricity produced is the same as the price supplied to the consumer, and the consumer is billed on the difference between production and consumption.
Net metering can usually be done with no changes to standard electricity meters, which accurately measure power in both directions and automatically report the difference. It allows homeowners and businesses to generate electricity at a different time from consumption, effectively using the grid as a giant storage battery. With net metering, deficits are billed each month while surpluses are rolled over to the following month. Best practices call for perpetual roll-over of kWh credits. Excess credits upon termination of service are either lost or paid for at a rate ranging from wholesale to retail rate or above, as can excess annual credits. Community solar A community solar project is a solar power installation that accepts capital from and provides output credit and tax benefits to multiple customers, including individuals, businesses, nonprofits, and other investors. Participants typically invest in or subscribe to a certain kW capacity or kWh generation of remote electrical production. Taxes In some countries tariffs (import taxes) are imposed on imported solar panels. Grid integration Variability The overwhelming majority of electricity produced worldwide is used immediately because traditional generators can adapt to demand and storage is usually more expensive. Both solar power and wind power are sources of variable renewable power, meaning that all available output must be used locally, carried on transmission lines to be used elsewhere, or stored (e.g., in a battery). Since solar energy is not available at night, storing it so as to have continuous electricity availability is potentially an important issue, particularly in off-grid applications and for future 100% renewable energy scenarios. Solar is intermittent due to day/night cycles and variable weather conditions. However, solar power can be forecast somewhat by time of day, location, and seasons. The challenge of integrating solar power in any given electric utility varies significantly. In places with hot summers and mild winters, solar tends to be well matched to daytime cooling demands. Energy storage Concentrated solar power plants may use thermal storage to store solar energy, such as in high-temperature molten salts. These salts are an effective storage medium because they are low-cost, have a high specific heat capacity, and can deliver heat at temperatures compatible with conventional power systems. This method of energy storage is used, for example, by the Solar Two power station, allowing it to store 1.44 TJ in its 68 m³ storage tank, enough to provide full output for close to 39 hours, with an efficiency of about 99% (the implied output power is worked out below). In stand-alone PV systems, batteries are traditionally used to store excess electricity. With grid-connected photovoltaic power systems, excess electricity can be sent to the electrical grid. Net metering and feed-in tariff programs give these systems a credit for the electricity they produce. This credit offsets electricity provided from the grid when the system cannot meet demand, effectively trading with the grid instead of storing excess electricity. When wind and solar are a small fraction of the grid power, other generation techniques can adjust their output appropriately, but as these forms of variable power grow, additional balance on the grid is needed. As prices are rapidly declining, PV systems increasingly use rechargeable batteries to store a surplus to be used later at night.
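The Solar Two storage figures quoted above can be sanity-checked with unit arithmetic alone; the short calculation below simply restates the numbers from the text and derives the implied output power.

```python
# Unit arithmetic on the Solar Two storage figures quoted in the text:
# 1.44 TJ stored, discharged over close to 39 hours.

stored_energy_j = 1.44e12   # 1.44 TJ expressed in joules
discharge_hours = 39.0

seconds = discharge_hours * 3600.0
implied_power_w = stored_energy_j / seconds

# 1.44e12 J / 140,400 s is roughly 10.3e6 W, i.e. about a 10 MW output,
# consistent with a small demonstration plant.
print(round(implied_power_w / 1e6, 1))  # -> 10.3 (MW)
```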
Batteries used for grid storage can stabilize the electrical grid by leveling out peak loads for a few hours. In the future, less expensive batteries could play an important role on the electrical grid, as they can charge during periods when generation exceeds demand and feed their stored energy into the grid when demand is higher than generation. Common battery technologies used in today's home PV systems include nickel-cadmium, lead-acid, nickel metal hydride, and lithium-ion. Lithium-ion batteries have the potential to replace lead-acid batteries in the near future, as they are being intensively developed and lower prices are expected due to economies of scale provided by large production facilities such as the Tesla Gigafactory 1. In addition, the Li-ion batteries of plug-in electric cars may serve as future storage devices in a vehicle-to-grid system. Since most vehicles are parked an average of 95% of the time, their batteries could be used to let electricity flow from the car to the power lines and back. Retired electric vehicle (EV) batteries can be repurposed. Other rechargeable batteries used for distributed PV systems include sodium–sulfur and vanadium redox batteries, prominent examples of molten-salt and flow batteries, respectively. Other technologies Solar power plants, while they can be curtailed, usually simply output as much power as possible. Therefore, in an electricity system without sufficient grid energy storage, generation from other sources (coal, biomass, natural gas, nuclear, hydroelectricity) generally goes up and down in reaction to the rise and fall of solar electricity and variations in demand (see load following power plant). Conventional hydroelectric dams work very well in conjunction with solar power; water can be held back or released from a reservoir as required. Where suitable geography is not available, pumped-storage hydroelectricity can use solar power to pump water to a high reservoir on sunny days; the energy is then recovered at night and in bad weather by releasing water via a hydroelectric plant to a low reservoir, where the cycle can begin again. While hydroelectric and natural gas plants can quickly respond to changes in load, coal, biomass and nuclear plants usually take considerable time to respond to load and can only be scheduled to follow the predictable variation. Depending on local circumstances, beyond about 20–40% of total generation, grid-connected intermittent sources like solar tend to require investment in some combination of grid interconnections, energy storage or demand-side management. In countries with high solar generation, such as Australia, electricity prices may become negative in the middle of the day when solar generation is high, thus incentivizing new battery storage. The combination of wind and solar PV has the advantage that the two sources complement each other, because the peak operating times for each system occur at different times of the day and year. The power generation of such solar hybrid power systems is therefore more constant and fluctuates less than each of the two component subsystems. Solar power is seasonal, particularly in northern/southern climates away from the equator, suggesting a need for long-term seasonal storage in a medium such as hydrogen or pumped hydroelectricity. Environmental effects Solar power is cleaner than electricity from fossil fuels, so it can be better for the environment.
Solar power does not lead to harmful emissions during operation, but the production of the panels creates some pollution. The carbon footprint of manufacturing is less than 1 kg of CO2-equivalent per peak watt (Wp), and this is expected to fall as manufacturers use more clean electricity and recycled materials. Solar power carries an upfront cost to the environment via production, with a carbon payback time of several years, but offers clean energy for the remainder of the panels' 30-year lifetime. The life-cycle greenhouse-gas emissions of solar farms are less than 50 grams (g) per kilowatt-hour (kWh), but with battery storage could be up to 150 g/kWh. In contrast, a combined-cycle gas-fired power plant without carbon capture and storage emits around 500 g/kWh, and a coal-fired power plant about 1000 g/kWh (these figures are compared in the sketch below). As with all energy sources whose total life-cycle emissions come mostly from construction, the switch to low-carbon power in the manufacturing and transportation of solar devices would further reduce carbon emissions. The lifecycle surface power density of solar power varies but averages about 7 W/m², compared to about 240 for nuclear power and 480 for gas. However, when the land required for gas extraction and processing is accounted for, gas power is estimated to have a power density not much higher than solar. According to a 2021 study, obtaining 25% to 80% of electricity from solar farms in their own territory by 2050 would require the panels to cover land ranging from 0.5% to 2.8% of the European Union, 0.3% to 1.4% in India, and 1.2% to 5.2% in Japan and South Korea. Occupation of such large areas for PV farms could drive residential opposition as well as lead to deforestation, removal of vegetation and conversion of farm land. However, some countries, such as South Korea and Japan, use land for agriculture under PV, or floating solar, together with other low-carbon power sources. Worldwide, the land use of solar power has minimal ecological impact. Land use can be reduced to the level of gas power by installing on buildings and other built-up areas. Harmful materials are used in the production of solar panels, but generally in small amounts. The environmental impact of perovskite cells is difficult to estimate, but there is some concern that lead may be a problem. A 2021 International Energy Agency study projects that the demand for copper will double by 2040. The study cautions that supply needs to increase rapidly to match demand from large-scale deployment of solar and the required grid upgrades. More tellurium and indium may also be needed. Recycling may help. As solar panels are sometimes replaced with more efficient panels, the second-hand panels are sometimes reused in developing countries, for example in Africa. Several countries have specific regulations for the recycling of solar panels. Although maintenance costs are already low compared to other energy sources, some academics have called for solar power systems to be designed to be more repairable. Solar panels can increase local temperature. In large installations in the desert, the effect can be stronger than the urban heat island. A very small proportion of solar power is concentrated solar power. Concentrated solar power may use much more water than gas-fired power. This can be a problem, as this type of solar power needs strong sunlight and so is often built in deserts.
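To put the life-cycle emission figures above side by side, the following sketch computes annual emissions for a fixed amount of generation, using the approximate gram-per-kilowatt-hour values quoted in the text; the one-GWh-per-year figure is an arbitrary example, not a claim about any particular plant.

```python
# Compare annual life-cycle CO2 emissions for one GWh/year of generation,
# using the approximate gram-per-kWh figures quoted in the text.

EMISSIONS_G_PER_KWH = {
    "solar farm": 50,                # upper bound quoted for solar farms
    "solar + battery storage": 150,  # upper bound quoted with storage
    "combined-cycle gas": 500,
    "coal": 1000,
}

def annual_tonnes_co2(source: str, gwh_per_year: float) -> float:
    """Tonnes of CO2-equivalent per year for the given generation."""
    kwh = gwh_per_year * 1e6          # GWh -> kWh
    return EMISSIONS_G_PER_KWH[source] * kwh / 1e6  # grams -> tonnes

if __name__ == "__main__":
    for source in EMISSIONS_G_PER_KWH:
        # Example: 1 GWh/year; coal works out to 1000 t CO2e/yr, solar to 50.
        print(source, annual_tonnes_co2(source, 1.0), "t CO2e/yr")
```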
Politics Solar generation cannot be cut off by geopolitics once installed, unlike oil and gas, which contributes to energy security. Over 40% of global polysilicon manufacturing capacity is in Xinjiang in China, which raises concerns about human rights violations (Xinjiang internment camps). According to the International Solar Energy Society, China's dominance of manufacturing is not a problem, both because it estimates that solar manufacturing cannot grow to more than US$400 billion per year, and because, if Chinese supply were cut off, other countries would have years to create their own industry.
Technology
Energy and fuel
null
5708292
https://en.wikipedia.org/wiki/Daeodon
Daeodon
Daeodon is an extinct genus of entelodont even-toed ungulates that inhabited North America from about 29 to 15.97 million years ago, during the latest Oligocene and earliest Miocene. The type species is Daeodon shoshonensis, described from a very fragmentary holotype by Cope. Some authors synonymize it with Dinohyus hollandi and several other species (see below), but due to the lack of diagnostic material, this may be questionable. Another large member of this family, possibly larger than Daeodon, is the Asian Paraentelodon, but it is known from very incomplete material. Taxonomy The genus Daeodon was erected by the American anatomist and paleontologist Edward Drinker Cope in 1878. He classified it as a perissodactyl and thought that it was closely related to Menodus. This classification persisted until the description of "Elotherium" calkinsi in 1905, a very similar and much more complete animal from the same rocks, which was promptly assigned as a species of Dinohyus by Peterson (1909). This led to Daeodon's reclassification as a member of the family Entelodontidae. The exact relationships between Daeodon and other entelodonts are not well understood; some authors (Lucas et al., 1998) consider the greater morphological similarity of Daeodon to Paraentelodon, rather than to earlier North American entelodonts like Archaeotherium, as evidence for Daeodon being a descendant of a Late Oligocene immigration of large Asian entelodonts to North America. However, the existence of distinct specimens of Archaeotherium showing characters reminiscent of those present in both Paraentelodon and Daeodon raises the possibility of both genera actually descending from a North American common ancestor. Although not specified in Cope's original description, the name Daeodon comes from the Greek words daios, meaning "hostile" or "dreadful", and odon, meaning "teeth". Species The type species of Daeodon is D. shoshonensis, which is based on a fragment of a lower jaw from the John Day Formation of Oregon. Several other species were assigned to the genus in the subsequent decades, like D. calkinsi, D. mento and D. minor. It had been suggested since 1945 that two other taxa were actually junior synonyms of Daeodon, but the formalization of this referral did not take place until the work of Lucas et al. (1998). Ammodon leidyanum, named by Cope's rival, O. C. Marsh, and Dinohyus hollandi, a complete skeleton from the Agate Springs quarry of Nebraska, were found to be indistinguishable from each other, and in turn both were indistinguishable from D. shoshonensis. With the exception of D. calkinsi, which was tentatively excluded from Daeodon, the other previously recognized species of Daeodon were also synonymized with D. shoshonensis. That same year, an obscure entelodont, Boochoerus humerosum, was also synonymized with Daeodon by Foss and Fremd (1998); although its status as a distinct species was retained, they noted that the differences could still be attributed to individual or population variation or sexual dimorphism. Description Daeodon shoshonensis is the largest-known entelodont; known adult individuals had skulls about long and were about tall at the shoulders. It is differentiated from other entelodonts by a suite of unique dental characters, the shape and relatively small size of the cheekbone flanges of its skull compared to those of Archaeotherium, and the small size of its chin tubercle, as well as features of its carpus and tarsus and the fusion of the bones of the lower leg.
Like other entelodonts, its limbs were long and slender, with the bones of the foreleg fused together and with only two toes on each foot. It also had a relatively lightly constructed neck for the size of its head, whose weight was mostly supported by muscles and tendons attached to the tall spines of the thoracic vertebrae, similar to those of modern-day bison and white rhinoceros. Paleoecology Habitat Daeodon had a wide range in North America, with many fossils found in the Agate Fossil Beds, representing an environment in a transition period between dense forests and expansive prairie, likely a major cause of their extinction in the early Miocene. It adapted to the grassland with a more cursorial body plan than more basal entelodonts like Archaeotherium, losing its dewclaws entirely and acquiring proximally fused metacarpals and shoulder musculature similar to that of bison. The Agate Springs bonebed was a floodplain environment with wet and dry seasons. Daeodon shared this landscape with the small gazelle-like camel Stenomylus, the large browsing chalicothere Moropus, several species of predatory coyote- to wolf-sized amphicyonids that lived in packs, land beavers (Palaeocastor) that filled the ecological niche of modern prairie dogs, and thousands of small herd-living rhinoceroses. The rhinos suffered massive periodic die-offs in the dry season, but Daeodon fossils are rare, which suggests they were neither social animals nor especially attracted to carrion. Diet Daeodon was omnivorous like all other entelodonts. Enamel patterns suggest eating of nuts, roots, and vines, as well as meat and bones. The superficial similarity to peccaries, hippos, and bears suggests a wide range of plants in Daeodon's diet. The dry seasons of North America at the time could get very harsh, so they may have supplemented their water intake by eating grape vines. The extent of its carnivory is debated, but tooth wear suggests they specialized in crushing bone and ripping meat, and bite marks on chalicothere bones suggest they either hunted or scavenged large herbivores. Foss (2001) argues its head was far too heavy to be effective in taking down large prey, so it must have relied exclusively on scavenging, but its bison-like adaptations for running, the stereoscopic vision characteristic of predators, and evidence of predation in entelodonts call this interpretation into question. The uncertainty of their diets suggests they were likely opportunistic omnivores similar to bears, eating whatever they needed depending on the circumstance. Behavior Entelodonts partook in intraspecific face biting, known from tooth marks on their skulls. Males would fight for dominance, possibly using their mandibular tubercles as protection in addition to their function as muscle attachments. Sexual dimorphism of the jugal projections exists in Archaeotherium, and with a smaller Daeodon sample size, such dimorphism cannot be ruled out for Daeodon. If dimorphic, the function of the expanded jugals was likely display, supporting large preorbital glands similar to those that forest hogs possess for chemical communication.
Biology and health sciences
Other artiodactyla
Animals
5709567
https://en.wikipedia.org/wiki/New%20Mexico%20whiptail
New Mexico whiptail
The New Mexico whiptail (Aspidoscelis neomexicanus) is a female-only species of lizard found in New Mexico and Arizona in the southwestern United States, and in Chihuahua in northern Mexico. It is the official state reptile of New Mexico. It is one of many lizard species known to be parthenogenetic. Individuals of the species can be created either through the hybridization of the little striped whiptail (A. inornatus) and the western whiptail (A. tigris), or through the parthenogenetic reproduction of an adult New Mexico whiptail. The hybridization of these species prevents healthy males from forming, even though males exist in both parent species (see sexual differentiation). Parthenogenesis allows the all-female population to reproduce. This combination of interspecific hybridization and parthenogenesis exists as a reproductive strategy in several species of whiptail lizard within the genus Aspidoscelis, to which the New Mexico whiptail belongs. Description The New Mexico whiptail grows from in length, and is typically overall brown or black in color with seven pale yellow stripes from head to tail. Light-colored spots often occur between the stripes. They have a white or pale blue underside, with a blue or blue-green colored throat. They are slender-bodied, with a long tail that is more commonly blue-green in their infant stage, melding into the same spotted brown and yellow color as they age. Behavior Like most other whiptail lizards, the New Mexico whiptail is diurnal and insectivorous. They are wary, energetic, and fast-moving, darting for cover if approached. They are found in a wide variety of semi-arid habitats, including grassland, rocky areas, shrubland, and mountainside woodlands. Reproduction occurs through parthenogenesis, with up to four unfertilized eggs being laid in midsummer, and hatching approximately eight weeks later. The New Mexico whiptail lizard is a crossbreed of a western whiptail, which lives in the desert, and the little striped whiptail, which favors grasslands. The whiptail engages in mating behavior with other females of its own species, giving rise to the nickname "lesbian lizards". A common theory is that this behavior stimulates ovulation, as those that do not "mate" do not lay eggs.
Biology and health sciences
Lizards and other Squamata
Animals
4283745
https://en.wikipedia.org/wiki/Algebraic%20expression
Algebraic expression
In mathematics, an algebraic expression is an expression built up from constants (usually, algebraic numbers), variables, and the basic algebraic operations: addition (+), subtraction (−), multiplication (×), division (÷), whole-number powers, and roots (fractional powers). For example, 3x² − 2xy + c is an algebraic expression. Since taking the square root is the same as raising to the power 1/2, the following is also an algebraic expression: √((1 − x²)/(1 + x²)). An algebraic equation is an equation involving polynomials, for which algebraic expressions may be solutions. If the set of constants is restricted to numbers, any algebraic expression can be called an arithmetic expression. However, algebraic expressions can be used on more abstract objects, such as in abstract algebra. If the constants are restricted to integers, the set of numbers that can be described with an algebraic expression are called algebraic numbers. By contrast, transcendental numbers like π and e are not algebraic, since they are not derived from integer constants and algebraic operations. Usually, π is constructed as a geometric relationship, and the definition of e requires an infinite number of algebraic operations. More generally, expressions which are algebraically independent from their constants and/or variables are called transcendental. Terminology Algebra has its own terminology to describe parts of an expression: 1 – exponent (power), 2 – coefficient, 3 – term, 4 – operator, 5 – constant, x, y – variables. Conventions Variables By convention, letters at the beginning of the alphabet (e.g. a, b, c) are typically used to represent constants, and those toward the end of the alphabet (e.g. x, y and z) are used to represent variables. They are usually written in italics. Exponents By convention, terms with the highest power (exponent) are written on the left; for example, x² is written to the left of x. When a coefficient is one, it is usually omitted (e.g. 1x² is written x²). Likewise, when the exponent (power) is one, it is omitted (e.g. 3x¹ is written 3x), and, when the exponent is zero, the result is always 1 (e.g. 3x⁰ is written 3, since x⁰ is always 1). In roots of polynomials The roots of a polynomial expression of degree n, or equivalently the solutions of a polynomial equation, can always be written as algebraic expressions if n < 5 (see quadratic formula, cubic function, and quartic equation). Such a solution of an equation is called an algebraic solution. But the Abel–Ruffini theorem states that algebraic solutions do not exist for all such equations (just for some of them) if n ≥ 5. Rational expressions Given two polynomials P(x) and Q(x), their quotient P(x)/Q(x) is called a rational expression or simply rational fraction. A rational expression is called proper if deg P(x) < deg Q(x), and improper otherwise. For example, the fraction 2x/(x² − 1) is proper, and the fractions (x² + x + 1)/(x² − 1) and (x³ + x² + 1)/(x² − 1) are improper. Any improper rational fraction can be expressed as the sum of a polynomial (possibly constant) and a proper rational fraction. In the first example of an improper fraction one has (x² + x + 1)/(x² − 1) = 1 + (x + 2)/(x² − 1), where the second term is a proper rational fraction. The sum of two proper rational fractions is a proper rational fraction as well. The reverse process of expressing a proper rational fraction as the sum of two or more fractions is called resolving it into partial fractions. For example, 2x/(x² − 1) = 1/(x − 1) + 1/(x + 1). Here, the two terms on the right are called partial fractions. Irrational fraction An irrational fraction is one that contains the variable under a fractional exponent.
An example of an irrational fraction is x^(1/2)/(x^(1/3) + x^(1/2)). The process of transforming an irrational fraction into a rational fraction is known as rationalization. Every irrational fraction in which the radicals are monomials may be rationalized by finding the least common multiple of the indices of the roots, and substituting the variable for another variable raised to that least common multiple as exponent. In the example given, the least common multiple of the indices 2 and 3 is 6, hence we can substitute x = u⁶ to obtain u³/(u² + u³) = u/(1 + u). Algebraic and other mathematical expressions Algebraic expressions can be compared with several other types of mathematical expressions by the type of elements they may contain, according to common but not universal conventions. A rational algebraic expression (or rational expression) is an algebraic expression that can be written as a quotient of polynomials, such as x² + 4x + 4. An irrational algebraic expression is one that is not rational, such as √(x + 4).
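Written out in full (the specific fractions are the illustrative choices used above, not the article's original formulas), the partial-fraction and rationalization examples read, in LaTeX notation:

```latex
% Partial fractions: verifying the decomposition used above.
\frac{1}{x-1} + \frac{1}{x+1}
  = \frac{(x+1) + (x-1)}{(x-1)(x+1)}
  = \frac{2x}{x^{2}-1}

% Rationalization: substituting x = u^6 (6 = lcm of the root indices 2 and 3)
% turns the irrational fraction into a rational one.
\frac{x^{1/2}}{x^{1/3} + x^{1/2}}
  \xrightarrow{\; x \,=\, u^{6} \;}
  \frac{u^{3}}{u^{2} + u^{3}}
  = \frac{u}{1+u}
```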
Mathematics
Algebra: General
null
4285650
https://en.wikipedia.org/wiki/Nodosauridae
Nodosauridae
Nodosauridae is a family of ankylosaurian dinosaurs known from the Late Jurassic to the Late Cretaceous periods in what is now Asia, Europe, North America, and possibly South America. While traditionally regarded as a monophyletic clade and the sister taxon to the Ankylosauridae, some analyses recover it as a paraphyletic grade leading to the ankylosaurids. Description Nodosaurids, like their sister group the ankylosaurids, were heavily armored dinosaurs adorned with rows of bony armor nodules and spines (osteoderms), which were covered in keratin sheaths. Nodosaurids, like other ankylosaurians, were small- to large-sized, heavily built, quadrupedal, herbivorous dinosaurs possessing small, leaf-shaped teeth. Unlike ankylosaurids, nodosaurids lacked mace-like tail clubs, instead having more flexible tail tips. Many nodosaurids had spikes projecting outward from their shoulders. One particularly well-preserved nodosaurid "mummy", the holotype of Borealopelta markmitchelli, preserves a nearly complete set of armor in life position, as well as the keratin covering and mineralized remains of the underlying skin, which indicate reddish pigments in a countershading pattern. Classification The family Nodosauridae was erected by Othniel Charles Marsh in 1890, and anchored on the genus Nodosaurus. The clade Nodosauridae was first informally defined by Paul Sereno in 1998 as "all ankylosaurs closer to Panoplosaurus than to Ankylosaurus," a definition followed by Vickaryous, Teresa Maryańska, and Weishampel in 2004. Vickaryous et al. considered two genera of nodosaurids to be of uncertain placement (incertae sedis): Struthiosaurus and Animantarx, and considered the most primitive member of the Nodosauridae to be Cedarpelta. Following the publication of the PhyloCode, Nodosauridae needed to be formally defined following certain parameters, including that the type genus Nodosaurus was required as an internal specifier. In formally defining Nodosauridae, Madzia and colleagues followed the previously established use for the clade, defining it as the largest clade including Nodosaurus textilis but not Ankylosaurus magniventris. As all phylogenies referenced included both Panoplosaurus and Nodosaurus within the same group relative to Ankylosaurus, the addition of another internal specifier was deemed unnecessary. Nodosauridae is traditionally composed of the basal clade Polacanthinae (sometimes recovered outside of the Nodosauridae), as well as the Panoplosaurini and Struthiosaurini within the Nodosaurinae. Nodosaurinae is defined in the PhyloCode as "the largest clade containing Nodosaurus textilis, but not Hylaeosaurus armatus, Mymoorapelta maysi, and Polacanthus foxii". Panoplosaurini is defined in the PhyloCode as "the largest clade containing Panoplosaurus mirus, but not Nodosaurus textilis and Struthiosaurus austriacus", while Struthiosaurini has a similar definition: "the largest clade containing Struthiosaurus austriacus, but not Nodosaurus textilis and Panoplosaurus mirus". Topology A below demonstrates these relationships, following the phylogenetic analyses of Rivera-Sylva and colleagues (2018), with clade names added by definition from Madzia et al. (2021). However, in 2023, Raven and colleagues proposed an alternate phylogeny for nodosaurids; instead of the traditional dichotomous split between nodosaurids and ankylosaurids, their analyses resulted in a paraphyletic grade of these taxa comprising the monophyletic clades Panoplosauridae, Polacanthidae and Struthiosauridae.
These results are displayed in Topology B. Nodosaurinae is defined as the largest clade containing Nodosaurus textilis but not Hylaeosaurus armatus, Mymoorapelta maysi, or Polacanthus foxii. It was formally defined in 2021 by Madzia and colleagues, who took up the name used by Othenio Abel in 1919, who created the term to unite Ankylosaurus, Hierosaurus and Stegopelta. The name has been significantly refined in content since Abel first used it to unite all quadrupedal, plate-armoured ornithischians, now including a significant number of taxa from the Early Cretaceous through Maastrichtian of Europe, North America, and Argentina. Previous informal definitions of the group described the clade as all taxa closer to Panoplosaurus, or Panoplosaurus and Nodosaurus, than to the early ankylosaurs Sarcolestes, Hylaeosaurus, Mymoorapelta or Polacanthus, which was reflected in the specifiers chosen by Madzia et al. when formalizing the clade following the PhyloCode. The 2018 phylogenetic analysis of Rivera-Sylva and colleagues was used as the primary reference for Nodosaurinae by Madzia et al., in addition to the supplemental analyses of Thompson et al. (2012), Arbour and Currie (2016), Arbour et al. (2016), and Brown et al. (2017). Panoplosaurini is defined as the largest clade containing Panoplosaurus mirus, but not Nodosaurus textilis or Struthiosaurus austriacus, and was named in 2021 by Madzia and colleagues for the group found in many previous analyses, both morphological and phylogenetic. Panoplosaurini includes not only the Late Cretaceous Panoplosaurus, Denversaurus and Edmontonia, but also the mid-Cretaceous Animantarx and Texasetes, as well as possibly Patagopelta, although the study describing Patagopelta placed it only as a nodosaurine outside Panoplosaurini. The approximately equivalent clade Panoplosaurinae was named in 1929 by Franz Nopcsa, but was not significantly used until Robert Bakker reused the name in 1988, alongside the new clades Edmontoniinae and Edmontoniidae, which were considered to unite Panoplosaurus, Denversaurus and Edmontonia to the exclusion of other ankylosaurs. As none of the clades were commonly used or formally named following the PhyloCode, Madzia et al. named Panoplosaurini instead, as the group of taxa fell within the clade Nodosaurinae, and having the same -inae suffix on both parent and child taxon could be confusing in the future. The 2018 phylogenetic analysis of Rivera-Sylva and colleagues was used as the primary reference for Panoplosaurini by Madzia et al., in addition to the supplemental analyses of Arbour et al. (2016), Brown et al. (2017), and Zheng et al. (2018). Struthiosaurini is defined as the largest clade containing Struthiosaurus austriacus, but not Nodosaurus textilis or Panoplosaurus mirus, and was named in 2021 by Madzia and colleagues for the relatively stable group found in many previous analyses. Struthiosaurini includes not only the Late Cretaceous European Struthiosaurus, but also the Early Cretaceous European Europelta, the Late Cretaceous European Hungarosaurus, and Stegopelta and Pawpawsaurus from the mid-Cretaceous of North America. The approximately equivalent clade Struthiosaurinae, named in 1923 by Franz Nopcsa, was previously used to include European nodosaurids, but was never formally named following the PhyloCode, so Madzia et al.
named Struthiosaurini instead, as the group of taxa fell within the clade Nodosaurinae, and having the same -inae suffix on both parent and child taxon could be confusing in the future. The 2018 phylogenetic analysis of Rivera-Sylva and colleagues was used as the primary reference for Struthiosaurini by Madzia et al., in addition to the supplemental analyses of Arbour et al. (2016), Brown et al. (2017), and Zheng et al. (2018). Biogeography Nodosaurids are known from diverse remains throughout Europe, Asia, and North America. Some Gondwanan ankylosaurs, including the Antarctic Antarctopelta and the Argentinian Patagopelta, were originally regarded as belonging to the Nodosauridae, but later analyses provided support for them belonging to the Parankylosauria, a separate lineage of basal ankylosaurs restricted to the Southern Hemisphere.
Biology and health sciences
Ornitischians
Animals
4286482
https://en.wikipedia.org/wiki/Crangon%20crangon
Crangon crangon
Crangon crangon is a species of caridean shrimp found across the northeastern Atlantic Ocean. Its range extends from the White Sea in the north of Russia to the coast of Morocco, including the Baltic Sea, and appears also throughout the Mediterranean and Black Seas. Commercially important, it is fished mainly in the southern North Sea. Common names include brown shrimp, common shrimp, bay shrimp, and sand shrimp, while translation of its French name (or its Dutch equivalent ) sometimes leads to the English version grey shrimp. Description Adults are typically long, although individuals up to have been recorded. The animals have cryptic colouration, being a sandy brown colour, which can be changed to match the environment. They live in shallow water, which can also be slightly brackish, and feed nocturnally. During the day, they remain buried in the sand to escape predatory birds and fish, with only their antennae protruding. Crangon is classified in the family Crangonidae, and shares the family's characteristic subchelate first pereiopods (where the movable finger closes onto a short projection, rather than a similarly sized fixed finger) and short rostrum. Distribution and ecology C. crangon has a wide range, extending across the northeastern Atlantic Ocean from the White Sea in the north of Russia to the coast of Morocco, including the Baltic Sea, as well as occurring throughout the Mediterranean and Black Seas. Despite its wide range, however, little gene flow occurs across certain natural barriers, such as the Strait of Gibraltar or the Bosphorus. The populations in the western Mediterranean Sea are thought to be the oldest, with the species' spread across the north Atlantic thought to postdate the Pleistocene. Adults live epibenthically (on or near the sea-floor) especially in the shallow waters of estuaries or near the coast. It is generally highly abundant, and has a significant effect on the ecosystems where it lives. Lifecycle Females reach sexual maturity at a length around , while males are mature at . The young hatch from their eggs into planktonic larvae. These pass through five moults before reaching the postlarval stage, when they settle to the sea-floor. Fishery Historically, the commercial fishery was accomplished by horse-drawn beam trawls on both sides of the Dover straits. In the sandy shallows of Morecambe Bay (Lancashire, UK) horses have been replaced by tractors. Some small fishing vessels also use beam trawls for brown shrimp. A few artisanal fishermen use hand-pushed nets. In all UK shrimp fisheries, the catch is first 'riddled' to release the young of shrimps and fish. The shrimps are then traditionally boiled on board before landing. Over of C. crangon were caught in 1999, with Germany and the Netherlands taking over 80% of this total. The UK lands an annual average of 1000 tonnes of brown shrimp, but the catch is highly variable between 500 and 1500 tonnes. In the Lancashire fishery for brown shrimp it has been shown that landings in any year are related to the annual catch, average annual air temperature (inverse) and total rainfall in the previous year. That has enabled a good prediction of annual landings one year in advance. Moreover, for the port of Lytham, the abundance of shrimp (annual catch per unit effort) was found to be closely correlated with the mean annual Zürich sunspot number for the period 1965-1975. Given that sunspot numbers are predictable, this provides another tool for the prediction of annual shrimp catch. Sunspot cycle No. 
23 (1997–2008) is a good example of the correlation between UK annual brown shrimp catch and mean annual sunspot number. Greenpeace Germany classifies the brown shrimp as an "unsustainable" choice that should be avoided. Brown shrimp have been documented to contain microplastics. As food The consumption of brown shrimp is popular in Belgium, the Netherlands, northern Germany, and Denmark. Shrimp in general are known as garnalen in Dutch. It is the basis of the dish tomate-crevettes, where the shrimp are mixed with mayonnaise and fresh parsley, and served in a hollowed-out uncooked tomato. The shrimp croquette is another Belgian speciality; the shrimp are in the interior of the battered croquette along with béchamel sauce. Freshly cooked, unpeeled brown shrimp are often served as a snack accompanying beer, typically a sour ale or Flemish red such as Rodenbach. In Lancashire, England, the peeled brown shrimps are mixed with butter and spices (including nutmeg or mace) to make potted shrimps, a dish traditionally eaten with bread.
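The one-year-ahead landings prediction described above is, in effect, a multiple regression of next year's landings on this year's catch, mean air temperature and rainfall. The sketch below shows the general shape of such a model only; every number in it is a hypothetical placeholder, not published Lancashire fishery data, and the fitted coefficients have no real-world meaning.

```python
# Illustrative sketch of a landings-prediction regression of the kind
# described above: next year's landings modelled from this year's catch,
# mean air temperature (an inverse relationship) and rainfall.
# All values are hypothetical placeholders, not fishery data.
import numpy as np

# Hypothetical predictor rows: [annual catch (t), mean air temp (C), rainfall (mm)]
X = np.array([
    [900.0, 9.5, 1100.0],
    [1200.0, 10.1, 950.0],
    [700.0, 10.8, 1300.0],
    [1000.0, 9.2, 1050.0],
])
# Hypothetical landings in the following year (t)
y = np.array([1050.0, 980.0, 1150.0, 1100.0])

# Ordinary least squares with an intercept term.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict next year's landings from this year's (hypothetical) observations.
this_year = np.array([1.0, 950.0, 9.8, 1200.0])
print(float(this_year @ coef))
```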
Biology and health sciences
Shrimps and prawns
Animals
3145063
https://en.wikipedia.org/wiki/Stack%20interchange
Stack interchange
A directional interchange, colloquially known as a stack interchange, is a type of grade-separated junction between two controlled-access highways that allows for free-flowing movement to and from all directions of traffic. These interchanges eliminate the problems of weaving, have the highest vehicle capacity, and allow vehicles to travel shorter distances than other types of interchanges. The first directional interchange built in the world was the Four Level Interchange, which opened to Los Angeles traffic in 1949. Definition A directional interchange is a grade-separated junction between two roads where all turns that require crossing over or under the opposite road's lanes of travel use ramps that make a direct or semi-direct connection. The difference between direct and semi-direct connections is how much the motorist deviates from the intended direction of travel while on the ramp. Direct ramps are shorter and can handle higher traveling speeds than semi-direct ramps. Four-level stack The four-level stack (or simply four-stack) has one major freeway crossing another freeway with a viaduct, with connector flyover ramps crossing on two further levels. This type of interchange does not usually permit U-turns. The four-level stack creates two "inverse" dual carriageways: on the turn ramps crossing the middle section, traffic drives on the opposite side of oncoming traffic from usual. United States The first stack interchange was the Four Level Interchange (renamed the Bill Keene Memorial Interchange), built in Los Angeles, California, and completed in 1949, at the junction of US Route 101 (US 101) and State Route 110 (SR 110). Since then, the California Department of Transportation (Caltrans) has built eight more four-level stacks throughout the state of California, notably the Judge Harry Pregerson Interchange, as well as a larger number of three-level and four-level stack–cloverleaf hybrids (where the least-used left-turning ramp is built as a cloverleaf-like 270-degree loop). The stack interchange between I-10 and I-405 is a three-level stack, since the semi-directional ramps are spaced out far enough that they do not need to cross each other at a single point as in a conventional four-level stack. The first four-level stack interchange in Texas was built in Fort Worth at the intersection of I-35W and I-30 (originally I-20) near downtown. This interchange, finished in 1958, was known as "The Pretzel" or the "Mixmaster" by locals. The original contract cost was $1,220,000. Improvements to the old Mixmaster over the past 60 years include an upgrade to a Texas-style five-level stack interchange (see below). One of the first four-level stack interchanges in the northeastern United States was constructed in the late 1960s over I-84 in Farmington, Connecticut, for the controversial I-291 beltway around the city of Hartford. Most of the I‑291 beltway was later cancelled, and the sprawling stack lay dormant for almost 25 years. In 1992 the extension of Connecticut Route 9 to I-84 used the I‑291 right-of-way and some sections of the abandoned interchange. Several ramps still remain unused, including abandoned roadbed for I-291 both north and south of the complex.
Four-level stacks are used for the interchanges between: The Stack in Phoenix, Arizona SuperRedTan Interchange in Mesa, Arizona Mini Stack in Phoenix, Arizona I-5 and SR 4 in Stockton, California; I-980, I-580 and SR 24 in Oakland, California; I-71/I-75 and I-275 in Erlanger, Kentucky (Cincinnati metropolitan area); I-71 and I-90/I-490 in Cleveland, Ohio; I-77 and I-480 in Independence, Ohio (just outside Cleveland); I-77 and I-490 in Cleveland, Ohio; I-77 and I-485 in Charlotte, North Carolina; I-65 and I-440 in Nashville, Tennessee; I-20 and I-459 near Birmingham, Alabama; I-90 and I-405 in Bellevue, Washington; I‑110 and US 61/US 190 in Baton Rouge, Louisiana; I-75 and US 35 in Dayton, Ohio; I-75 and I-696 near Detroit, Michigan; I-69 and I-475 in Flint, Michigan; I‑70/I‑270 and I‑270/I‑64 St. Louis, Missouri; The Marquette Interchange between I-794, I-94, and I-43 in Milwaukee, Wisconsin; and The Zoo Interchange between I-894, I-94, I-41, and US 41/US 45 Milwaukee, Wisconsin. Another well-known stack interchange lies west of Baltimore, Maryland, serving as the junction between I-695 and I-70. It was originally built for a planned extension of I‑70 into the city. Due to strong opposition, I‑70 ends at a park and ride east. As a result, the road east of I‑695 sees little traffic compared to the high volumes to and from the west. Another four-level stack interchange in the Baltimore area is located at the northeastern junction between I-695 and I-95. The stack was built as part of a massive I-95 reconstruction project that includes high-occupancy toll lanes (HOT lanes), designed to relieve congestion between Baltimore and its northeastern suburbs. The Springfield Interchange, south of Washington, D.C., was rebuilt into a four-level stack to accommodate I-95's transition from the Capital Beltway to its own alignment further south into Virginia. This was necessitated by the inadequacy of the original configuration that was caused by the rerouting of I-95 onto the Beltway after its cancellation within Washington and points north. In Lone Tree, Colorado, there is a four-level stack serving I-25, the eastern end of C-470 and the southern end of E-470. In Thornton, Colorado, there is another stack serving I-25 and E-470 at its northern end as it continues west as the Northwest Parkway. Canada The initial design of Highway 407 had several four-level stack interchanges planned at junctions with existing 400-series highways, but only one example was built: the interchange at Highway 400 in Vaughan, Ontario, which is also the only true four-level stack in Canada. Highway 407's other proposed four-level stacks at Highway 410 and Highway 404 were reduced to three-level cloverstack interchanges, with loop ramps being built instead of a fourth level of semi-directional ramps. Similarly, the interchange with Highway 427 has four levels but only two semi-directional flyover ramps that cross each other connecting to Highway 427 south of that junction. Two loop ramps link Highway 407 with Highway 427 north of that junction. Europe In Belgium, on the Brussels Ring there are four-level stack interchanges: The Grand-Bigard and Machelen interchange (only partly in use). In Germany, there is one, the Wetzlarer Kreuz. In Greece, there is also four stack interchange near Metamorfosi, which connects the A1 and A6 (Attiki Odos) motorways. In the Netherlands there is currently four-level stack interchange: the Prins Clausplein near The Hague. It forms the junction of the A4 and A12. 
In the United Kingdom there are four-level stacks: at the junction of the M4 and M25 near Heathrow Airport in London (the Thorney Interchange), at the junction of the M23 and M25 to the south of London (the Merstham Interchange), and at the junction of the M4 and M5 near Bristol (the Almondsbury Interchange). The M4/M25 junction is particularly unusual as it also has a railway line bisecting it at its lowest level. The M4/M25 junction is slightly offset so there is no point where all four levels are directly above each other. M25 (a north–south road at this junction) is offset to the east by approximately . The junction of the A19 and A66 in Teesside uses a three-level variant, with a 270-degree loop allowing southbound A19 traffic to exit to the westbound A66. Southern Hemisphere The Light Horse Interchange at the junction of the M4 and M7 is a four-level stack interchange in Sydney, New South Wales, Australia. Opened in late 2005, it is the largest in the Southern Hemisphere. The EB Cloete Interchange just outside Durban, South Africa, is another four-level stack interchange. The N3 is the busiest highway in South Africa and a very busy truck route. Because Johannesburg is not located near a body of water, most of the city's exports travel through the Port of Durban. The N2 connects Cape Town with Durban and serves the South African cities of Port Elizabeth, East London and George and the towns of Grahamstown, Port Shepstone, Richards Bay and the iSimangaliso Wetland Park. Two busy roads intersect at the junction. A four-level stack interchange was chosen to serve the high volumes of traffic. The Mount Edgecombe Interchange is another four-level stack interchange just outside Durban, South Africa, and is the intersection between the N2 (to Durban and KwaDukuza) and the M41 (to Mount Edgecombe and uMhlanga). The interchange which was previously a simple diamond interchange was upgraded to a four-level/four-stack interchange, with the upgraded interchange opened in October 2018. A four-level stack interchange was chosen to serve the increasing volumes of traffic in the uMhlanga/Mount Edgecombe area. Five-level stack Texas-style stack In Texas, many stacks contain five levels. They usually have the same configuration as four-level stacks, but frontage roads add a fifth level. The frontage roads usually intersect with traffic lights and are similar to a grid of nearby one-way streets. A common setup is for one mainline to go below grade and another to go above grade. The intersection of the frontage roads is typically at grade or close to it. Two pairs of left-turn connectors are built above these. The Dallas–Fort Worth metroplex has several five-level stacks, most notably the High Five Interchange between US 75 and I-635; completed in 2005 and currently the tallest interchange in the world. Others can be found at the interchanges between State Highway 121 (SH 121) and the Dallas North Tollway, SH 121 and I-35E/US 77, I-30 and I-35W, I-30 and President George Bush Turnpike and others which are technically five levels but do not fit under a Texas-style stack configuration (i.e. the extra level being located away from the central stack or existing in only one direction). The Houston area has seven five-level stack interchanges along Beltway 8: at I-10 east and west of downtown, I-69 northeast and southwest of downtown, I-45 north and south of downtown, and US 290 in the beltway's northwest quadrant. 
The newly reconstructed interchange of I-610 and I-69, with the new I‑610 northbound feeder road built underground and the new I-610 southbound feeder road overpass, is also a five-level stack interchange. Though not a Texas-style stack in the above sense, an unusual stack is nonetheless found in Houston that features more than four levels of traffic but whose fifth level exists in only one direction. In 2011, the previously four-level stack interchange between I-610 and I-10 on the city's east side gained a new (though long-planned) level of complexity with the opening of four ramps connecting the new US 90 (Crosby Freeway) to the east, featuring direct movements for the new freeway to and from the southeast quadrant of I-610, to westbound I-10, and from eastbound I-10. It is the latter ramp which gives the interchange the fifth level, as US 90 to I-10 westbound merges onto I-10 before crossing I-610. (None of the frontage roads for these highways cross the interchange itself, and thus do not factor into the complexity of the stack.) More than 40 bridges make up the five-level stack interchange known as the Big I between I-40 and I-25 in Albuquerque, New Mexico. China is also home to many Texas-style stack interchanges. For example the Nanjing's Yingtian Street Elevated has one each where it intersects the Inner Ring Road twice. Other five-level stacks Sometimes a fifth level is added for HOV connectors. An example of this exists in Los Angeles, California, at the Judge Harry Pregerson Interchange. The connector from HOV southbound 110 to HOV westbound 105 can be at the same level as the connector from mixed eastbound 105 to mixed northbound 110, but the connector from HOV southbound 110 to HOV eastbound 105 needs to be higher level, since it crosses over the former connector. Another case is where connection to nearby arterials suggests that another level may be useful, thus making the interchange more complicated but easier to use. In the Atlanta area, a side ramp forms the fifth level of the Tom Moreland Interchange, colloquially known as Spaghetti Junction, found in DeKalb County, Georgia. Six-level stack There is a six-level stack on the Yan'an East Road Interchange () in Puxi, Shanghai, with no dedicated HOV/bus/truck lanes. It is six-level stack because it is formed by two elevated highways, Nanbei Elevated Road and Yan'an Elevated Road with service roads and a footbridge underneath. The centrally located interchange has a central pillar known as the Nine-Dragon Pillar (). The story is that after several construction accidents, a monk suggested the nine-dragon be welcomed with a bas relief sculpture depicting the dragon. An unusual six-level stack is located at the junction between Interstate 35E and I-635 in Dallas, Texas, and does not contain any service or frontage roads. The interchange features two levels of highway with the top three levels consisting of direct connection ramps and HOV connectors. A single ramp leading from I-635 westbound to I-35E southbound weaves underneath the I-635 eastbound bridge, making the interchange six levels. The interchange between I-35E and the Sam Rayburn Tollway in Lewisville, Texas, although similar in design to five-level stacks elsewhere in Texas, also qualifies as a six-level stack, since the ramp connecting the eastbound Sam Rayburn Tollway with northbound I-35E goes over the fifth-level ramps connecting I-35E in both directions with the Sam Rayburn Tollway. 
The ramp connecting the westbound Sam Rayburn Tollway with southbound I-35E is on the fourth level of the interchange, going under the fifth-level ramps connecting both directions of I-35E with the Sam Rayburn Tollway.
Technology
Road infrastructure
null
3148933
https://en.wikipedia.org/wiki/IUPAC%20nomenclature%20of%20inorganic%20chemistry
IUPAC nomenclature of inorganic chemistry
In chemical nomenclature, the IUPAC nomenclature of inorganic chemistry is a systematic method of naming inorganic chemical compounds, as recommended by the International Union of Pure and Applied Chemistry (IUPAC). It is published in Nomenclature of Inorganic Chemistry (which is informally called the Red Book). Ideally, every inorganic compound should have a name from which an unambiguous formula can be determined. There is also an IUPAC nomenclature of organic chemistry. System The names "caffeine" and "3,7-dihydro-1,3,7-trimethyl-1H-purine-2,6-dione" both signify the same chemical compound. The systematic name encodes the structure and composition of the caffeine molecule in some detail, and provides an unambiguous reference to this compound, whereas the name "caffeine" simply names it. These advantages make the systematic name far superior to the common name when absolute clarity and precision are required. However, for the sake of brevity, even professional chemists will use the non-systematic name almost all of the time, because caffeine is a well-known common chemical with a unique structure. Similarly, H2O is most often simply called water in English, though other chemical names do exist. Single atom anions are named with an -ide suffix: for example, H− is hydride. Compounds with a positive ion (cation): The name of the compound is simply the cation's name (usually the same as the element's), followed by the anion. For example, NaCl is sodium chloride, and CaF2 is calcium fluoride. Cations of transition metals able to take multiple charges are labeled with Roman numerals in parentheses to indicate their charge. For example, Cu+ is copper(I), Cu2+ is copper(II). An older, deprecated notation is to append -ous or -ic to the root of the Latin name to name ions with a lesser or greater charge. Under this naming convention, Cu+ is cuprous and Cu2+ is cupric. For naming metal complexes see the page on complex (chemistry). Oxyanions (polyatomic anions containing oxygen) are named with -ite or -ate, for a lesser or greater quantity of oxygen, respectively. For example, NO2− is nitrite, while NO3− is nitrate. If four oxyanions are possible, the prefixes hypo- and per- are used: hypochlorite is ClO−, perchlorate is ClO4−. The prefix bi- is a deprecated way of indicating the presence of a single hydrogen ion, as in "sodium bicarbonate" (NaHCO3). The modern method specifically names the hydrogen atom. Thus, NaHCO3 would be named sodium hydrogen carbonate. Positively charged ions are called cations and negatively charged ions are called anions. The cation is always named first. Ions can be metals, non-metals or polyatomic ions. Therefore, the name of the metal or positive polyatomic ion is followed by the name of the non-metal or negative polyatomic ion. The positive ion retains its element name whereas for a single non-metal anion the ending is changed to -ide. Example: sodium chloride, potassium oxide, or calcium carbonate. When the metal has more than one possible ionic charge or oxidation number the name becomes ambiguous. In these cases the oxidation number (the same as the charge) of the metal ion is represented by a Roman numeral in parentheses immediately following the metal ion name. For example, in uranium(VI) fluoride the oxidation number of uranium is 6. Another example is the iron oxides. FeO is iron(II) oxide and Fe2O3 is iron(III) oxide.
An older system used prefixes and suffixes to indicate the oxidation number, according to the following scheme: hypo- combined with -ous for the lowest oxidation state, then -ous, then -ic, and per- combined with -ic for the highest. Thus the four oxyacids of chlorine are called hypochlorous acid (HOCl), chlorous acid (HOClO), chloric acid (HOClO2) and perchloric acid (HOClO3), and their respective conjugate bases are hypochlorite, chlorite, chlorate and perchlorate ions. This system has partially fallen out of use, but survives in the common names of many chemical compounds: the modern literature contains few references to "ferric chloride" (instead calling it "iron(III) chloride"), but names like "potassium permanganate" (instead of "potassium manganate(VII)") and "sulfuric acid" abound. Traditional naming Simple ionic compounds An ionic compound is named by its cation followed by its anion. See polyatomic ion for a list of possible ions. For cations that take on multiple charges, the charge is written using Roman numerals in parentheses immediately following the element name. For example, Cu(NO3)2 is copper(II) nitrate, because the charge of two nitrate ions (NO3−) is 2 × −1 = −2, and since the net charge of the ionic compound must be zero, the Cu ion has a 2+ charge. This compound is therefore copper(II) nitrate. In the case of cations with a +4 oxidation state, the only acceptable format for the Roman numeral 4 is IV and not IIII. The Roman numerals in fact show the oxidation number, but in simple ionic compounds (i.e., not metal complexes) this will always equal the ionic charge on the metal. For more details, see the relevant pages of the IUPAC rules for naming inorganic compounds. List of common ion names Monatomic anions: chloride sulfide phosphide Polyatomic ions: ammonium hydronium nitrate nitrite hypochlorite chlorite chlorate perchlorate sulfite sulfate thiosulfate hydrogen sulfite (or bisulfite) hydrogen carbonate (or bicarbonate) carbonate phosphate hydrogen phosphate dihydrogen phosphate chromate dichromate borate arsenate oxalate cyanide thiocyanate permanganate Hydrates Hydrates are ionic compounds that have absorbed water. They are named as the ionic compound followed by a numerical prefix and -hydrate. The numerical prefixes used are listed below (see IUPAC numerical multiplier): mono- di- tri- tetra- penta- hexa- hepta- octa- nona- deca- For example, CuSO4·5H2O is "copper(II) sulfate pentahydrate". Molecular compounds Inorganic molecular compounds are named with a prefix (see list above) before each element. The more electronegative element is written last and with an -ide suffix. For example, H2O (water) can be called dihydrogen monoxide. Organic molecules do not follow this rule. In addition, the prefix mono- is not used with the first element; for example, SO2 is sulfur dioxide, not "monosulfur dioxide". Sometimes prefixes are shortened when the ending vowel of the prefix "conflicts" with a starting vowel in the compound. This makes the name easier to pronounce; for example, CO is "carbon monoxide" (as opposed to "monooxide"). Common exceptions The "a" of the penta- prefix is not dropped before a vowel. As the IUPAC Red Book 2005 page 69 states, "The final vowels of multiplicative prefixes should not be elided (although 'monoxide', rather than 'monooxide', is an allowed exception because of general usage)." There are a number of exceptions and special cases that violate the above rules. Sometimes the prefix is left off the initial atom: I2O5 is known as iodine pentaoxide, but it should be called diiodine pentaoxide.
N2O3 is called nitrogen sesquioxide (sesqui- means one and a half). The main oxide of phosphorus is called phosphorus pentaoxide. It should actually be diphosphorus pentaoxide, but it is assumed that there are two phosphorus atoms (P2O5), as they are needed in order to balance the oxidation numbers of the five oxygen atoms. However, the real form of the molecule has long been known to be P4O10, not P2O5, yet it is not normally called tetraphosphorus decaoxide. In writing formulas, ammonia is NH3 even though nitrogen is more electronegative (in line with the convention used by IUPAC as detailed in Table VI of the red book). Likewise, methane is written as CH4 even though carbon is more electronegative (Hill system). Nomenclature of Inorganic Chemistry Nomenclature of Inorganic Chemistry, commonly referred to by chemists as the Red Book, is a collection of recommendations on IUPAC nomenclature, published at irregular intervals by the IUPAC. The last full edition was published in 2005, in both paper and electronic versions.
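The Stock-style rules described above (anion -ide endings, and a Roman numeral for metals that can take several charges) are mechanical enough to sketch in code. The following snippet is purely illustrative and not an official IUPAC tool; the lookup tables, the function name stock_name, and the set of "single-charge" metals are assumptions made for this example.

```python
# Illustrative sketch only: builds Stock-style names such as "iron(III) oxide"
# for simple binary ionic compounds, following the rules described above.
ROMAN = {1: "I", 2: "II", 3: "III", 4: "IV", 5: "V", 6: "VI", 7: "VII", 8: "VIII"}

# Metals that commonly take a single charge need no Roman numeral (assumed list).
SINGLE_CHARGE_METALS = {"sodium", "potassium", "calcium", "zinc", "aluminium"}

# Monatomic anion names take the -ide ending.
ANION_NAMES = {"O": "oxide", "Cl": "chloride", "S": "sulfide", "F": "fluoride"}


def stock_name(metal: str, metal_charge: int, anion_symbol: str) -> str:
    """Return a Stock-system name for a simple binary ionic compound."""
    anion = ANION_NAMES[anion_symbol]
    if metal.lower() in SINGLE_CHARGE_METALS:
        return f"{metal} {anion}"
    # Metals with several possible charges carry the oxidation number as a Roman numeral.
    return f"{metal}({ROMAN[metal_charge]}) {anion}"


if __name__ == "__main__":
    print(stock_name("iron", 3, "O"))     # iron(III) oxide  (Fe2O3)
    print(stock_name("iron", 2, "O"))     # iron(II) oxide   (FeO)
    print(stock_name("copper", 2, "Cl"))  # copper(II) chloride
    print(stock_name("sodium", 1, "Cl"))  # sodium chloride
```

A fuller implementation would also need the oxyanion -ite/-ate and hypo-/per- rules, the hydrate prefixes, and the many traditional exceptions noted above.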
Physical sciences
Nomenclature
Chemistry
8967165
https://en.wikipedia.org/wiki/Multibody%20system
Multibody system
Multibody system is the study of the dynamic behavior of interconnected rigid or flexible bodies, each of which may undergo large translational and rotational displacements. Introduction The systematic treatment of the dynamic behavior of interconnected bodies has led to a large number of important multibody formalisms in the field of mechanics. The simplest bodies or elements of a multibody system were treated by Newton (free particle) and Euler (rigid body). Euler introduced reaction forces between bodies. Later, a series of formalisms were derived, only to mention Lagrange’s formalisms based on minimal coordinates and a second formulation that introduces constraints. Basically, the motion of bodies is described by their kinematic behavior. The dynamic behavior results from the equilibrium of applied forces and the rate of change of momentum. Nowadays, the term multibody system is related to a large number of engineering fields of research, especially in robotics and vehicle dynamics. As an important feature, multibody system formalisms usually offer an algorithmic, computer-aided way to model, analyze, simulate and optimize the arbitrary motion of possibly thousands of interconnected bodies. Applications While single bodies or parts of a mechanical system are studied in detail with finite element methods, the behavior of the whole multibody system is usually studied with multibody system methods within the following areas: Aerospace engineering (helicopter, landing gears, behavior of machines under different gravity conditions) Biomechanics Combustion engine, gears and transmissions, chain drive, belt drive Dynamic simulation Hoist, conveyor, paper mill Military applications Particle simulation (granular media, sand, molecules) Physics engine Robotics Vehicle simulation (vehicle dynamics, rapid prototyping of vehicles, improvement of stability, comfort optimization, improvement of efficiency, ...) Example The following example shows a typical multibody system. It is usually denoted as slider-crank mechanism. The mechanism is used to transform rotational motion into translational motion by means of a rotating driving beam, a connection rod and a sliding body. In the present example, a flexible body is used for the connection rod. The sliding mass is not allowed to rotate and three revolute joints are used to connect the bodies. While each body has six degrees of freedom in space, the kinematical conditions lead to one degree of freedom for the whole system. The motion of the mechanism can be viewed in the following gif animation: Concept A body is usually considered to be a rigid or flexible part of a mechanical system (not to be confused with the human body). An example of a body is the arm of a robot, a wheel or axle in a car or the human forearm. A link is the connection of two or more bodies, or a body with the ground. The link is defined by certain (kinematical) constraints that restrict the relative motion of the bodies. Typical constraints are: cardan joint or Universal Joint; 4 kinematical constraints prismatic joint; relative displacement along one axis is allowed, constrains relative rotation; implies 5 kinematical constraints revolute joint; only one relative rotation is allowed; implies 5 kinematical constraints; see the example above spherical joint; constrains relative displacements in one point, relative rotation is allowed; implies 3 kinematical constraints There are two important terms in multibody systems: degree of freedom and constraint condition. 
Degree of freedom The degrees of freedom denote the number of independent kinematical possibilities to move. In other words, degrees of freedom are the minimum number of parameters required to completely define the position of an entity in space. A rigid body has six degrees of freedom in the case of general spatial motion, three of them translational degrees of freedom and three rotational degrees of freedom. In the case of planar motion, a body has only three degrees of freedom, with one rotational and two translational degrees of freedom. The degrees of freedom in planar motion can be easily demonstrated using a computer mouse. The degrees of freedom are: left-right, forward-backward and the rotation about the vertical axis. Constraint condition A constraint condition implies a restriction in the kinematical degrees of freedom of one or more bodies. The classical constraint is usually an algebraic equation that defines the relative translation or rotation between two bodies. There are furthermore possibilities to constrain the relative velocity between two bodies or a body and the ground. This is for example the case of a rolling disc, where the point of the disc that contacts the ground always has zero relative velocity with respect to the ground. In the case that the velocity constraint condition cannot be integrated in time in order to form a position constraint, it is called non-holonomic. This is the case for the general rolling constraint. In addition, there are non-classical constraints that might even introduce a new unknown coordinate, such as a sliding joint, where a point of a body is allowed to move along the surface of another body. In the case of contact, the constraint condition is based on inequalities and therefore such a constraint does not permanently restrict the degrees of freedom of bodies. Equations of motion The equations of motion are used to describe the dynamic behavior of a multibody system. Each multibody system formulation may lead to a different mathematical appearance of the equations of motion, while the underlying physics is the same. The motion of the constrained bodies is described by means of equations that result basically from Newton's second law. The equations are written for general motion of the single bodies with the addition of constraint conditions. Usually the equations of motion are derived from the Newton-Euler equations or Lagrange's equations. The motion of rigid bodies is described by means of M(q) q̈ − Q_v + C_qᵀ λ = F (1) and C(q, t) = 0 (2). These types of equations of motion are based on so-called redundant coordinates, because the equations use more coordinates than degrees of freedom of the underlying system. The generalized coordinates are denoted by q; the mass matrix is represented by M(q), which may depend on the generalized coordinates. C represents the constraint conditions and the matrix C_q (sometimes termed the Jacobian) is the derivative of the constraint conditions with respect to the coordinates. This matrix is used to apply constraint forces to the corresponding equations of the bodies. The components of the vector λ are also denoted as Lagrange multipliers, and F collects the applied forces. In a rigid body, the coordinates q could be split into two parts, q = (u, ψ), where u represents the translations and ψ describes the rotations. Quadratic velocity vector In the case of rigid bodies, the so-called quadratic velocity vector Q_v is used to describe Coriolis and centrifugal terms in the equations of motion.
The name comes from the fact that Q_v includes quadratic terms of the velocities, and it results from partial derivatives of the kinetic energy of the body. Lagrange multipliers The Lagrange multiplier is related to a constraint condition and usually represents a force or a moment, which acts in the "direction" of the constrained degree of freedom. The Lagrange multipliers do no "work", as compared to external forces that change the potential energy of a body. Minimal coordinates The equations of motion (1,2) are represented by means of redundant coordinates, meaning that the coordinates are not independent. This can be exemplified by the slider-crank mechanism shown above, where each body has six degrees of freedom while most of the coordinates are dependent on the motion of the other bodies. For example, 18 coordinates and 17 constraints could be used to describe the motion of the slider-crank with rigid bodies. However, as there is only one degree of freedom, the equation of motion could also be represented by means of one equation and one degree of freedom, using e.g. the angle of the driving link as the degree of freedom. The latter formulation then has the minimum number of coordinates needed to describe the motion of the system and can thus be called a minimal coordinates formulation. The transformation of redundant coordinates to minimal coordinates is sometimes cumbersome and only possible in the case of holonomic constraints and without kinematical loops. Several algorithms have been developed for the derivation of minimal coordinate equations of motion, notably the so-called recursive formulation. The resulting equations are easier to solve because, in the absence of constraint conditions, standard time integration methods can be used to integrate the equations of motion in time. While the reduced system might be solved more efficiently, the transformation of the coordinates might be computationally expensive. In very general multibody system formulations and software systems, redundant coordinates are used in order to make the systems user-friendly and flexible. Flexible multibody There are several cases in which it is necessary to consider the flexibility of the bodies, for example where flexibility plays a fundamental role in the kinematics, as well as in compliant mechanisms. Flexibility can be taken into account in different ways. There are three main approaches: Discrete flexible multibody: the flexible body is divided into a set of rigid bodies connected by elastic stiffnesses representative of the body's elasticity. Modal condensation: elasticity is described through a finite number of vibration modes of the body, using the degrees of freedom linked to the amplitudes of those modes. Full flexibility: all of the body's flexibility is taken into account by discretizing the body into sub-elements whose individual displacements are governed by the elastic material properties.
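To make the redundant-coordinate equations (1) and (2) concrete, the following sketch (not part of the original article; the use of NumPy, the function names, and the simple semi-implicit Euler integrator are assumptions made for this example) simulates a planar pendulum modelled as a point mass with a single distance constraint. The constraint Jacobian applies the constraint force through the Lagrange multiplier, and the augmented linear system is solved at every time step.

```python
# A minimal sketch of the redundant-coordinate formulation
#   M q'' + C_q^T lambda = F,  C(q) = 0
# for a planar pendulum treated as a constrained point mass (illustrative only).
import numpy as np

m, g, L = 1.0, 9.81, 1.0          # mass, gravity, pendulum length (assumed values)
M = np.diag([m, m])               # mass matrix for q = (x, y)

def constraint(q):
    """C(q) = x^2 + y^2 - L^2 = 0 keeps the mass at distance L from the pivot."""
    return q[0]**2 + q[1]**2 - L**2

def jacobian(q):
    """C_q = dC/dq, used both for the constraint force and the acceleration equation."""
    return np.array([[2.0 * q[0], 2.0 * q[1]]])

def accelerations(q, qd):
    """Solve the augmented system [[M, C_q^T], [C_q, 0]] [q'', lambda] = [F, gamma]."""
    F = np.array([0.0, -m * g])              # applied force (gravity)
    Cq = jacobian(q)
    gamma = np.array([-2.0 * (qd @ qd)])     # from differentiating C(q) = 0 twice
    A = np.block([[M, Cq.T], [Cq, np.zeros((1, 1))]])
    rhs = np.concatenate([F, gamma])
    sol = np.linalg.solve(A, rhs)
    return sol[:2], sol[2]                   # accelerations and Lagrange multiplier

# Semi-implicit Euler integration of the constrained point mass.
q, qd, dt = np.array([L, 0.0]), np.zeros(2), 1e-3
for _ in range(5000):
    qdd, lam = accelerations(q, qd)
    qd += dt * qdd
    q += dt * qd

print("drift in C(q):", constraint(q))       # small, but grows without stabilization
```

Differentiating the position constraint twice converts it into an acceleration-level equation; production multibody codes typically add constraint stabilization (for example a Baumgarte-type scheme) or use specialised DAE integrators to limit the drift visible in the last line.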
Mathematics
Dynamical systems
null
8967210
https://en.wikipedia.org/wiki/Flemish%20Giant%20rabbit
Flemish Giant rabbit
The Flemish Giant rabbit () is the largest breed of domestic rabbit (Oryctolagus cuniculus domesticus). They weigh 6.8 kilograms (15 lb) on average, though the largest ones can weigh up to 22 kilograms (49 lb). Historically they are a utility breed used for their fur and meat. In the modern day, they are no longer commonly raised for meat, due to their slow growth and very large bones, and are raised for exhibition at rabbit shows. They are often kept as pets as they are known for being docile and patient when being handled. History The Flemish Giant originated in Flanders. It was bred as early as the 16th century near the city of Ghent, Belgium. It is believed to have descended from a number of meat and fur breeds, possibly including the ("Stone Rabbit"—referring to the old Belgian weight size of one stone or about ) and the European "Patagonian" breed (now extinct). This "Patagonian" rabbit, a large breed that was once bred in Belgium and France, was not the same as the Patagonian rabbit of Argentina (Sylvilagus brasiliensis), a wild species of a different genus weighing less than , nor the Patagonian mara (Dolichotis patagonum), sometimes called the Patagonian hare, a species in the cavy family of rodents that cannot interbreed with rabbits. Thomas Coatoam, in his Origins of the Flemish Giants, states that "The earliest authentic record of the Flemish Giant Rabbit occurred about the year 1860, in which the veterinarian and ex-biologist, Oscar Nisbett selectively bred a series of generations of Patagonian rabbit." The first standards for the breed were written in 1893 by Albert van Heuverzwijn. The Flemish Giant is an ancestor of many rabbit breeds from all over the world, one of which is the Belgian Hare, which was imported into England in the mid-19th century. The Flemish Giant was exported from England and Belgium to America in the early 1890s to increase the size of meat rabbits during the great "rabbit boom". In the British Isles, the breed developed to such a degree that it was recognized as distinct from the Continental Giant rabbit as of 1937. The breed received little attention in the United States until about 1910, when it started appearing at small livestock shows throughout the country. Today, it is one of the more popular breeds at rabbit shows due to its unusually large size and varying colors. It is promoted by the National Federation of Flemish Giant Rabbit Breeders, which was formed in 1915. The Flemish Giant has many nicknames, including the "Gentle Giant" for its uniquely docile personality, and the "universal rabbit" for its varied purposes as a pet, show, breeding, meat, and fur animal. Flemish giants are popular as pets, especially in Europe and North America. Although they are large, they are known to exhibit cleanliness and can be trained to use a litter box. Appearance As one of the largest breeds of domestic rabbit, the Flemish Giant is a semi-arch type rabbit with its back arch starting behind the shoulders and carrying through to the base of the tail, giving a "mandolin" shape. The body of a Flemish Giant Rabbit is long and powerful, with relatively broad hindquarters. The fur of the Flemish Giant is glossy and dense. When stroked from the hindquarters to the head, the fur will roll back to its original position. Bucks have a broad, massive head in comparison to does, and can take 1.5 years to reach full maturity. Does may have a large, full, evenly carried dewlap (the fold of skin under their chins), and can take 1 year to reach full maturity. 
The American Rabbit Breeders Association (ARBA) standard recognizes seven different colors for the breed: black, blue, fawn, sandy, light gray, steel gray, and white. The show standard minimum weight for a senior doe is , and the show standard minimum weight of a senior buck is . Behaviour and lifestyle Flemish Giants can be docile and tolerant of being handled if they frequently have interactions with humans. Breeding The gestation period is between 28 and 31 days. On average, they give birth at 30–32 days. The Flemish Giant rabbit can produce large litters, usually between 5 and 12 in a litter. 4-H and show Flemish Giants, due to their uncomplicated grooming requirements and docile personalities, are used by 4-H programs throughout the United States as a starter rabbit for teaching children responsibility and care of farm animals and pets. Another popular youth program outside 4-H that promotes responsible show breeding is the National Federation of Flemish Giant Breeders Youth Program. Flemish Giants are the second-oldest domesticated rabbit breed in the United States, following behind the now rare Belgian Hare.
Biology and health sciences
Rabbits
Animals
8967572
https://en.wikipedia.org/wiki/Polyisoprene
Polyisoprene
Polyisoprene is strictly speaking a collective name for polymers that are produced by polymerization of isoprene. In practice polyisoprene is commonly used to refer to synthetic cis-1,4-polyisoprene, made by the industrial polymerisation of isoprene. Natural forms of polyisoprene are also used in substantial quantities, the most important being "natural rubber" (mostly cis-1,4-polyisoprene), which is derived from the sap of trees. Both synthetic polyisoprene and natural rubber are highly elastic and are consequently used to make tires and a variety of other products. The trans isomer, which is much harder than the cis isomer, has also seen significant use in the past. It too has been synthesised and extracted from plant sap, the latter resin being known as gutta-percha. These were widely used as electrical insulation and as components of golf balls. Annual worldwide production of synthetic polyisoprene was 13 million tons in 2007 and 16 million tons in 2020. Synthesis In principle, the polymerization of isoprene can result in four different isomers. The relative amount of each isomer in the polymer is dependent on the mechanism of the polymerization reaction. Anionic chain polymerization, which is initiated by n-butyllithium, produces polyisoprene dominated by cis-1,4 units: 90–92% of the repeating units are cis-1,4-, 2–3% trans-1,4- and 6–7% 3,4-units. Coordinative chain polymerization: With the Ziegler–Natta catalyst TiCl4/Al(i-C4H9)3, a purer cis-1,4-polyisoprene similar to natural rubber is formed. With the Ziegler–Natta catalyst VCl3/Al(i-C4H9)3, trans-dominant polyisoprene is formed. Polyisoprene dominated by 1,2- and 3,4-units is produced with a MoO2Cl2 catalyst supported by a phosphorus ligand and an Al(OPhCH3)(i-Bu)2 co-catalyst. History The first reported commercialisation of a stereoregular poly-1,4-isoprene with > 90% cis (90% to 92%) was in 1960 by the Shell Chemical Company. Shell used an alkyl lithium catalyst. The 90% cis-1,4 material proved insufficiently crystalline to be useful. In 1962, Goodyear succeeded in making a 98.5% cis polymer using a Ziegler-Natta catalyst, and this went on to commercial success. Usage Natural rubber and synthetic polyisoprene are used primarily for tires. Other applications include latex products, footwear, belting and hoses, and condoms. Natural gutta-percha and synthetic trans-1,4-polyisoprene were used for golf balls.
Physical sciences
Polymers
Chemistry
8971972
https://en.wikipedia.org/wiki/Sea%20cave
Sea cave
A sea cave, also known as a littoral cave, is a type of cave formed primarily by the wave action of the sea. The primary process involved is erosion. Sea caves are found throughout the world, actively forming along present coastlines and as relict sea caves on former coastlines. Some of the largest wave-cut caves in the world are found on the coast of Norway, but are now 100 feet or more above present sea level. These would still be classified as littoral caves. By contrast, in places like Thailand's Phang Nga Bay, solutionally formed caves in limestone have been flooded by the rising sea and are now subject to littoral erosion, representing a new phase of their enlargement. Some of the best-known sea caves are European. Fingal's Cave, on the island of Staffa in Scotland, is a spacious cave some 70 m long, formed in columnar basalt. The Blue Grotto of Capri, although smaller, is famous for the apparent luminescent quality of its water, imparted by light passing through underwater openings. The Romans built a stairway in its rear and a now-collapsed tunnel to the surface. The Greek islands are also noted for the variety and beauty of their sea caves. Numerous sea caves have been surveyed in England, Scotland, and France, particularly on the Normandy coast. Until 2013, the largest known sea caves were found along the west coast of the United States, the Hawaiian islands, and the Shetland Islands. In 2013 the discovery and survey of the world's largest sea cave was announced. Matainaka Cave – located on the Otago coast of New Zealand's South Island – has proven to be the world's most extensive at 1.5 km in length. Also in 2013, Crossley reported a newly surveyed complex totalling just over a kilometer of passage at Bethells Beach on New Zealand's North Island. Formation Littoral caves may be found in a wide variety of host rocks, ranging from sedimentary to metamorphic to igneous, but caves in the latter tend to be larger due to the greater strength of the host rock. However, there are some notable exceptions as discussed below. In order to form a sea cave, the host rock must first contain a weak zone. In metamorphic or igneous rock, this is typically either a fault as in the caves of the Channel Islands of California, or a dike as in the large sea caves of Kauai, Hawaii's Na Pali Coast. In sedimentary rocks, this may be a bedding-plane parting or a contact between layers of different hardness. The latter may also occur in igneous rocks, such as in the caves on Santa Cruz Island, California, where waves have attacked the contact between the andesitic basalt and the agglomerate. The driving force in littoral cave development is wave action. Erosion is ongoing anywhere that waves batter rocky coasts, but where sea cliffs contain zones of weakness, rock is removed at a greater rate along these zones. As the sea reaches into the fissures thus formed, they begin to widen and deepen due to the tremendous force exerted within a confined space, not only by direct action of the surf and any rock particles that it bears, but also by compression of air within. Blowholes (partially submerged caves that eject large sprays of sea water as waves retreat and allow rapid re-expansion of air compressed within) attest to this process. Adding to the hydraulic power of the waves is the abrasive force of suspended sand and rock. Most sea-cave walls are irregular and chunky, reflecting an erosional process where the rock is fractured piece by piece.
However, some caves have portions where the walls are rounded and smoothed, typically floored with cobbles, and result from the swirling motion of these cobbles in the surf zone. True littoral caves should not be confused with inland caves that have been intersected and revealed when a sea cliff line is eroded back, or with dissolutional voids formed in the littoral zone on tropical islands. In some regions, such as Halong Bay, Vietnam, caves in carbonate rocks are found in littoral zones, and being enlarged by littoral processes but were originally formed by dissolution. Such caves have been termed as hybrid caves. Rainwater may also influence sea-cave formation. Carbonic and organic acids leached from the soil may assist in weakening rock within fissures. As in solutional caves, small speleothems may develop in sea caves. Sea cave chambers sometimes collapse leaving a “littoral sinkhole”. These may be quite large, such as Oregon’s Devils Punch Bowl or the Queen's Bath on the Na Pali coast. Small peninsulas or headlands often have caves that cut completely through them, since they are subject to attack from both sides, and the collapse of a sea cave tunnel can leave a free-standing “sea stack” along the coast. The Californian island of Anacapa is thought to have been split into three islets by such a process. Life within sea caves may assist in their enlargement as well. For example, sea urchins drill their way into the rock, and over successive generations may remove considerable bedrock from the floors and lower walls. Factors influencing size Most sea caves are small in relation to other types. A compilation of sea-cave surveys as of July 2014 shows 2 over 1000 meters, 6 over 400 meters, nine over 300 meters, 25 over 200 meters, and 108 over 100 meters in length. In Norway, several apparently relict sea caves exceed 300 meters in length. There is no doubt that many other large sea caves exist but have not been investigated due to their remote locations and/or hostile sea conditions. Several factors contribute to the development of relatively large sea caves. The nature of the zone of weakness itself is surely a factor, although difficult to quantify. A more readily observed factor is the situation of the cave's entrance relative to prevailing sea conditions. At Santa Cruz Island, the largest caves face into the prevailing northwest swell conditions—a factor which also makes them more difficult to survey. Caves in well-protected bays sheltered from prevailing seas and winds tend to be smaller, as are caves in areas where the seas tend to be calmer. The type of host rock is important as well. Most of the large sea caves on the Western U.S. coast and Hawaii are in basalt, a strong host rock compared to sedimentary rock. Basaltic caves can penetrate far into cliffs where most of the surface erodes relatively slowly. In weaker rock, erosion along a weaker zone may not greatly outstrip that of the cliff face. However, the world's largest sea cave has formed in the heavily fractured Caversham sandstone (Barth, 2013) changing our understanding of which host rocks can form large sea caves. Time is another factor. The active littoral zone changes throughout geological time by an interplay between sea-level change and regional uplift. Recurrent ice ages during the Pleistocene have changed sea levels within a vertical range of some 200 meters. Significant sea caves have formed in the California Channel Islands that are now totally submerged by the rise in sea levels over the last 12 000 years. 
In regions of steady uplift, continual littoral erosion may produce sea caves of great height — Painted Cave is almost 40 m high at its entrance. On the Norwegian coast there are huge sea caves now uplifted 30 or more meters above sea level. Sediment dating in the largest of these (Halvikshulen in Osen, 340 m long) shows that it was formed over a period of at least a million years. It may well be the longest wave-cut cave in the world. The largest cave by volume is Rikoriko Cave in the Poor Knights Islands in New Zealand with 221,494 m3. Finally, caves that are larger tend to be more complex. By far the majority of sea caves consist of a single passage or chamber. Those formed on faults tend to have canyon-like or angled passages that are very straight. In Seal Canyon Cave on Santa Cruz Island, entrance light is still visible from the back of the cave 189 m from the entrance. By contrast, caves formed along horizontal bedding planes tend to be wider with lower ceiling heights. In some areas, sea caves may have dry upper levels, lifted above the active littoral zone by regional uplift. Sea caves can prove surprisingly complex where numerous zones of weakness—often faults—converge. In Catacombs Cave on Anacapa Island (California), at least six faults intersect. In several caves of the Californian Channel Islands, long fissure passages open up into large chambers beyond. This is invariably associated with intersection of a second fault oriented almost perpendicularly to that along the entrance passage. When caves have multiple entrances, they are exposed to more wave action and hence may grow relatively faster. There is an exceptionally large cave underlying the Fogla Skerry, an islet off the coast of Papa Stour, in the Shetland Islands. Though unsurveyed, estimates place it at almost 500 m of passage. Matainaka Cave in New Zealand has 12 separate entrances into which waves can penetrate and numerous joints along which intersecting passages have developed. Bibliography
Physical sciences
Oceanic and coastal landforms
Earth science
181334
https://en.wikipedia.org/wiki/Discrete%20logarithm
Discrete logarithm
In mathematics, for given real numbers a and b, the logarithm log_b a is a number x such that b^x = a. Analogously, in any group G, powers b^k can be defined for all integers k, and the discrete logarithm log_b a is an integer k such that b^k = a. In number theory, the more commonly used term is index: we can write x = ind_r a (mod m) (read "the index of a to the base r modulo m") for r^x ≡ a (mod m) if r is a primitive root of m and gcd(a,m) = 1. Discrete logarithms are quickly computable in a few special cases. However, no efficient method is known for computing them in general. In cryptography, the computational complexity of the discrete logarithm problem, along with its application, was first proposed in the Diffie–Hellman problem. Several important algorithms in public-key cryptography, such as ElGamal, base their security on the hardness assumption that the discrete logarithm problem (DLP) over carefully chosen groups has no efficient solution. Definition Let G be any group. Denote its group operation by multiplication and its identity element by 1. Let b be any element of G. For any positive integer k, the expression b^k denotes the product of b with itself k times: b^k = b · b · ⋯ · b (with k factors). Similarly, let b^(−k) denote the product of b^(−1) with itself k times. For k = 0, the kth power is the identity: b^0 = 1. Let a also be an element of G. An integer k that solves the equation b^k = a is termed a discrete logarithm (or simply logarithm, in this context) of a to the base b. One writes k = log_b a. Examples Powers of 10 The powers of 10 are …, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000, …. For any number a in this list, one can compute log_10 a. For example, log_10 10000 = 4, and log_10 0.001 = −3. These are instances of the discrete logarithm problem. Other base-10 logarithms in the real numbers are not instances of the discrete logarithm problem, because they involve non-integer exponents. For example, the equation log_10 53 = 1.724276… means that 10^1.724276… = 53. While integer exponents can be defined in any group using products and inverses, arbitrary real exponents, such as this 1.724276…, require other concepts such as the exponential function. In group-theoretic terms, the powers of 10 form a cyclic group G under multiplication, and 10 is a generator for this group. The discrete logarithm log_10 a is defined for any a in G. Powers of a fixed real number A similar example holds for any non-zero real number b. The powers form a multiplicative subgroup G = {…, b^(−3), b^(−2), b^(−1), 1, b^1, b^2, b^3, …} of the non-zero real numbers. For any element a of G, one can compute log_b a. Modular arithmetic One of the simplest settings for discrete logarithms is the group Zp×. This is the group of multiplication modulo the prime p. Its elements are non-zero congruence classes modulo p, and the group product of two elements may be obtained by ordinary integer multiplication of the elements followed by reduction modulo p. The kth power of one of the numbers in this group may be computed by finding its kth power as an integer and then finding the remainder after division by p. When the numbers involved are large, it is more efficient to reduce modulo p multiple times during the computation. Regardless of the specific algorithm used, this operation is called modular exponentiation. For example, consider Z17×. To compute 3^4 in this group, compute 3^4 = 81, and then divide 81 by 17, obtaining a remainder of 13. Thus 3^4 = 13 in the group Z17×. The discrete logarithm is just the inverse operation. For example, consider the equation 3^k ≡ 13 (mod 17).
From the example above, one solution is k = 4, but it is not the only solution. Since 3^16 ≡ 1 (mod 17)—as follows from Fermat's little theorem—it also follows that if n is an integer then 3^(4+16n) ≡ 3^4 × (3^16)^n ≡ 13 × 1^n ≡ 13 (mod 17). Hence the equation has infinitely many solutions of the form 4 + 16n. Moreover, because 16 is the smallest positive integer m satisfying 3^m ≡ 1 (mod 17), these are the only solutions. Equivalently, the set of all possible solutions can be expressed by the constraint that k ≡ 4 (mod 16). Powers of the identity In the special case where b is the identity element 1 of the group G, the discrete logarithm log_b a is undefined for a other than 1, and every integer k is a discrete logarithm for a = 1. Properties Powers obey the usual algebraic identity b^(k+l) = b^k b^l. In other words, the function defined by f(k) = b^k is a group homomorphism from the integers Z under addition onto the subgroup H of G generated by b. For all a in H, log_b a exists. Conversely, log_b a does not exist for a that are not in H. If H is infinite, then log_b a is also unique, and the discrete logarithm amounts to a group isomorphism log_b : H → Z. On the other hand, if H is finite of order n, then log_b a is unique only up to congruence modulo n, and the discrete logarithm amounts to a group isomorphism log_b : H → Z_n, where Z_n denotes the additive group of integers modulo n. The familiar base change formula for ordinary logarithms remains valid: if c is another generator of H, then log_c a = log_c b · log_b a. Algorithms The discrete logarithm problem is considered to be computationally intractable. That is, no efficient classical algorithm is known for computing discrete logarithms in general. A general algorithm for computing log_b a in finite groups G is to raise b to larger and larger powers k until the desired a is found. This algorithm is sometimes called trial multiplication. It requires running time linear in the size of the group G and thus exponential in the number of digits in the size of the group. Therefore, it is an exponential-time algorithm, practical only for small groups G. More sophisticated algorithms exist, usually inspired by similar algorithms for integer factorization. These algorithms run faster than the naïve algorithm, some of them proportional to the square root of the size of the group, and thus exponential in half the number of digits in the size of the group. However, none of them runs in polynomial time (in the number of digits in the size of the group). Baby-step giant-step Function field sieve Index calculus algorithm Number field sieve Pohlig–Hellman algorithm Pollard's rho algorithm for logarithms Pollard's kangaroo algorithm (aka Pollard's lambda algorithm) There is an efficient quantum algorithm due to Peter Shor. Efficient classical algorithms also exist in certain special cases. For example, in the group of the integers modulo p under addition, the power b^k becomes the product b·k, and equality means congruence modulo p in the integers. The extended Euclidean algorithm finds k quickly. With Diffie–Hellman, a cyclic group modulo a prime p is used, allowing an efficient computation of the discrete logarithm with Pohlig–Hellman if the order of the group (being p−1) is sufficiently smooth, i.e. has no large prime factors.
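To make the square-root-time idea concrete, here is a small baby-step giant-step sketch (an illustration, not taken from the article; the helper name bsgs and its interface are assumptions). It trades O(√p) memory for O(√p) time by meeting in the middle between precomputed "baby" powers and "giant" strides, and reproduces the worked example 3^k ≡ 13 (mod 17).

```python
# A minimal baby-step giant-step sketch for discrete logs in Zp× (illustrative only).
from math import isqrt

def bsgs(b, a, p):
    """Return k with b^k ≡ a (mod p), or None; roughly O(sqrt(p)) time and memory."""
    m = isqrt(p) + 1
    # Baby steps: store b^j mod p for j = 0 .. m-1.
    table = {pow(b, j, p): j for j in range(m)}
    # Giant steps: look for a * (b^-m)^i in the table.
    factor = pow(b, -m, p)            # modular inverse of b^m (Python 3.8+)
    gamma = a % p
    for i in range(m):
        if gamma in table:
            return i * m + table[gamma]
        gamma = (gamma * factor) % p
    return None

k = bsgs(3, 13, 17)
print(k, pow(3, k, 17))               # prints "4 13", matching the worked example above
```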
Comparison with integer factorization While computing discrete logarithms and integer factorization are distinct problems, they share some properties: both are special cases of the hidden subgroup problem for finite abelian groups, both problems seem to be difficult (no efficient algorithms are known for non-quantum computers), for both problems efficient algorithms on quantum computers are known, algorithms from one problem are often adapted to the other, and the difficulty of both problems has been used to construct various cryptographic systems. Cryptography There exist groups for which computing discrete logarithms is apparently difficult. In some cases (e.g. large prime order subgroups of groups Zp×) there is not only no efficient algorithm known for the worst case, but the average-case complexity can be shown to be about as hard as the worst case using random self-reducibility. At the same time, the inverse problem of discrete exponentiation is not difficult (it can be computed efficiently using exponentiation by squaring, for example). This asymmetry is analogous to the one between integer factorization and integer multiplication. Both asymmetries (and other possibly one-way functions) have been exploited in the construction of cryptographic systems. Popular choices for the group G in discrete logarithm cryptography (DLC) are the cyclic groups Zp× (e.g. ElGamal encryption, Diffie–Hellman key exchange, and the Digital Signature Algorithm) and cyclic subgroups of elliptic curves over finite fields (see Elliptic curve cryptography). While there is no publicly known algorithm for solving the discrete logarithm problem in general, the first three steps of the number field sieve algorithm only depend on the group G, not on the specific elements of G whose finite log is desired. By precomputing these three steps for a specific group, one need only carry out the last step, which is much less computationally expensive than the first three, to obtain a specific logarithm in that group. It turns out that much internet traffic uses one of a handful of groups that are of order 1024 bits or less, e.g. cyclic groups with order of the Oakley primes specified in RFC 2409. The Logjam attack used this vulnerability to compromise a variety of internet services that allowed the use of groups whose order was a 512-bit prime number, so called export grade. The authors of the Logjam attack estimate that the much more difficult precomputation needed to solve the discrete log problem for a 1024-bit prime would be within the budget of a large national intelligence agency such as the U.S. National Security Agency (NSA). The Logjam authors speculate that precomputation against widely reused 1024 DH primes is behind claims in leaked NSA documents that NSA is able to break much of current cryptography.
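As a small illustration of why the hardness of the discrete logarithm matters here, the following toy Diffie–Hellman sketch (not from the article; the prime, the generator, and the variable names are illustrative assumptions, and the parameters are far too small to be secure) shows that each party transmits only a group element, while recovering the shared secret from the public values would require solving a discrete logarithm.

```python
# Toy Diffie-Hellman exchange over Zp× (illustrative only; real deployments use
# large, carefully chosen groups, not a 32-bit prime like this one).
import secrets

p = 0xFFFFFFFB                      # a small prime, far too small for real security
g = 7                               # assumed generator for this toy example

a = secrets.randbelow(p - 2) + 1    # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1    # Bob's secret exponent

A = pow(g, a, p)                    # Alice sends g^a mod p
B = pow(g, b, p)                    # Bob sends g^b mod p

# Both sides derive the same shared secret; an eavesdropper sees only p, g, A, B
# and would need to solve a discrete logarithm to recover a or b.
assert pow(B, a, p) == pow(A, b, p)
print(hex(pow(B, a, p)))
```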
Mathematics
Modular arithmetic
null
181554
https://en.wikipedia.org/wiki/Period%204%20element
Period 4 element
A period 4 element is one of the chemical elements in the fourth row (or period) of the periodic table of the chemical elements. The periodic table is laid out in rows to illustrate recurring (periodic) trends in the chemical behaviour of the elements as their atomic number increases: a new row is begun when chemical behaviour begins to repeat, meaning that elements with similar behaviour fall into the same vertical columns. The fourth period contains 18 elements beginning with potassium and ending with krypton – one element for each of the eighteen groups. It sees the first appearance of the d-block (which includes the transition metals) in the table. Properties All 4th-period elements are stable, and many are extremely common in the Earth's crust and/or core; it is the last period with no unstable elements. Many transition metals in the period are very strong, and therefore common in industry, especially iron. Some are toxic, with all known vanadium compounds toxic, arsenic one of the most well-known poisons, and bromine a toxic liquid. Conversely, many elements are essential to human survival, such as calcium, the main component in bones. Atomic structure As atomic number increases across the period, the Aufbau principle causes its elements to fill the 4s, 3d, and 4p subshells, in that order. However, there are exceptions, such as chromium. The first twelve elements—K, Ca, and the transition metals—have from 1 to 12 valence electrons respectively, which are placed on 4s and 3d. Twelve electrons over the electron configuration of argon reach the configuration of zinc, namely 3d10 4s2. After this element, the filled 3d subshell effectively withdraws from chemistry and the subsequent trend looks much like trends in periods 2 and 3. The p-block elements of period 4 have their valence shell composed of the 4s and 4p subshells of the fourth (n = 4) shell and obey the octet rule. From the standpoint of quantum chemistry, this period sees the transition from the simplified electron-shell picture to the study of many differently shaped subshells. The relative disposition of their energy levels is governed by the interplay of various physical effects. The period's s-block metals put their differentiating electrons onto 4s despite having vacancies among nominally lower states – a phenomenon unseen in lighter elements. Conversely, the six elements from gallium to krypton are the heaviest elements in which all electron shells below the valence shell are filled completely. This is no longer possible in further periods due to the existence of f-subshells starting from n = 4. List of elements {| class="wikitable sortable" |- ! colspan="3" | Chemical element !! Block !! Electron configuration
|- bgcolor="" || 19 || K || Potassium || s-block || [Ar] 4s1 |- bgcolor="" || 20 || Ca || Calcium || s-block || [Ar] 4s2 |- bgcolor="" || 21 || Sc || Scandium || d-block || [Ar] 3d1 4s2 |- bgcolor="" || 22 || Ti || Titanium || d-block || [Ar] 3d2 4s2 |- bgcolor="" || 23 || V || Vanadium || d-block || [Ar] 3d3 4s2 |- bgcolor="" || 24 || Cr || Chromium || d-block || [Ar] 3d5 4s1 (*) |- bgcolor="" || 25 || Mn || Manganese || d-block || [Ar] 3d5 4s2 |- bgcolor="" || 26 || Fe || Iron || d-block || [Ar] 3d6 4s2 |- bgcolor="" || 27 || Co || Cobalt || d-block || [Ar] 3d7 4s2 |- bgcolor="" || 28 || Ni || Nickel || d-block || [Ar] 3d8 4s2 |- bgcolor="" || 29 || Cu || Copper || d-block || [Ar] 3d10 4s1 (*) |- bgcolor="" || 30 || Zn || Zinc || d-block || [Ar] 3d10 4s2 |- bgcolor="" || 31 || Ga || Gallium || p-block || [Ar] 3d10 4s2 4p1 |- bgcolor="" || 32 || Ge || Germanium || p-block || [Ar] 3d10 4s2 4p2 |- bgcolor="" || 33 || As || Arsenic || p-block || [Ar] 3d10 4s2 4p3 |- bgcolor="" || 34 || Se || Selenium || p-block || [Ar] 3d10 4s2 4p4 |- bgcolor="" || 35 || Br || Bromine || p-block || [Ar] 3d10 4s2 4p5 |- bgcolor="" || 36 || Kr || Krypton || p-block || [Ar] 3d10 4s2 4p6 |} (*) Exception to the Madelung rule s-block elements Potassium Potassium (K) is an alkali metal, underneath sodium and above rubidium, and the first element of period 4. One of the most reactive chemical elements, it is usually found only in compounds. It is a silvery metal that tarnishes rapidly when exposed to the oxygen in air, which oxidizes it. It is soft enough to be cut with a knife and the second least-dense element. Potassium has a relatively low melting point; it will melt under a small open flame. It also is less dense than water, and can, in principle, float (although it will react with any water it is exposed to). Calcium Calcium (Ca) is the second element in the period. An alkali earth metal, native calcium is almost never found in nature, because it reacts with water. It has one of the most widely-known biological roles in all animals and some plants, making up structural elements such as bones and teeth. It also has applications in cells, such as signals for cellular processeses. It is regarded as the most abundant mineral in the human body. d-block elements Scandium Scandium (Sc) is the third element in the period, and is the first transition metal in the periodic table. Scandium is quite common in nature, but difficult to isolate because its chemistry mirrors that of the other rare earth compounds quite closely. Scandium has very few commercial applications, the major exception being aluminium alloys. Titanium Titanium (Ti) is an element in group 4. Titanium is both one of the least dense metals and one of the strongest and most corrosion-resistant. As such, it has many applications, especially in alloys with other elements, such as iron. It is commonly used in airplanes, golf clubs, and other objects that must be strong, but lightweight. Vanadium Vanadium (V) is an element in group 5. Vanadium is never found in pure form in nature, but is commonly found in compounds. Vanadium is similar to titanium in many ways, such as being very corrosion-resistant, however, unlike titanium, it oxidizes in air even at room temperature. All vanadium compounds have at least some level of toxicity, with some of them being extremely toxic. Chromium Chromium (Cr) is an element in group 6. 
Chromium is, like titanium and vanadium before it, extremely resistant to corrosion, and is indeed one of the main components of stainless steel. Chromium also has many colorful compounds, and as such is very commonly used in pigments, such as chrome green. Manganese Manganese (Mn) is an element in group 7. Manganese is often found in combination with iron. Manganese, like chromium before it, is an important component in stainless steel, preventing the iron from rusting. Manganese is also often used in pigments, again like chromium. Manganese is also poisonous; if enough is inhaled, it can cause irreversible neurological damage. Iron Iron (Fe) is an element in group 8. Iron is the most common on Earth among elements of the period, and probably the most well-known of them. It is the principal component of steel. Iron-56 has the lowest energy density of any isotope of any element, meaning that it is the most massive element that can be produced in supergiant stars. Iron also has some applications in the human body; hemoglobin is partly iron. Cobalt Cobalt (Co) is an element in group 9. Cobalt is commonly used in pigments, as many compounds of cobalt are blue in color. Cobalt is also a core component of many magnetic and high-strength alloys. The only stable isotope, cobalt-59, is an important component of vitamin B-12, while cobalt-60 is a component of nuclear fallout and can be dangerous in large enough quantities due to its radioactivity. Nickel Nickel (Ni) is an element in group 10. Nickel is rare in the Earth's crust, mainly due to the fact that it reacts with oxygen in the air, with most of the nickel on Earth coming from nickel iron meteorites. However, nickel is very abundant in the Earth's core; along with iron it is one of the two main components. Nickel is an important component of stainless steel, and in many superalloys. Copper Copper (Cu) is an element in group 11. Copper is one of the few metals that is not white or gray in color, the only others being gold, osmium and caesium. Copper has been used by humans for thousands of years to provide a reddish tint to many objects, and is even an essential nutrient to humans, although too much is poisonous. Copper is also commonly used as a wood preservative or fungicides. Zinc Zinc (Zn) is an element in group 12. Zinc is one of the main components of brass, being used since the 10th century BCE. Zinc is also incredibly important to humans; almost 2 billion people in the world suffer from zinc deficiency. However, too much zinc can cause copper deficiency. Zinc is often used in batteries, aptly named carbon-zinc batteries, and is important in many platings, as zinc is very corrosion resistant. p-block elements Gallium Gallium (Ga) is an element in group 13, under aluminium. Gallium is noteworthy because it has a melting point at about 303 kelvins, right around room temperature. For example, it will be solid on a typical spring day, but will be liquid on a hot summer day. Gallium is an important component in the alloy galinstan, along with tin. Gallium can also be found in semiconductors. Germanium Germanium (Ge) is an element in group 14. Germanium, like silicon above it, is an important semiconductor and is commonly used in diodes and transistors, often in combination with arsenic. Germanium is fairly rare on Earth, leading to its comparatively late discovery. Germanium, in compounds, can sometimes irritate the eyes, skin, or lungs. Arsenic Arsenic (As) is an element in group 15, the pnictogens. 
Arsenic, as mentioned above, is often used in semiconductors in alloys with germanium. Arsenic, in pure form and in some alloys, is incredibly poisonous to all multicellular life, and as such is a common component in pesticides. Arsenic was also used in some pigments before its toxicity was discovered. Selenium Selenium (Se) is an element in group 16, the chalcogens. Selenium is the first nonmetal in period 4, with properties similar to sulfur. Selenium is rarely found in pure form in nature, occurring mostly in minerals such as pyrite, and even then only in small amounts. Selenium is necessary for humans in trace amounts, but is toxic in larger quantities. Selenium is red in its monoclinic form but metallic gray in its most stable crystalline form. Bromine Bromine (Br) is an element in group 17 (the halogens). It does not exist in elemental form in nature. Bromine is barely liquid at room temperature, boiling at about 330 kelvins. Bromine is also quite toxic and corrosive, but bromide ions, which are relatively inert, can be found in halite, or table salt. Bromine is often used as a fire retardant because many of its compounds can be made to release free bromine atoms. Krypton Krypton (Kr) is a noble gas, placed under argon and over xenon. Being a noble gas, krypton rarely reacts with itself or with other elements; although compounds have been detected, they are all unstable and decay rapidly. Like most noble gases, krypton is used in lighting, for example in fluorescent lights, both because of this inertness and because of its many spectral lines. Biological role Many period 4 elements find roles in controlling protein function as secondary messengers, structural components, or enzyme cofactors. A potassium gradient is used by cells to maintain a membrane potential, which enables processes such as neurotransmitter release and facilitated diffusion. Calcium is a common signaling molecule for proteins such as calmodulin and plays a critical role in triggering skeletal muscle contraction in vertebrates. Selenium is a component of the noncanonical amino acid selenocysteine; proteins which contain selenocysteine are known as selenoproteins. Manganese enzymes are utilized by both eukaryotes and prokaryotes, and may play a role in the virulence of some pathogenic bacteria. Vanabins, also known as vanadium-associated proteins, are found in the blood cells of some species of sea squirts. The role of these proteins is disputed, although there is some speculation that they function as oxygen carriers. Zinc ions are used to stabilize the zinc finger motifs of many DNA-binding proteins. Period 4 elements can also be found complexed with small organic molecules to form cofactors. The most famous example of this is heme: an iron-containing porphyrin compound responsible for the oxygen-carrying function of myoglobin and hemoglobin as well as the catalytic activity of cytochrome enzymes. Hemocyanin replaces hemoglobin as the oxygen carrier of choice in the blood of certain invertebrates, including horseshoe crabs, tarantulas, and octopuses. Vitamin B12 represents one of the few biochemical applications for cobalt.
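The subshell filling order described under Atomic structure above follows the Madelung (n + l) rule: subshells are filled in order of increasing n + l, with ties broken by smaller n. The following is a minimal illustrative sketch, not part of the article, that generates this order and the resulting naive ground-state configuration for a period 4 element such as zinc; chromium and copper are real exceptions, as flagged in the table above, so the naive rule does not reproduce their configurations.

```python
# Illustrative sketch of the Madelung (n + l) rule; not a substitute for measured
# configurations (chromium and copper deviate from this naive filling).

SUBSHELL_CAPACITY = {"s": 2, "p": 6, "d": 10, "f": 14}
L_VALUE = {"s": 0, "p": 1, "d": 2, "f": 3}

def madelung_order(max_n=7):
    """All (n, subshell) pairs up to max_n, sorted by n + l, ties broken by smaller n."""
    subshells = [(n, name) for n in range(1, max_n + 1)
                 for name, l in L_VALUE.items() if l < n]
    return sorted(subshells, key=lambda s: (s[0] + L_VALUE[s[1]], s[0]))

def aufbau_configuration(atomic_number):
    """Fill subshells in Madelung order until all electrons are placed."""
    config, remaining = [], atomic_number
    for n, name in madelung_order():
        if remaining == 0:
            break
        electrons = min(SUBSHELL_CAPACITY[name], remaining)
        config.append(f"{n}{name}{electrons}")
        remaining -= electrons
    return " ".join(config)

print(aufbau_configuration(30))  # zinc: 1s2 2s2 2p6 3s2 3p6 4s2 3d10 (in filling order)
```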
Physical sciences
Periods
Chemistry
181556
https://en.wikipedia.org/wiki/Period%206%20element
Period 6 element
A period 6 element is one of the chemical elements in the sixth row (or period) of the periodic table of the chemical elements, including the lanthanides. The periodic table is laid out in rows to illustrate recurring (periodic) trends in the chemical behaviour of the elements as their atomic number increases: a new row is begun when chemical behaviour begins to repeat, meaning that elements with similar behaviour fall into the same vertical columns. The sixth period contains 32 elements, tied for the most with period 7, beginning with caesium and ending with radon. Lead is currently the last stable element; all subsequent elements are radioactive. For bismuth, however, its only primordial isotope, 209Bi, has a half-life of more than 10^19 years, over a billion times longer than the current age of the universe. As a rule, period 6 elements fill their 6s shells first, then their 4f, 5d, and 6p shells, in that order; however, there are exceptions, such as gold. Properties This period contains the lanthanides, also known as the rare earths. Many lanthanides, such as neodymium, are known for their magnetic properties. Many period 6 transition metals are very valuable, such as gold; however, many period 6 other metals are incredibly toxic, such as thallium. Period 6 contains the last stable element, lead. All subsequent elements in the periodic table are radioactive. After bismuth, which has a half-life of more than 10^19 years, polonium, astatine, and radon are some of the shortest-lived and rarest elements known; less than a gram of astatine is estimated to exist on Earth at any given time. Atomic characteristics {| class="wikitable sortable" ! colspan="3" | Chemical element ! Block ! Electron configuration |- bgcolor="" || 55 || Cs || Caesium || s-block || [Xe] 6s1 |- bgcolor="" || 56 || Ba || Barium || s-block || [Xe] 6s2 |- bgcolor="" || 57 || La || Lanthanum || f-block || [Xe] 5d1 6s2 |- bgcolor="" || 58 || Ce || Cerium || f-block || [Xe] 4f1 5d1 6s2 |- bgcolor="" || 59 || Pr || Praseodymium || f-block || [Xe] 4f3 6s2 |- bgcolor="" || 60 || Nd || Neodymium || f-block || [Xe] 4f4 6s2 |- bgcolor="" || 61 || Pm || Promethium || f-block || [Xe] 4f5 6s2 |- bgcolor="" || 62 || Sm || Samarium || f-block || [Xe] 4f6 6s2 |- bgcolor="" || 63 || Eu || Europium || f-block || [Xe] 4f7 6s2 |- bgcolor="" || 64 || Gd || Gadolinium || f-block || [Xe] 4f7 5d1 6s2 |- bgcolor="" || 65 || Tb || Terbium || f-block || [Xe] 4f9 6s2 |- bgcolor="" || 66 || Dy || Dysprosium || f-block || [Xe] 4f10 6s2 |- bgcolor="" || 67 || Ho || Holmium || f-block || [Xe] 4f11 6s2 |- bgcolor="" || 68 || Er || Erbium || f-block || [Xe] 4f12 6s2 |- bgcolor="" || 69 || Tm || Thulium || f-block || [Xe] 4f13 6s2 |- bgcolor="" || 70 || Yb || Ytterbium || f-block || [Xe] 4f14 6s2 |- bgcolor="" || 71 || Lu || Lutetium || d-block || [Xe] 4f14 5d1 6s2 |- bgcolor="" || 72 || Hf || Hafnium || d-block || [Xe] 4f14 5d2 6s2 |- bgcolor="" || 73 || Ta || Tantalum || d-block || [Xe] 4f14 5d3 6s2 |- bgcolor="" || 74 || W || Tungsten || d-block || [Xe] 4f14 5d4 6s2 |- bgcolor="" || 75 || Re || Rhenium || d-block || [Xe] 4f14 5d5 6s2 |- bgcolor="" || 76 || Os || Osmium || d-block || [Xe] 4f14 5d6 6s2 |- bgcolor="" || 77 || Ir || Iridium || d-block || [Xe] 4f14 5d7 6s2 |- bgcolor="" || 78 || Pt || Platinum || d-block || [Xe] 4f14 5d9 6s1 |- bgcolor="" || 79 || Au || Gold || d-block || [Xe] 4f14 5d10 6s1 |- bgcolor="" || 80 || Hg || Mercury || d-block || [Xe] 4f14 5d10 6s2 |- bgcolor="" || 81 || Tl || Thallium || p-block || [Xe] 4f14 5d10 6s2 6p1 |-
bgcolor="" || 82 || Pb || Lead || p-block || [Xe] 4f14 5d10 6s2 6p2 |- bgcolor="" || 83 || Bi || Bismuth || p-block || [Xe] 4f14 5d10 6s2 6p3 |- bgcolor="" || 84 || Po || Polonium || p-block || [Xe] 4f14 5d10 6s2 6p4 |- bgcolor="" || 85 || At || Astatine || p-block || [Xe] 4f14 5d10 6s2 6p5 |- bgcolor="" || 86 || Rn || Radon || p-block || [Xe] 4f14 5d10 6s2 6p6 |} In many periodic tables, the f-block is erroneously shifted one element to the right, so that lanthanum and actinium become d-block elements, and Ce–Lu and Th–Lr form the f-block, tearing the d-block into two very uneven portions. This is a holdover from early erroneous measurements of electron configurations. Lev Landau and Evgeny Lifshitz pointed out in 1948 that lutetium is not an f-block element, and since then physical, chemical, and electronic evidence has overwhelmingly supported that the f-block contains the elements La–Yb and Ac–No, as shown here and as supported by International Union of Pure and Applied Chemistry reports dating from 1988 and 2021. An exception to the Madelung rule. s-block elements Caesium Caesium or cesium is the chemical element with the symbol Cs and atomic number 55. It is a soft, silvery-gold alkali metal with a melting point of 28 °C (82 °F), which makes it one of only five elemental metals that are liquid at (or near) room temperature. Caesium is an alkali metal and has physical and chemical properties similar to those of rubidium and potassium. The metal is extremely reactive and pyrophoric, reacting with water even at−116 °C (−177 °F). It is the least electronegative element having a stable isotope, caesium-133. Caesium is mined mostly from pollucite, while the radioisotopes, especially caesium-137, a fission product, are extracted from waste produced by nuclear reactors. Two German chemists, Robert Bunsen and Gustav Kirchhoff, discovered caesium in 1860 by the newly developed method of flame spectroscopy. The first small-scale applications for caesium have been as a "getter" in vacuum tubes and in photoelectric cells. In 1967, a specific frequency from the emission spectrum of caesium-133 was chosen to be used in the definition of the second by the International System of Units. Since then, caesium has been widely used in atomic clocks. Since the 1990s, the largest application of the element has been as caesium formate for drilling fluids. It has a range of applications in the production of electricity, in electronics, and in chemistry. The radioactive isotope caesium-137 has a half-life of about 30 years and is used in medical applications, industrial gauges, and hydrology. Although the element is only mildly toxic, it is a hazardous material as a metal and its radioisotopes present a high health risk in case of radioactivity releases. Barium Barium is a chemical element with the symbol Ba and atomic number 56. It is the fifth element in Group 2, a soft silvery metallic alkaline earth metal. Barium is never found in nature in its pure form due to its reactivity with air. Its oxide is historically known as baryta but it reacts with water and carbon dioxide and is not found as a mineral. The most common naturally occurring minerals are the very insoluble barium sulfate, BaSO4 (barite), and barium carbonate, BaCO3(witherite). Barium's name originates from Greek barys (βαρύς), meaning "heavy", describing the high density of some common barium-containing ores. Barium has few industrial applications, but the metal has been historically used to scavenge air in vacuum tubes. 
Barium compounds impart a green color to flames and have been used in fireworks. Barium sulfate is used for its density, insolubility, and X-ray opacity. It is used as an insoluble heavy additive to oil well drilling mud, and in purer form, as an X-ray radiocontrast agent for imaging the human gastrointestinal tract. Soluble barium compounds are poisonous due to release of the soluble barium ion, and have been used as rodenticides. New uses for barium continue to be sought. It is a component of some "high temperature" YBCOsuperconductors, and electroceramics. f-block elements (lanthanides) The lanthanide or lanthanoid (IUPAC nomenclature) series comprises the fifteen metallic chemical elements with atomic numbers 57 through 71, from lanthanum through lutetium. These fifteen elements, along with the chemically similar elements scandium and yttrium, are often collectively known as the rare-earth elements. The informal chemical symbol Ln is used in general discussions of lanthanide chemistry. All but one of the lanthanides are f-block elements, corresponding to the filling of the 4f electron shell; lanthanum, a d-block element, is also generally considered to be a lanthanide due to its chemical similarities with the other fourteen. All lanthanide elements form trivalent cations, Ln3+, whose chemistry is largely determined by the ionic radius, which decreases steadily from lanthanum to lutetium. Between initial [Xe] and final 6s2 electronic shells The lanthanide elements are the group of elements with atomic number increasing from 57 (lanthanum) to 71 (lutetium). They are termed lanthanide because the lighter elements in the series are chemically similar to lanthanum. Strictly speaking, both lanthanum and lutetium have been labeled as group 3 elements, because they both have a single valence electron in the d shell. However, both elements are often included in any general discussion of the chemistry of the lanthanide elements. In presentations of the periodic table, the lanthanides and the actinides are customarily shown as two additional rows below the main body of the table, with placeholders or else a selected single element of each series (either lanthanum or lutetium, and either actinium or lawrencium, respectively) shown in a single cell of the main table, between barium and hafnium, and radium and rutherfordium, respectively. This convention is entirely a matter of aesthetics and formatting practicality; a rarely used wide-formatted periodic table inserts the lanthanide and actinide series in their proper places, as parts of the table's sixth and seventh rows (periods). d-block elements Lutetium Lutetium ( ) is a chemical element with the symbol Lu and atomic number 71. It is the last element in the lanthanide series, which, along with the lanthanide contraction, explains several important properties of lutetium, such as it having the highest hardness or density among lanthanides. Unlike other lanthanides, which lie in the f-block of the periodic table, this element lies in the d-block; however, lanthanum is sometimes placed on the d-block lanthanide position. Chemically, lutetium is a typical lanthanide: its only common oxidation state is +3, seen in its oxide, halides and other compounds. In an aqueous solution, like compounds of other late lanthanides, soluble lutetium compounds form a complex with nine water molecules. 
Lutetium was independently discovered in 1907 by French scientist Georges Urbain, Austrian mineralogist Baron Carl Auer von Welsbach, and American chemist Charles James. All of these men found lutetium as an impurity in the mineral ytterbia, which was previously thought to consist entirely of ytterbium. The dispute on the priority of the discovery occurred shortly after, with Urbain and von Welsbach accusing each other of publishing results influenced by the published research of the other; the naming honor went to Urbain as he published his results earlier. He chose the name lutecium for the new element but in 1949 the spelling of element 71 was changed to lutetium. In 1909, the priority was finally granted to Urbain and his names were adopted as official ones; however, the name cassiopeium (or later cassiopium) for element 71 proposed by von Welsbach was used by many German scientists until the 1950s. Like other lanthanides, lutetium is one of the elements that traditionally were included in the classification "rare earths." Lutetium is rare and expensive; consequently, it has few specific uses. For example, a radioactive isotope lutetium-176 is used in nuclear technology to determine the age of meteorites. Lutetium usually occurs in association with the element yttrium and is sometimes used in metal alloys and as a catalyst in various chemical reactions. 177Lu-DOTA-TATE is used for radionuclide therapy (see Nuclear medicine) on neuroendocrine tumours. Hafnium Hafnium is a chemical element with the symbol Hf and atomic number 72. A lustrous, silvery gray, tetravalent transition metal, hafnium chemically resembles zirconium and is found in zirconium minerals. Its existence was predicted by Dmitri Mendeleev in 1869. Hafnium was the penultimate stable isotope element to be discovered (rhenium was identified two years later). Hafnium is named for Hafnia, the Latin name for "Copenhagen", where it was discovered. Hafnium is used in filaments and electrodes. Some semiconductor fabrication processes use its oxide for integrated circuits at 45 nm and smaller feature lengths. Some superalloys used for special applications contain hafnium in combination with niobium, titanium, or tungsten. Hafnium's large neutron capture cross-section makes it a good material for neutron absorption in control rods in nuclear power plants, but at the same time requires that it be removed from the neutron-transparent corrosion-resistant zirconium alloys used in nuclear reactors. Tantalum Tantalum is a chemical element with the symbol Ta and atomic number 73. Previously known as tantalium, the name comes from Tantalus, a character from Greek mythology. Tantalum is a rare, hard, blue-gray, lustrous transition metal that is highly corrosion resistant. It is part of the refractory metals group, which are widely used as minor component in alloys. The chemical inertness of tantalum makes it a valuable substance for laboratory equipment and a substitute for platinum, but its main use today is in tantalum capacitors in electronic equipment such as mobile phones, DVD players, video game systems and computers. Tantalum, always together with the chemically similar niobium, occurs in the minerals tantalite, columbite and coltan (a mix of columbite and tantalite). Tungsten Tungsten, also known as wolfram, is a chemical element with the chemical symbol W and atomic number 74. 
The word tungsten comes from the Swedish language tung sten directly translatable to heavy stone, though the name is volfram in Swedish to distinguish it from Scheelite, in Swedish alternatively named tungsten. A hard, rare metal under standard conditions when uncombined, tungsten is found naturally on Earth only in chemical compounds. It was identified as a new element in 1781, and first isolated as a metal in 1783. Its important ores include wolframite and scheelite. The free element is remarkable for its robustness, especially the fact that it has the highest melting point of all the non-alloyed metals and the second highest of all the elements after carbon. Also remarkable is its high density of 19.3 times that of water, comparable to that of uranium and gold, and much higher (about 1.7 times) than that of lead. Tungsten with minor amounts of impurities is often brittle and hard, making it difficult to work. However, very pure tungsten, though still hard, is more ductile, and can be cut with a hard-steel hacksaw. The unalloyed elemental form is used mainly in electrical applications. Tungsten's many alloys have numerous applications, most notably in incandescent light bulb filaments, X-ray tubes (as both the filament and target), electrodes in TIG welding, and superalloys. Tungsten's hardness and high density give it military applications in penetrating projectiles. Tungsten compounds are most often used industrially as catalysts. Tungsten is the only metal from the third transition series that is known to occur in biomolecules, where it is used in a few species of bacteria. It is the heaviest element known to be used by any living organism. Tungsten interferes with molybdenum and copper metabolism, and is somewhat toxic to animal life. Rhenium Rhenium is a chemical element with the symbol Re and atomic number 75. It is a silvery-white, heavy, third-row transition metal in group 7 of the periodic table. With an estimated average concentration of 1 part per billion (ppb), rhenium is one of the rarest elements in the Earth's crust. The free element has the third-highest melting point and highest boiling point of any element. Rhenium resembles manganese chemically and is obtained as a by-product of molybdenum and copper ore's extraction and refinement. Rhenium shows in its compounds a wide variety of oxidation states ranging from −1 to +7. Discovered in 1925, rhenium was the last stable element to be discovered. It was named after the river Rhine in Europe. Nickel-based superalloys of rhenium are used in the combustion chambers, turbine blades, and exhaust nozzles of jet engines, these alloys contain up to 6% rhenium, making jet engine construction the largest single use for the element, with the chemical industry's catalytic uses being next-most important. Because of the low availability relative to demand, rhenium is among the most expensive of metals, with an average price of approximately US$4,575 per kilogram (US$142.30 per troy ounce) as of August 2011; it is also of critical strategic military importance, for its use in high performance military jet and rocket engines. Osmium Osmium is a chemical element with the symbol Os and atomic number 76. It is a hard, brittle, blue-gray or blue-black transition metal in the platinum family and is the densest naturally occurring element, with a density of (slightly greater than that of iridium and twice that of lead). 
It is found in nature as an alloy, mostly in platinum ores; its alloys with platinum, iridium, and other platinum group metals are employed in fountain pen tips, electrical contacts, and other applications where extreme durability and hardness are needed. Iridium Iridium is the chemical element with atomic number 77, and is represented by the symbol Ir. A very hard, brittle, silvery-white transition metal of the platinum family, iridium is the second-densest element (after osmium) and is the most corrosion-resistant metal, even at temperatures as high as 2000 °C. Although only certain molten salts and halogens are corrosive to solid iridium, finely divided iridium dust is much more reactive and can be flammable. Iridium was discovered in 1803 among insoluble impurities in natural platinum. Smithson Tennant, the primary discoverer, named the iridium for the goddess Iris, personification of the rainbow, because of the striking and diverse colors of its salts. Iridium is one of the rarest elements in the Earth's crust, with annual production and consumption of only three tonnes. and are the only two naturally occurring isotopes of iridium as well as the only stable isotopes; the latter is the more abundant of the two. The most important iridium compounds in use are the salts and acids it forms with chlorine, though iridium also forms a number of organometallic compounds used in industrial catalysis, and in research. Iridium metal is employed when high corrosion resistance at high temperatures is needed, as in high-end spark plugs, crucibles for recrystallization of semiconductors at high temperatures, and electrodes for the production of chlorine in the chloralkali process. Iridium radioisotopes are used in some radioisotope thermoelectric generators. Iridium is found in meteorites with an abundance much higher than its average abundance in the Earth's crust. For this reason the unusually high abundance of iridium in the clay layer at the Cretaceous–Paleogene boundary gave rise to the Alvarez hypothesis that the impact of a massive extraterrestrial object caused the extinction of dinosaurs and many other species 66 million years ago. It is thought that the total amount of iridium in the planet Earth is much higher than that observed in crustal rocks, but as with other platinum group metals, the high density and tendency of iridium to bond with iron caused most iridium to descend below the crust when the planet was young and still molten. Platinum Platinum is a chemical element with the chemical symbol Pt and an atomic number of 78. Its name is derived from the Spanish term platina, which is literally translated into "little silver". It is a dense, malleable, ductile, precious, gray-white transition metal. Platinum has six naturally occurring isotopes. It is one of the rarest elements in the Earth's crust and has an average abundance of approximately 5 μg/kg. It is the least reactive metal. It occurs in some nickel and copper ores along with some native deposits, mostly in South Africa, which accounts for 80% of the world production. As a member of the platinum group of elements, as well as of the group 10 of the periodic table of elements, platinum is generally non-reactive. It exhibits a remarkable resistance to corrosion, even at high temperatures, and as such is considered a noble metal. As a result, platinum is often found chemically uncombined as native platinum. 
Because it occurs naturally in the alluvial sands of various rivers, it was first used by pre-Columbian South American natives to produce artifacts. It was referenced in European writings as early as 16th century, but it was not until Antonio de Ulloa published a report on a new metal of Colombian origin in 1748 that it became investigated by scientists. Platinum is used in catalytic converters, laboratory equipment, electrical contacts and electrodes, platinum-resistance thermometers, dentistry equipment, and jewelry. Because only a few hundred tonnes are produced annually, it is a scarce material, and is highly valuable. Being a heavy metal, it leads to health issues upon exposure to its salts, but due to its corrosion resistance, it is not as toxic as some metals. Its compounds, most notably cisplatin, are applied in chemotherapy against certain types of cancer. Gold Gold is a dense, soft, shiny, malleable and ductile metal. It is a chemical element with the symbol Au and atomic number 79. Pure gold has a bright yellow color and luster traditionally considered attractive, which it maintains without oxidizing in air or water. Chemically, gold is a transition metal and a group 11 element. It is one of the least reactive chemical elements solid under standard conditions. The metal therefore occurs often in free elemental (native) form, as nuggets or grains in rocks, in veins and in alluvial deposits. Less commonly, it occurs in minerals as gold compounds, usually with tellurium. Gold resists attacks by individual acids, but it can be dissolved by the aqua regia (nitro-hydrochloric acid), so named because it dissolves gold. Gold also dissolves in alkaline solutions of cyanide, which have been used in mining. Gold dissolves in mercury, forming amalgam alloys. Gold is insoluble in nitric acid, which dissolves silver and base metals, a property that has long been used to confirm the presence of gold in items, giving rise to the term the acid test. Gold has been a valuable and highly sought-after precious metal for coinage, jewelry, and other arts since long before the beginning of recorded history. Gold standards have been a common basis for monetary policies throughout human history, later being supplanted by fiat currency starting in the 1930s. The last gold certificate and gold coin currencies were issued in the U.S. in 1932. In Europe, most countries left the gold standard with the start of World War I in 1914 and, with huge war debts, failed to return to gold as a medium of exchange. A total of 165,000 tonnes of gold have been mined in human history, as of 2009. This is roughly equivalent to 5.3 billion troy ounces or, in terms of volume, about 8500 m3, or a cube 20.4 m on a side. The world consumption of new gold produced is about 50% in jewelry, 40% in investments, and 10% in industry. Besides its widespread monetary and symbolic functions, gold has many practical uses in dentistry, electronics, and other fields. Its high malleability, ductility, resistance to corrosion and most other chemical reactions, and conductivity of electricity led to many uses of gold, including electric wiring, colored-glass production and even gold leaf eating. It has been claimed that most of the Earth's gold lies at its core, the metal's high density having made it sink there in the planet's youth. Virtually all of the gold that mankind has discovered is considered to have been deposited later by meteorites which contained the element. 
This supposedly explains why, in prehistory, gold appeared as nuggets on the earth's surface. Mercury Mercury is a chemical element with the symbol Hg and atomic number 80. It is also known as quicksilver or hydrargyrum ( < Greek "hydr-" water and "argyros" silver). A heavy, silvery d-block element, mercury is the only metal that is liquid at standard conditions for temperature and pressure; the only other element that is liquid under these conditions is bromine, though metals such as caesium, francium, gallium, and rubidium melt just above room temperature. With a freezing point of −38.83 °C and boiling point of 356.73 °C, mercury has one of the narrowest ranges of its liquid state of any metal. Mercury occurs in deposits throughout the world mostly as cinnabar (mercuric sulfide). The red pigment vermilion is mostly obtained by reduction from cinnabar. Cinnabar is highly toxic by ingestion or inhalation of the dust. Mercury poisoning can also result from exposure to water-soluble forms of mercury (such as mercuric chloride or methylmercury), inhalation of mercury vapor, or eating seafood contaminated with mercury. Mercury is used in thermometers, barometers, manometers, sphygmomanometers, float valves, mercury switches, and other devices though concerns about the element's toxicity have led to mercury thermometers and sphygmomanometers being largely phased out in clinical environments in favor of alcohol-filled, galinstan-filled, digital, or thermistor-based instruments. It remains in use in scientific research applications and in amalgam material for dental restoration. It is used in lighting: electricity passed through mercury vapor in a phosphor tube produces short-wave ultraviolet light which then causes the phosphor to fluoresce, making visible light. p-block elements Thallium Thallium is a chemical element with the symbol Tl and atomic number 81. This soft gray other metal resembles tin but discolors when exposed to air. The two chemists William Crookes and Claude-Auguste Lamy discovered thallium independently in 1861 by the newly developed method of flame spectroscopy. Both discovered the new element in residues of sulfuric acid production. Approximately 60–70% of thallium production is used in the electronics industry, and the remainder is used in the pharmaceutical industry and in glass manufacturing. It is also used in infrared detectors. Thallium is highly toxic and was used in rat poisons and insecticides. Its use has been reduced or eliminated in many countries because of its nonselective toxicity. Because of its use for murder, thallium has gained the nicknames "The Poisoner's Poison" and "Inheritance Powder" (alongside arsenic). Lead Lead is a main-group element in the carbon group with the symbol Pb (from ) and atomic number 82. Lead is a soft, malleable other metal. It is also counted as one of the heavy metals. Metallic lead has a bluish-white color after being freshly cut, but it soon tarnishes to a dull grayish color when exposed to air. Lead has a shiny chrome-silver luster when it is melted into a liquid. Lead is used in building construction, lead-acid batteries, bullets and shots, weights, as part of solders, pewters, fusible alloys and as a radiation shield. Lead has the highest atomic number of all of the stable elements, although the next higher element, bismuth, has a half-life that is so long (much longer than the age of the universe) that it can be considered stable. 
Its four stable isotopes have 82 protons, a magic number in the nuclear shell model of atomic nuclei. Lead, at certain exposure levels, is a poisonous substance to animals as well as for human beings. It damages the nervous system and causes brain disorders. Excessive lead also causes blood disorders in mammals. Like the element mercury, another heavy metal, lead is a neurotoxin that accumulates both in soft tissues and the bones. Lead poisoning has been documented from ancient Rome, ancient Greece, and ancient China. Bismuth Bismuth is a chemical element with symbol Bi and atomic number 83. Bismuth, a trivalent other metal, chemically resembles arsenic and antimony. Elemental bismuth may occur naturally uncombined, although its sulfide and oxide form important commercial ores. The free element is 86% as dense as lead. It is a brittle metal with a silvery white color when newly made, but often seen in air with a pink tinge owing to the surface oxide. Bismuth metal has been known from ancient times, although until the 18th century it was often confused with lead and tin, which each have some of bismuth's bulk physical properties. The etymology is uncertain but possibly comes from Arabic meaning having the properties of antimony or German words or meaning "white mass". Bismuth is the most naturally diamagnetic of all metals, and only mercury has a lower thermal conductivity. Bismuth has classically been considered to be the heaviest naturally occurring stable element, in terms of atomic mass. Recently, however, it has been found to be very slightly radioactive: its only primordial isotope bismuth-209 decays via alpha decay into thallium-205 with a half-life of more than a billion times the estimated age of the universe. Bismuth compounds (accounting for about half the production of bismuth) are used in cosmetics, pigments, and a few pharmaceuticals. Bismuth has unusually low toxicity for a heavy metal. As the toxicity of lead has become more apparent in recent years, alloy uses for bismuth metal (presently about a third of bismuth production), as a replacement for lead, have become an increasing part of bismuth's commercial importance. Polonium Polonium is a chemical element with the symbol Po and atomic number 84, discovered in 1898 by Marie Skłodowska-Curie and Pierre Curie. A rare and highly radioactive element, polonium is chemically similar to bismuth and tellurium, and it occurs in uranium ores. Polonium has been studied for possible use in heating spacecraft. As it is unstable, all isotopes of polonium are radioactive. There is disagreement as to whether polonium is a post-transition metal or metalloid. Astatine Astatine is a radioactive chemical element with the symbol At and atomic number 85. It occurs on the Earth only as the result of decay of heavier elements, and decays away rapidly, so much less is known about this element than its upper neighbors in the periodic table. Earlier studies have shown this element follows periodic trends, being the heaviest known halogen, with melting and boiling points being higher than those of lighter halogens. Until recently most of the chemical characteristics of astatine were inferred from comparison with other elements; however, important studies have already been done. The main difference between astatine and iodine is that the HAt molecule is chemically a hydride rather than a halide; however, in a fashion similar to the lighter halogens, it is known to form ionic astatides with metals. 
Bonds to nonmetals result in positive oxidation states, with +1 best portrayed by monohalides and their derivatives, while the higher are characterized by bond to oxygen and carbon. Attempts to synthesize astatine fluoride have been met with failure. The second longest-living astatine-211 is the only one to find a commercial use, being useful as an alpha emitter in medicine; however, only extremely small quantities are used, and in larger ones it is very hazardous, as it is intensely radioactive. Astatine was first produced by Dale R. Corson, Kenneth Ross MacKenzie, and Emilio Segrè in the University of California, Berkeley in 1940. Three years later, it was found in nature; however, with an estimated amount of less than 28 grams (1 oz) at given time, astatine is the least abundant element in Earth's crust among non-transuranium elements. Among astatine isotopes, four (with mass numbers 215, 217, 218 and 219) are present in nature as the result of decay of heavier elements; however, the most stable astatine-210 and the industrially used astatine-211 are not. Radon Radon is a chemical element with symbol Rn and atomic number 86. It is a radioactive, colorless, odorless, tasteless noble gas, occurring naturally as the decay product of uranium or thorium. Its most stable isotope, 222Rn, has a half-life of 3.8 days. Radon is one of the densest substances that remains a gas under normal conditions. It is also the only gas that is radioactive under normal conditions, and is considered a health hazard due to its radioactivity. Intense radioactivity also hindered chemical studies of radon and only a few compounds are known. Radon is formed as part of the normal radioactive decay chain of uranium and thorium. Uranium and thorium have been around since the earth was formed and their most common isotope has a very long half-life (14.05 billion years). Uranium and thorium, radium, and thus radon, will continue to occur for millions of years at about the same concentrations as they do now. As the radioactive gas of radon decays, it produces new radioactive elements called radon daughters or decay products. Radon daughters are solids and stick to surfaces such as dust particles in the air. If contaminated dust is inhaled, these particles can stick to the airways of the lung and increase the risk of developing lung cancer. Radon is responsible for the majority of the public exposure to ionizing radiation. It is often the single largest contributor to an individual's background radiation dose, and is the most variable from location to location. Radon gas from natural sources can accumulate in buildings, especially in confined areas such as attics and basements. It can also be found in some spring waters and hot springs. Epidemiological studies have shown a clear link between breathing high concentrations of radon and incidence of lung cancer. Thus, radon is considered a significant contaminant that affects indoor air quality worldwide. According to the United States Environmental Protection Agency, radon is the second most frequent cause of lung cancer, after cigarette smoking, causing 21,000 lung cancer deaths per year in the United States. About 2,900 of these deaths occur among people who have never smoked. While radon is the second most frequent cause of lung cancer, it is the number one cause among non-smokers, according to EPA estimates. 
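The radon figures above (a half-life of 3.8 days for 222Rn) imply a rapid fall-off in any isolated sample. The following is a small illustrative sketch, assuming only the standard exponential-decay law, which is not spelled out in the article itself.

```python
# Exponential decay of radon-222, using the half-life of about 3.8 days quoted above.
# Remaining fraction after time t: N / N0 = 2 ** (-t / t_half).

HALF_LIFE_DAYS = 3.8

def remaining_fraction(days, half_life=HALF_LIFE_DAYS):
    """Fraction of the original radon-222 still undecayed after the given number of days."""
    return 2 ** (-days / half_life)

for days in (1, 3.8, 7, 30):
    print(f"after {days:>4} days: {remaining_fraction(days):.3%} of the original radon remains")
```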
Biological role Of the period 6 elements, only tungsten and the early lanthanides are known to have any biological role in organisms, and even then only in lower organisms (not mammals). However, gold, platinum, mercury, and some lanthanides such as gadolinium have applications as drugs. Toxicity Most of the period 6 elements are toxic (for instance lead) and produce heavy-element poisoning. Promethium, polonium, astatine and radon are radioactive, and therefore present radioactive hazards.
Physical sciences
Periods
Chemistry
181576
https://en.wikipedia.org/wiki/Dao%20%28Chinese%20sword%29
Dao (Chinese sword)
Dao (pronunciation: , English approximation: , Chinese: 刀; pinyin: dāo; jyutping: dou1) are single-edged Chinese swords, primarily used for slashing and chopping. They can be straight or curved. The most common form is also known as the Chinese sabre, although those with wider blades are sometimes referred to as Chinese broadswords. In China, the dao is considered one of the four traditional weapons, along with the gun (stick or staff), qiang (spear), and the jian (double-edged sword), called in this group "The General of Weapons". Name In Chinese, the word can be applied to any weapon with a single-edged blade and usually refers to knives. Because of this, the term is sometimes translated as knife or Nonetheless, within Chinese martial arts and in military contexts, the larger "sword" versions of the dao are usually intended. General characteristics While the dao have varied greatly over the centuries, most single-handed dao of the Ming period and later and the modern swords based on them share several characteristics. Dao blades are moderately curved and single-edged, though often with a few inches of the back edge sharpened; the moderate curve allows them to be reasonably effective in the thrust. Hilts are sometimes canted, curving in the opposite direction of the blade, which improves handling in some forms of cuts and thrusts. The cord is usually wrapped over the wood of the handle. Hilts may also be pierced like those of jian (straight-bladed Chinese sword) for the addition of lanyards. However, modern swords for performances will often have tassels or scarves instead. Guards are typically disc-shaped and often cupped. This was to prevent rainwater from getting into the sheath and blood dripping down to the handle, making it more difficult to grip. Sometimes guards are thinner pieces of metal with an s-curve, the lower limb of the curve protecting the user's knuckles; very rarely, they may have guards like those of the jian. Other variations to the basic pattern include the large bagua dao and the long-handled pudao. Early history The earliest dao date from the Shang dynasty in China's Bronze Age, and are known as zhibeidao (直背刀) – straight-backed knives. As the name implies, these were straight-bladed or slightly curved weapons with a single edge. Originally bronze, these weapons were made of iron or steel by the time of the late Warring States period as metallurgical knowledge became sufficiently advanced to control the carbon content. Originally less common as a military weapon than the jian – the straight, double-edged blade of China – the dao became popular with cavalry during the Han dynasty due to its sturdiness, superiority as a chopping weapon, and relative ease of use – it was generally said that it takes a week to attain competence with a dao/saber, a month to attain competence with a qiang/spear, and a year to attain competence with a jian/straight sword. Soon after dao began to be issued to infantry, beginning the replacement of the jian as a standard-issue weapon. Late Han dynasty dao had round grips and ring-shaped pommels, and ranged between 85 and 114 centimeters in length. These weapons were used alongside rectangular shields. By the end of the Three Kingdoms period, the single-edged dao had almost completely replaced the jian on the battlefield. The jian subsequently became known as a weapon of self-defense for the scholarly aristocratic class, worn as part of court dress. 
Sui, Tang, and Song dynasties As in the preceding dynasties, Tang dynasty dao were straight along the entire length of the blade. Single-handed dao ("belt dao") were the most common sidearm in the Tang dynasty. These became known as hengdao ("horizontal dao" or "cross dao") from the preceding Sui dynasty onward. Two-handed changdao ("long dao") were also used in the Tang, with some units specializing in their use. During the Song dynasty, one form of infantry dao was the shoudao, a chopping weapon with a clip point. While some illustrations show them as straight, the 11th-century Song military encyclopedia the Wujing Zongyao depicts them with curved blades – possibly an influence from the steppe tribes of Central Asia, who would conquer parts of China during the Song period. Also dating from the Song are the falchion-like dadao, the long, two-handed zhanmadao, and the long-handled, similarly two-handed buzhandao (步戰刀). Yuan, Ming and Qing dynasties With the Mongol invasion of China in the early 13th century and the formation of the Yuan dynasty, the curved steppe saber became a greater influence on Chinese sword designs. Sabers had been used by Turkic, Tungusic, and other steppe peoples of Central Asia since at least the 8th century CE, and the saber was a favored weapon among the Mongol aristocracy. Its effectiveness for mounted warfare and popularity among soldiers throughout the Mongol empire had lasting effects. In China, Mongol influence lasted long after the collapse of the Yuan dynasty at the hands of the Ming, continuing through both the Ming and the Qing dynasties, furthering the popularity of the dao and spawning a variety of new blades. Blades with greater curvature became popular, and these new styles are collectively referred to as peidao (佩刀). During the mid-Ming, these new sabers would completely replace the jian as a military-issue weapon. The four main types of peidao are: Yanmaodao The yanmaodao or "goose-quill saber" is largely straight like the earlier zhibeidao, with a curve appearing at the center of percussion near the blade's tip. This allows for thrusting attacks and overall handling similar to that of the jian while preserving much of the dao's strengths in cutting and slashing. Liuyedao The liuyedao or "willow leaf saber" is the most common form of Chinese saber. It first appeared during the Ming dynasty and features a moderate curve along the length of the blade. This weapon became the standard sidearm for cavalry and infantry, replacing the yanmaodao, and is the sort of saber used by many schools of Chinese martial arts. Piandao The piandao or "slashing saber" is a deeply curved dao meant for slashing and draw-cutting. This weapon bears a strong resemblance to the shamshir and scimitar. Skirmishers generally used it in conjunction with a shield. Niuweidao The niuweidao or "oxtail saber" is a heavy-bladed weapon with a characteristic flaring tip. It is the archetypal "Chinese broadsword" of kung fu movies today. It was first recorded in the early 19th century (the latter half of the Qing dynasty) and only as a civilian weapon: there is no record of it being issued to troops, and it does not appear in any listing of official weaponry. Its appearance in movies and modern literature is thus often anachronistic. Other types Besides these four major types of dao, the duandao or "short dao" was also used, this being a compact weapon generally in the shape of a liuyedao.
The dadao saw continued use, and during the Ming dynasty the large two-handed changdao were used both against the cavalry of the northern steppes and the wokou (pirates) of the southeast coast; these latter weapons (sometimes under different names) would continue to see limited use during the Qing period. Also, during the Qing, there appeared weapons such as the nandao, regional variants in the name or shape of some of the above dao, and more obscure variants such as the "nine ringed broadsword", these last likely invented for street demonstrations and theatrical performances rather than for use as weapons. The word dao is also used in the names of several polearms that feature a single-edged blade, such as the pudao and guandao. The Chinese spear and dao (liuyedao and yanmaodao) were commonly issued to infantry because of the expense of the Chinese straight sword, or jian, and the relatively greater amount of training required for its effective use. Dao are often depicted in period artwork being worn by officers and infantry. During the Yuan dynasty and after, some aesthetic features of Persian, Indian, and Turkish swords would appear on dao. These could include intricate carvings on the blade and "rolling pearls": small metal balls that would roll along fuller-like grooves in the blade. Recent history The dadao was used by some Chinese militia units against Japanese invaders in the Second Sino-Japanese War, occasioning "The Sword March". The miaodao, a descendant of the changdao, also saw use. These were used during planned ambushes on Japanese troops because the Chinese military and patriotic resistance groups often had a shortage of firearms. Most Chinese martial arts schools still train extensively with the dao, seeing it as a powerful conditioning tool and a versatile weapon, with self-defense techniques transferable to similarly sized objects more commonly found in the modern world, such as canes and baseball or cricket bats. Some schools teach double-dao (雙刀) forms and fencing, with one dao in each hand. One measure of the proper length of the sword is from the hilt held in the hand to the tip of the blade at the brow or, in some schools, at shoulder height. Alternatively, the sword's length should reach from the middle of the throat along the length of the outstretched arm. There are also significantly larger versions of dao used for training in some Baguazhang and Taijiquan schools. Nandao The nandao or "southern broadsword" is a modern innovation used for contemporary wushu practice. In modern wushu Daoshu refers to the competitive event in modern wushu taolu where athletes utilize a dao in a routine. It was one of the four main weapon events implemented at the 1st World Wushu Championships due to its general popularity. Apparatus The dao itself consists of a thin blade that makes noise when stabbing or cutting techniques are used. Over time, the edge has become more flimsy to create more noise, and the sword has become lighter to allow for faster handling. The only exception to this trend was in 1997, when the Chinese Wushu Association for one year required all swords to have a stiff blade in domestic competition. In older generations of modern wushu, the broadsword flag was generally large, but over the years it has been greatly reduced in size to allow for more speed and clarity of movement. As of the 2024 IWUF rules, the broadsword blade length should be no shorter than the top of a competitor's ear if held vertically beside the body with the left hand.
The flag must also be no shorter than 30 centimeters. Routines As of the 2024 IWUF rules, daoshu routines must be between 1 minute 20 seconds and 1 minute 35 seconds in length. Daoshu routines are also required to include the following techniques: Sword techniques Chán Tóu (缠头) — Broadsword Twining Guǒ Nǎo (裹脑) — Wrapping with the Broadsword Pī Dāo (劈刀) — Broadsword Chop Zhā Dāo (扎刀) — Broadsword Thrust Zhǎn Dāo (斩刀) — Broadsword Hack Guà Dāo (挂刀) — Broadsword Hooking Parry Yún Dāo (云刀) — Broadsword Cloud Waving Bèi Huā Dāo (背花刀) — Broadsword Wrist Figure 8 Behind the Back Stances Gōng Bù (弓步) - Bow Stance Mǎ Bù (马步) - Horse Stance Pū Bù (仆步) - Drop Stance Xū Bù (虚步) - Empty Stance Xiē Bù (歇步) - Cross-Legged Crouching Stance Scoring criteria Daoshu adheres to the same deduction content (A score) and degree of difficulty content and connections (C score) as changquan, gunshu, jianshu, and qiangshu. This three-score system has been in place since the 2005 IWUF rules revision. Only the techniques Chán Tóu (缠头) and Guǒ Nǎo (裹脑) have deduction content (code 62).
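The numeric limits quoted above from the 2024 IWUF rules (routine duration between 1 minute 20 seconds and 1 minute 35 seconds, and a flag no shorter than 30 centimeters) can be gathered into a small check. This is only an illustrative sketch; the field names and the idea of automating the check are assumptions and not part of the rules text, and the ear-height blade rule is omitted because it depends on the individual athlete.

```python
# Illustrative sketch (not an official IWUF tool): checks a daoshu routine against the
# numeric limits quoted above. All field names are invented for this example.

from dataclasses import dataclass

MIN_DURATION_S = 80        # 1 minute 20 seconds
MAX_DURATION_S = 95        # 1 minute 35 seconds
MIN_FLAG_CM = 30

@dataclass
class DaoshuRoutine:
    duration_s: float      # measured routine length in seconds
    flag_length_cm: float  # length of the broadsword flag

def rule_violations(routine: DaoshuRoutine) -> list[str]:
    """Return a list of human-readable problems; an empty list means the checks pass."""
    problems = []
    if not MIN_DURATION_S <= routine.duration_s <= MAX_DURATION_S:
        problems.append(f"duration {routine.duration_s}s outside {MIN_DURATION_S}-{MAX_DURATION_S}s")
    if routine.flag_length_cm < MIN_FLAG_CM:
        problems.append(f"flag {routine.flag_length_cm}cm shorter than {MIN_FLAG_CM}cm")
    return problems

print(rule_violations(DaoshuRoutine(duration_s=78, flag_length_cm=32)))  # flags the short duration
```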
Technology
Swords
null
181580
https://en.wikipedia.org/wiki/Pinus%20sylvestris
Pinus sylvestris
Pinus sylvestris, the Scots pine (UK), Scotch pine (US), Baltic pine, or European red pine is a species of tree in the pine family Pinaceae that is native to Eurasia. It can readily be identified by its combination of fairly short, blue-green leaves and orange-red bark. Description Pinus sylvestris is an evergreen coniferous tree growing up to in height and in trunk diameter when mature, exceptionally over tall and in trunk diameter on very productive sites. The tallest on record is a tree over 210 years old growing in Estonia which stands at . The lifespan is normally 150–300 years, with the oldest recorded specimens in Lapland, Northern Finland over 760 years. The bark is thick, flaky and orange-red when young to scaly and gray-brown in maturity, sometimes retaining the former on the upper portion. The habit of the mature tree is distinctive due to its long, bare and straight trunk topped by a rounded or flat-topped mass of foliage. The shoots are light brown, with a spirally arranged scale-like pattern. On mature trees the leaves ('needles') are a glaucous blue-green, often darker green to dark yellow-green in winter, long and broad, produced in fascicles of two with a persistent gray basal sheath. On vigorous young trees the leaves can be twice as long, and occasionally occur in fascicles of three or four on the tips of strong shoots. Leaf persistence varies from two to four years in warmer climates, and up to nine years in subarctic regions. Seedlings up to one year old bear juvenile leaves; these are single (not in pairs), long, flattened, with a serrated margin. The seed cones are red at pollination, then pale brown, globose and in diameter in their first year, expanding to full size in their second year, pointed ovoid-conic, green, then gray-green to yellow-brown at maturity, long. The cone scales have a flat to pyramidal apophysis (the external part of the cone scale), with a small prickle on the umbo (central boss or protuberance). The seeds are blackish, in length with a pale brown wing and are released when the cones open in spring 22–24 months after pollination. The pollen cones are yellow, occasionally pink, long; pollen release is in mid to late spring. Varieties Over 100 Pinus sylvestris varieties have been described in the botanical literature, but only three or four are now accepted. They differ only minimally in morphology, but with more pronounced differences in genetic analysis and resin composition. Populations in westernmost Scotland are genetically distinct from those in the rest of Scotland and northern Europe, but not sufficiently to have been distinguished as separate botanical varieties. Trees in the far north of the range were formerly sometimes treated as var. lapponica, but the differences are clinal and it is not genetically distinct. Names Before the 18th century, the species was more often known as Scots fir or Scotch fir. Another, less common name is European redwood. The timber from it is also called red deal or yellow deal; the name "deal" comes from an archaic unit of volume used to measure wood. Distribution and habitat Pinus sylvestris is the only pine native to northern Europe, ranging from Western Europe to Eastern Siberia, south to the Caucasus Mountains and Anatolia, and north to well inside the Arctic Circle in Fennoscandia. In the north of its range, it occurs from sea level to , while in the south of its range it is a mountain tree, growing at altitude. Its distribution intersects with T. 
piniperda's habitat, making the beetle a primary pest of the tree. The species is mainly found on poorer, sandy soils, rocky outcrops, peat bogs or close to the forest limit. On fertile sites, the pine is out-competed by other tree species, usually spruce or broad-leaved trees. Britain and Ireland The tree spread across Britain and Ireland after the Last Glacial Maximum. Pollen records show that pine was present locally in southern England by 9,000 years ago having entered from northeast France and that it had spread as far north as the Lake District and North Pennines 500 years later. It was present in Ireland over 8,800 years ago but absent from Wales at that time which suggests that the pine in Ireland had a separate Iberian origin or contained surviving populations, although evidence towards its survival is lacking. Pine expanded into Scotland between 8,000 and 8,500 years ago either from an independent refuge, from Scandinavia (via Doggerland) or from Ireland. As the climate warmed it became extinct from most of Britain and Ireland around 5,500 years ago except in Scotland, Kielder in England and The Burren in County Clare, Ireland. The Irish and western Scottish populations went through a massive decline around 4,000 years ago which ultimately led to the practical extinction of the Irish population between 2,000 and 1,000 years ago. It was replaced by large areas of blanket bog in western Scotland and Ireland though the reasons for its decline and extinction in England are not clear, but it may have been influenced by human activities. In Britain it now occurs naturally only in Scotland. Historical and archaeological records indicate that it also occurred in Wales and England until about 300–400 years ago, becoming extinct there due to over-exploitation and grazing; it has been re-introduced in these countries. Similar historical extinction and re-introduction applies to Ireland, Denmark and the Netherlands. Whether it truly became extinct in England is unknown. It has been speculated that it may have survived wild long enough for trees used in cultivation in England to derive from native (rather than imported) sources. Shakespeare (in Richard II) was familiar with the species in the 1590s, as was Evelyn in the early 1660s (Sylva), both around the time when the pine was thought to become extinct in England, but when landowners were also beginning ornamental and forestry planting. The pine formed much of the Caledonian Forest, which once covered much of the Scottish Highlands. Overcutting for timber demand, fire, overgrazing by sheep and deer, and even deliberate clearance to deter wolves have all been factors in the decline of this once great pine and birch forest. Only comparatively small areas, only just over 1% of the estimated original of this ancient forest remain, the main surviving remnants being at Abernethy Forest, Glen Affric, Rothiemurchus Forest, and the Black Wood of Rannoch. Plans are currently in progress to restore at least some areas and work has started at key sites. Ecology It forms either pure forests or mixes with Norway spruce, common juniper, silver birch, European rowan, Eurasian aspen and other hardwood species. In central and southern Europe, it occurs with numerous additional species, including European black pine, mountain pine, Macedonian pine, and Swiss pine. In the eastern part of its range, it occurs with Siberian pine, among others. In 2020, black spot needle blight was found on hundreds of Pinus sylvestris var. 
mongolica trees in four forest farms in northeastern China. It first appeared on the upper part of the needles, and then the needles became withered and gradually showed light black spots, although they still remained green. As the fungal disease progressed, the needles eventually died and turned gray with many dark black spots. The fungus was identified as Heterotruncatella spartii (within the family Sporocadaceae) based on morphology and molecular methods. Uses Pinus sylvestris is an important tree in forestry. The wood is used for pulp and sawn timber products. A seedling stand can be created by planting, sowing, or natural regeneration. Commercial plantation rotations vary between 50 and 120 years, with longer rotations in northeastern areas where growth is slower. In Scandinavian countries, the pine was used for making tar in the preindustrial age. Some active tar producers still exist, but that industry has almost ceased. The pine has also been used as a source of rosin and turpentine. The wood is pale brown to red-brown, and used for general construction work. It has a dry density around 470 kg/m3 (varying with growth conditions), an open porosity of 60%, a fibre saturation point of 0.25 kg/kg, and a saturation moisture content of 1.60 kg/kg. The pine fibres are used to make the textile known as vegetable flannel, which has a hemp-like appearance, but with a tighter, softer texture. The pine has also been widely planted in New Zealand and much of the colder regions of North America; it was one of the first trees introduced to North America, in about 1600. It is listed as an invasive species in some areas there, including Ontario, Michigan. It has been widely used in the United States for the Christmas tree trade, and was one of the most popular Christmas trees from the 1950s through the 1980s. It remains popular for that usage, though it has been eclipsed in popularity, by such species as Fraser fir, Douglas-fir, and others. Despite its invasiveness in parts of eastern North America, the pine does not often grow well there, partly due to climate and soil differences between its native habitat and that of North America, and partly due to damage by pests and diseases; the tree often grows in a twisted, haphazard manner if not tended to (as they are in the Christmas tree trade). The pines may be killed by the pine wood nematode, which causes pine wilt disease. The nematode most often attacks trees that are at least ten years old and often kills trees it infects within a few weeks. Previously, the pine was grown in and used extensively by the coal mining regions of Flanders, Belgium. It was used to fortify tunnels, primarily because it would make a cracking sound when in need of replacement. Large patches of forest, mostly containing the species, are still scattered over the countryside. Cultivars Several cultivars are grown for ornamental purposes in parks and large gardens, of which 'Aurea', 'Beuvronensis', 'Frensham', and 'Gold Coin' have gained the Royal Horticultural Society's Award of Garden Merit. In culture The Scots pine is the plant badge of Clan Gregor. It is the national tree of Scotland. Fossil record One fossil seed cone of Pinus montana fossilis was sent by the Naturmuseum Senckenberg to the Swedish Museum of Natural History (Swedish: Naturhistoriska Riksmuseet), as a scientific gift specimen, the seed cone is of late Pliocene age (Reuverian). Pinus montana is a synonym of Pinus sylvestris. 
The cone fossil had been recovered during the years 1884 and 1885 in Niederrad, which is a quarter of Frankfurt am Main, Germany. Selection in haploid versus diploid tissue Genes of Scots pine that are expressed in the haploid stage of the life cycle appear to be subject to stronger purifying selection than genes expressed only in the diploid stage. The concept that those genes of an organism that are expressed in the haploid stage are subject to more efficient natural selection than genes expressed exclusively in the diploid stage is referred to as the "masking theory". This theory implies that purifying selection is more efficient in the haploid stage of the life cycle, where fitness effects are expressed more directly, than in the diploid stage. Gallery
Biology and health sciences
Pinaceae
Plants
181627
https://en.wikipedia.org/wiki/Tonfa
Tonfa
The tonfa (Okinawan: , lit. old man's staff / "crutch", also spelled as tongfa or tuifa, also known as T-baton) is a melee weapon with its origins in the armed component of Okinawan martial arts where it is known as the tunkua. It consists of a stick with a perpendicular handle attached a third of the way down the length of the stick, and is about long. It was traditionally made from red or white oak, and wielded in pairs. The tonfa is believed to have originated in either China, Okinawa or Southeast Asia, where it is used in the respective fighting styles. History Regional variants Although the tonfa is most commonly associated with the Okinawan martial arts, its origin is heavily debated. One of the most commonly cited origins is China, although origins from Indonesia to Okinawa are also possible. Although modern martial artists often cite that the tonfa derives from a millstone handle used by peasants, martial arts in Okinawa were historically practised by the upper classes who imported martial arts from China and elsewhere, and it is likely that the weapon was imported from outside Okinawa. The Chinese and Malay words for the weapon (guai and topang respectively) literally mean "crutch", which may suggest the weapon originating from such device. In Cambodia and Thailand, a similar weapon is used consisting of a pair of short clubs tied onto the forearms, known in Thai as mai sok and in Khmer as staupe. In Thailand and Malaysia, the mai sok often has a similar design to the tonfa, with a perpendicular handle rather than being tied on. In Vietnam, a similar weapon called the song xỉ is made of a pair of steel or aluminum bars. The song xỉ is used as a small shield to protect the forearms and has a sharp tip at the end to attack. Types of tonfa There are different versions of the Okinawa tonfa but the basic design is the same. The small grip is at one end of the tonfa. The main body of the tonfa is where there are variations. The most popular form of tonfa has rounded sides and a rounded bottom which makes a semicircle. The square tonfa has rectangular faces on the main body of the weapon. A paddle-shape tonfa has the bottom half wider than the front half and looks like a paddle. Another tonfa has a rounded body throughout. A crude pointed tonfa has the front heads and back heads ending in a pointed design. This can be used for stabbing defense. Usage The tonfa can be used for blocking and striking. The tonfa measures about three centimeters past the elbow when gripped. There are three grips, honte-mochi (natural), gyakute-mochi (reverse) and tokushu-mochi (special). The natural grip places the handle in the hand with the long arm resting along the bottom of the forearm. This grip provides protection or brace along one's forearms, and also provides reinforcement for backfist, elbow strikes, and punches. In use, the tonfa can swing out to the gyakute grip for a strike or thrust. Martial artists may also flip the tonfa and grab it by the shaft, called tokushu-mochi. This allows use of the handle as a hook in combat, similar to the kama (sickle). This grip is uncommon but is used in the kata Yaraguwa. Blocking techniques involve a sidestepping maneuver. This allows the block to stop the attack while providing a way to gain entry. The block can be used to block high attack and low attacks.
Technology
Melee weapons
null
181636
https://en.wikipedia.org/wiki/Histrionic%20personality%20disorder
Histrionic personality disorder
Histrionic personality disorder (HPD) is defined by the American Psychiatric Association as a personality disorder characterized by a pattern of excessive attention-seeking behaviors, usually beginning in adolescence or early adulthood, including inappropriate seduction and an excessive desire for approval. People diagnosed with the disorder are said to be lively, dramatic, vivacious, enthusiastic, extroverted and flirtatious. HPD lies in the dramatic cluster of personality disorders, also known as the Cluster B. People with HPD have a high desire for attention, make loud and inappropriate appearances, exaggerate their behaviors and emotions, and crave stimulation. They very often exhibit pervasive and persistent sexually provocative behavior, express strong emotions with an impressionistic style, and can be easily influenced by others. Associated features can include egocentrism, self-indulgence, continuous longing for appreciation, and persistent manipulative behavior to achieve their own wants. Signs and symptoms People diagnosed with HPD may be dramatic. They often fail to see their own personal situation realistically, instead dramatizing and exaggerating their difficulties. Patients with this disorder can have rapidly shifting emotions and a decreased ability to recognize the emotions of others. Their emotions may appear superficial or exaggerated to others. This disorder is associated with extraversion, a lower tolerance for frustration or delayed gratification, and openness to new experiences. People with HPD may have little self-doubt and often appear egocentric. Research has also shown those with histrionic personality have a greater desire for social approval and reassurance and will constantly seek it out, making those with HPD more vulnerable to social media addiction. People with this disorder often display excessive sensitivity to criticism or disapproval. They will work hard to get others to pay attention to them, possibly as a method of testing the stability of relationships. They may enjoy situations in which they can be the center of attention, and may feel uncomfortable when people are not paying attention to them. People with this disorder may wear flamboyant clothing, try body modifications, and fake medical conditions in an attempt to draw others' attention. They may be inappropriately sexually provocative, flirtatious, or exploitative. Sexually suggestive and exhibitionist behavior are also behaviors people with this condition sometimes exhibit, and are more likely to seek out casual sexual relationships. Some people with histrionic traits or personality disorder change their seduction technique into a more parental style as they age. When their desire for attention is not met, it can heighten the severity of their symptoms. Patients with HPD are usually high-functioning, both socially and professionally. They usually have good social skills, despite tending to use them to make themselves the center of attention. HPD may also affect a person's social and romantic relationships, as well as their ability to cope with losses or failures. People with HPD tend to consider relationships closer than they usually are. They may seek treatment for clinical depression when romantic (or other close personal) relationships end. Substance disorders, such as alcohol use disorder or opioid use disorder, are all common in patients with histrionic personality disorder. 
They are also at higher risks of suicide, body dysmorphia (a preoccupation with perceived flaws in one's physical appearance), and divorce. They may go through frequent job changes, as they become easily bored and may prefer withdrawing from frustration (instead of facing it). Because they tend to crave novelty and excitement, they may place themselves in risky situations. All of these factors may lead to greater risk of developing clinical depression. People with this condition can have an impressionistic and undetailed style of speech. Despite these traits, they can be prideful of their own personality, and may be unwilling to change, viewing any change as a threat. They may even blame their personal failures or disappointments on others. Causes Little research has been done to find evidence of what causes histrionic personality disorder. Although direct causes are inconclusive, various theories and studies suggest multiple possible causes, of a neurochemical, genetic, psychoanalytic, or environmental nature. Traits such as extravagance, vanity, and seductiveness of hysteria have similar qualities to women diagnosed with HPD. HPD symptoms typically do not fully develop until late teens or early 20s, while the onset of treatment only occurs, on average, at approximately 40 years of age. Although 80% of diagnosed cases are in females, it may be equally prevalent among men. Little is known about how this disorder affects males, but it is thought to be more difficult to detect in men. Authoritarian parenting Authoritarian or distant attitudes, usually by the mother but sometimes both parents, may result in children developing this disorder later in life. Psychoanalytic theories incriminate authoritarian or distant attitudes by one (mainly the mother) or both parents, along with conditional love based on expectations the child can never fully meet. Using psychoanalysis, Freud believed that lustfulness was a projection of the patient's lack of ability to love unconditionally and develop cognitively to maturity, and that such patients were overall emotionally shallow. He believed the reason for being unable to love could have resulted from a traumatic experience, such as the death of a close relative during childhood or divorce of one's parents, which gave the wrong impression of committed relationships. Exposure to one or multiple traumatic occurrences of a close friend or family member's leaving (via abandonment or mortality) could make the person unable to form true and affectionate attachments towards other people. Neurochemical/physiological Studies have shown that there is a strong correlation between the function of certain hormones, neurotransmitters and the Cluster B personality disorders such as HPD. This seems to be especially evident with respect to the catecholamines. Individuals diagnosed with HPD have a highly responsive noradrenergic system, which is responsible for the synthesis, storage, and release of the neurotransmitter norepinephrine. High levels of norepinephrine lead to anxiety-proneness, dependency, novelty seeking, and high sociability. Genetic Twin studies have aided in breaking down the genetic vs. environment debate. A twin study conducted by the Department of Psychology at the University of Oslo attempted to establish a correlation between genetics and Cluster B personality disorders. 
With a test sample of 221 twins, 92 monozygotic and 129 dizygotic, researchers interviewed the subjects using the Structured Clinical Interview for DSM-III-R Personality Disorders (SCID-II) and concluded that there was a correlation of 0.67 that histrionic personality disorder is hereditary. HPD and antisocial personality disorder Another theory suggests a possible relationship between histrionic personality disorder and antisocial personality disorder. Research has found 2/3 of patients diagnosed with histrionic personality disorder also meet criteria similar to those of the antisocial personality disorder, which suggests both disorders based towards sex-type expressions may have the same underlying cause. Some family history studies have found that histrionic personality disorder, as well as antisocial and borderline personality disorders, tend to run in families, but it is unclear how much is due to genetic versus environmental factors. Both examples suggest that predisposition could be a factor as to why certain people are diagnosed with histrionic personality disorder, however little is known about whether or not the disorder is influenced by any biological compound or is genetically inheritable. Little research has been conducted to determine the biological sources, if any, of this disorder. Diagnosis The person's appearance, behavior and history, along with a psychological evaluation, are usually sufficient to establish a diagnosis. There is no test to confirm this diagnosis. Because the criteria are subjective, some people may be wrongly diagnosed. DSM 5 The current edition of the Diagnostic and Statistical Manual of Mental Disorders, DSM 5, defines histrionic personality disorder (in Cluster B) as: The DSM 5 requires that a diagnosis for any specific personality disorder also satisfies a set of general personality disorder criteria. ICD-10 The World Health Organization's ICD-10 lists histrionic personality disorder (F60.4) as: It is a requirement of ICD-10 that a diagnosis of any specific personality disorder also satisfy a set of general personality disorder criteria. Comorbidity Most histrionics also have other mental disorders. Comorbid conditions include: antisocial, dependent, borderline, and narcissistic personality disorders, as well as depression, anxiety disorders, panic disorder, somatoform disorders, anorexia nervosa, substance use disorder and attachment disorders, including reactive attachment disorder. Millon's subtypes In 2000, Theodore Millon suggested six subtypes of histrionic personality disorder. Any individual histrionic may exhibit one or more of the following: Treatment Treatment is often prompted by depression associated with dissolved relationships. Medication does little to affect the personality disorder, but may be helpful with symptoms such as depression. Treatment for HPD itself involves psychotherapy, including cognitive therapy. Interviews and self-report methods In general clinical practice with assessment of personality disorders, one form of interview is the most popular: an unstructured interview. The actual preferred method is a semi-structured interview but there is reluctance to use this type of interview because they can seem impractical or superficial. The reason that a semi-structured interview is preferred over an unstructured interview is that semi-structured interviews tend to be more objective, systematic, replicable, and comprehensive. 
Unstructured interviews, despite their popularity, tend to have problems with unreliability and are susceptible to errors that lead to false assumptions about the patient. One of the most successful methods for assessing personality disorders by researchers of normal personality functioning is the self-report inventory followed up with a semi-structured interview. A disadvantage of the self-report inventory method is that, with histrionic personality disorder, there is a distortion in character, self-presentation, and self-image. This means that most clients cannot be assessed by simply asking them whether they match the criteria for the disorder. Most projective testing depends less on the ability or willingness of the person to provide an accurate description of the self, but there is currently limited empirical evidence on projective testing to assess histrionic personality disorder. Functional analytic psychotherapy Another way to treat histrionic personality disorder after identification is through functional analytic psychotherapy. The job of a functional analytic psychotherapist is to identify the interpersonal problems of the patient as they happen in session or out of session. Initial goals of functional analytic psychotherapy are set by the therapist and include behaviors that fit the client's needs for improvement. Functional analytic psychotherapy differs from traditional psychotherapy in that the therapist directly addresses the patterns of behavior as they occur in-session. The in-session behaviors of the patient or client are considered to be examples of their patterns of poor interpersonal communication and of their neurotic defenses. To do this, the therapist must act on the client's behavior as it happens in real time and give feedback on how the client's behavior is affecting their relationship during therapy. The therapist also helps the client with histrionic personality disorder by denoting behaviors that happen outside of treatment; these behaviors are termed "Outside Problems" and "Outside Improvements". This allows the therapist to assist with problems and improvements outside of session, to verbally support the client, and to condition optimal patterns of behavior. This can then show how the client is advancing in-session and outside of session by generalizing their behaviors over time for changes or improvement. Coding client and therapist behaviors In these sessions there is a certain set of dialogue, or script, that the therapist can prompt in order to give insight into the client's behaviors and reasoning. Here is an example; the conversation is hypothetical. T = therapist, C = client. This coded dialogue can be transcribed as: ECRB – Evoking clinically relevant behavior T: Tell me how you feel coming in here today (CRB2) C: Well, to be honest, I was nervous. Sometimes I feel worried about how things will go, but I am really glad I am here. CRB1 – In-session problems C: Whatever, you always say that. (becomes quiet). I don't know what I am doing talking so much. CRB2 – In-session improvements TCRB1 – Clinically relevant response to client problems T: Now you seem to be withdrawing from me. That makes it hard for me to give you what you might need from me right now. What do you think you want from me as we are talking right now? TCRB2 – Responses to client improvement T: That's great. I am glad you're here, too. I look forward to talking to you. 
Functional ideographic assessment template Another example of treatment besides coding is functional ideographic assessment template. The functional ideographic assessment template, also known as FIAT, was used as a way to generalize the clinical processes of functional analytic psychotherapy. The template was made by a combined effort of therapists and can be used to represent the behaviors that are a focus for this treatment. Using the FIAT therapists can create a common language to get stable and accurate communication results through functional analytic psychotherapy at the ease of the client; as well as the therapist. Epidemiology The survey data from the National epidemiological survey from 2001 to 2002 suggests a prevalence of HPD of 1.84 percent. Major character traits may be inherited, while other traits may be due to a combination of genetics and environment, including childhood experiences. This personality is seen more often in women than in men. Approximately 65% of HPD diagnoses are women while 35% are men. In Marcie Kaplan's A Women's View of DSM-III, she argues that women are overdiagnosed due to potential biases and expresses that even healthy women are often automatically diagnosed with HPD. It has also been argued due to diagnostic bias that prevalence rates are equal among women and men. Many symptoms representing HPD in the DSM are exaggerations of traditional feminine behaviors. In a peer and self-review study, it showed that femininity was correlated with histrionic, dependent and narcissistic personality disorders. Although it has typically been found that at least two thirds of HPD diagnoses are female, there have been a few exceptions. Whether or not the rate will be significantly higher than the rate of women within a particular clinical setting depends upon many factors that are mostly independent of the differential sex prevalence for HPD. Those with HPD are more likely to look for multiple people for attention, which leads to marital problems due to jealousy and lack of trust from the other party. This makes them more likely to become divorced or separated once married. With few studies done to find direct causations between HPD and culture, cultural and social aspects play a role in inhibiting and exhibiting HPD behaviors.
Biology and health sciences
Mental disorders
Health
181823
https://en.wikipedia.org/wiki/Turbojet
Turbojet
The turbojet is an airbreathing jet engine which is typically used in aircraft. It consists of a gas turbine with a propelling nozzle. The gas turbine has an air inlet which includes inlet guide vanes, a compressor, a combustion chamber, and a turbine (that drives the compressor). The compressed air from the compressor is heated by burning fuel in the combustion chamber and then allowed to expand through the turbine. The turbine exhaust is then expanded in the propelling nozzle where it is accelerated to high speed to provide thrust. Two engineers, Frank Whittle in the United Kingdom and Hans von Ohain in Germany, developed the concept independently into practical engines during the late 1930s. Turbojets have poor efficiency at low vehicle speeds, which limits their usefulness in vehicles other than aircraft. Turbojet engines have been used in isolated cases to power vehicles other than aircraft, typically for attempts on land speed records. Where vehicles are "turbine-powered", this is more commonly by use of a turboshaft engine, a development of the gas turbine engine where an additional turbine is used to drive a rotating output shaft. These are common in helicopters and hovercraft. Turbojets were widely used for early supersonic fighters, up to and including many third generation fighters, with the MiG-25 being the latest turbojet-powered fighter developed. As most fighters spend little time traveling supersonically, fourth-generation fighters (as well as some late third-generation fighters like the F-111 and Hawker Siddeley Harrier) and subsequent designs are powered by the more efficient low-bypass turbofans and use afterburners to raise exhaust speed for bursts of supersonic travel. Turbojets were used on Concorde and the longer-range versions of the Tu-144 which were required to spend a long period travelling supersonically. Turbojets are still common in medium range cruise missiles, due to their high exhaust speed, small frontal area, and relative simplicity. History The first patent for using a gas turbine to power an aircraft was filed in 1921 by Frenchman Maxime Guillaume. His engine was to be an axial-flow turbojet, but was never constructed, as it would have required considerable advances over the state of the art in compressors. In 1928, British RAF College Cranwell cadet Frank Whittle formally submitted his ideas for a turbojet to his superiors. In October 1929 he developed his ideas further. On 16 January 1930 in England, Whittle submitted his first patent (granted in 1932). The patent showed a two-stage axial compressor feeding a single-sided centrifugal compressor. Practical axial compressors were made possible by ideas from A.A. Griffith in a seminal paper in 1926 ("An Aerodynamic Theory of Turbine Design"). Whittle later concentrated on the simpler centrifugal compressor only, for a variety of practical reasons. A Whittle engine was the first turbojet to run, the Power Jets WU, on 12 April 1937. It was liquid-fuelled. Whittle's team experienced near-panic during the first start attempts when the engine accelerated out of control to a relatively high speed despite the fuel supply being cut off. It was subsequently found that fuel had leaked into the combustion chamber during pre-start motoring checks and accumulated in pools, so the engine would not stop accelerating until all the leaked fuel had burned off. Whittle was unable to interest the government in his invention, and development continued at a slow pace. 
In Germany, Hans von Ohain patented a similar engine in 1935. His design, an axial-flow engine, as opposed to Whittle's centrifugal flow engine, was eventually adopted by most manufacturers by the 1950s. On 27 August 1939 the Heinkel He 178, powered by von Ohain's design, became the world's first aircraft to fly using the thrust from a turbojet engine. It was flown by test pilot Erich Warsitz. The Gloster E.28/39, (also referred to as the "Gloster Whittle", "Gloster Pioneer", or "Gloster G.40") made the first British jet-engined flight in 1941. It was designed to test the Whittle jet engine in flight, and led to the development of the Gloster Meteor. The first two operational turbojet aircraft, the Messerschmitt Me 262 and then the Gloster Meteor, entered service in 1944, towards the end of World War II, the Me 262 in April and the Gloster Meteor in July. Only about 15 Meteor saw WW2 action but up to 1400 Me 262s were produced, with 300 entering combat, delivering the first ground attacks and air combat victories of jet planes. Air is drawn into the rotating compressor via the intake and is compressed to a higher pressure before entering the combustion chamber. Fuel is mixed with the compressed air and burns in the combustor. The combustion products leave the combustor and expand through the turbine where power is extracted to drive the compressor. The turbine exit gases still contain considerable energy that is converted in the propelling nozzle to a high speed jet. The first turbojets, used either a centrifugal compressor (as in the Heinkel HeS 3), or an axial compressor (as in the Junkers Jumo 004) which gave a smaller diameter, although longer, engine. By replacing the propeller used on piston engines with a high speed jet of exhaust, higher aircraft speeds were attainable. One of the last applications for a turbojet engine was Concorde which used the Olympus 593 engine. However, joint studies by Rolls-Royce and Snecma for a second generation SST engine using the 593 core were done more than three years before Concorde entered service. They evaluated bypass engines with bypass ratios between 0.1 and 1.0 to give improved take-off and cruising performance. Nevertheless, the 593 met all the requirements of the Concorde programme. Estimates made in 1964 for the Concorde design at Mach 2.2 showed the penalty in range for the supersonic airliner, in terms of miles per gallon, compared to subsonic airliners at Mach 0.85 (Boeing 707, DC-8) was relatively small. This is because the large increase in drag is largely compensated by an increase in powerplant efficiency (the engine efficiency is increased by the ram pressure rise which adds to the compressor pressure rise, the higher aircraft speed approaches the exhaust jet speed increasing propulsive efficiency). Turbojet engines had a significant impact on commercial aviation. Aside from giving faster flight speeds turbojets had greater reliability than piston engines, with some models demonstrating dispatch reliability rating in excess of 99.9%. Pre-jet commercial aircraft were designed with as many as four engines in part because of concerns over in-flight failures. Overseas flight paths were plotted to keep planes within an hour of a landing field, lengthening flights. The increase in reliability that came with the turbojet enabled three- and two-engine designs, and more direct long-distance flights. High-temperature alloys were a reverse salient, a key technology that dragged progress on jet engines. 
Non-UK jet engines built in the 1930s and 1940s had to be overhauled every 10 or 20 hours due to creep failure and other types of damage to blades. British engines, however, utilised Nimonic alloys which allowed extended use without overhaul, engines such as the Rolls-Royce Welland and Rolls-Royce Derwent, and by 1949 the de Havilland Goblin, being type tested for 500 hours without maintenance. It was not until the 1950s that superalloy technology allowed other countries to produce economically practical engines. Early designs Early German turbojets had severe limitations on the amount of running they could do due to the lack of suitable high temperature materials for the turbines. British engines such as the Rolls-Royce Welland used better materials giving improved durability. The Welland was type-certified for 80 hours initially, later extended to 150 hours between overhauls, as a result of an extended 500-hour run being achieved in tests. General Electric in the United States was in a good position to enter the jet engine business due to its experience with the high-temperature materials used in their turbosuperchargers during World War II. Water injection was a common method used to increase thrust, usually during takeoff, in early turbojets that were thrust-limited by their allowable turbine entry temperature. The water increased thrust at the temperature limit, but prevented complete combustion, often leaving a very visible smoke trail. Allowable turbine entry temperatures have increased steadily over time both with the introduction of superior alloys and coatings, and with the introduction and progressive effectiveness of blade cooling designs. On early engines, the turbine temperature limit had to be monitored, and avoided, by the pilot, typically during starting and at maximum thrust settings. Automatic temperature limiting was introduced to reduce pilot workload and reduce the likelihood of turbine damage due to over-temperature. Components Nose bullet A nose bullet is a component of a turbojet used to divert air into the intake, in front of the accessory drive and to house the starter motor. Air intake An intake, or tube, is needed in front of the compressor to help direct the incoming air smoothly into the rotating compressor blades. Older engines had stationary vanes in front of the moving blades. These vanes also helped to direct the air onto the blades. The air flowing into a turbojet engine is always subsonic, regardless of the speed of the aircraft itself. The intake has to supply air to the engine with an acceptably small variation in pressure (known as distortion) and having lost as little energy as possible on the way (known as pressure recovery). The ram pressure rise in the intake is the inlet's contribution to the propulsion system's overall pressure ratio and thermal efficiency. The intake gains prominence at high speeds when it generates more compression than the compressor stage. Well-known examples are the Concorde and Lockheed SR-71 Blackbird propulsion systems where the intake and engine contributions to the total compression were 63%/8% at Mach 2 and 54%/17% at Mach 3+. Intakes have ranged from "zero-length" on the Pratt & Whitney TF33 turbofan installation in the Lockheed C-141 Starlifter, to the twin long, intakes on the North American XB-70 Valkyrie, each feeding three engines with an intake airflow of about . Compressor The turbine rotates the compressor at high speed, adding energy to the airflow while squeezing (compressing) it into a smaller space. 
Compressing the air increases its pressure and temperature. The smaller the compressor, the faster it turns. The (large) GE90-115B fan rotates at about 2,500 RPM, while a small helicopter engine compressor rotates around 50,000 RPM. Turbojets supply bleed air from the compressor to the aircraft for the operation of various sub-systems. Examples include the environmental control system, anti-icing, and fuel tank pressurization. The engine itself needs air at various pressures and flow rates to keep it running. This air comes from the compressor, and without it, the turbines would overheat, the lubricating oil would leak from the bearing cavities, the rotor thrust bearings would skid or be overloaded, and ice would form on the nose cone. The air from the compressor, called secondary air, is used for turbine cooling, bearing cavity sealing, anti-icing, and ensuring that the rotor axial load on its thrust bearing will not wear it out prematurely. Supplying bleed air to the aircraft decreases the efficiency of the engine because it has been compressed, but then does not contribute to producing thrust. Compressor types used in turbojets were typically axial or centrifugal. Early turbojet compressors had low pressure ratios up to about 5:1. Aerodynamic improvements including splitting the compressor into two separately rotating parts, incorporating variable blade angles for entry guide vanes and stators, and bleeding air from the compressor enabled later turbojets to have overall pressure ratios of 15:1 or more. After leaving the compressor, the air enters the combustion chamber. Combustion chamber The burning process in the combustor is significantly different from that in a piston engine. In a piston engine, the burning gases are confined to a small volume, and as the fuel burns, the pressure increases. In a turbojet, the air and fuel mixture burn in the combustor and pass through to the turbine in a continuous flowing process with no pressure build-up. Instead, a small pressure loss occurs in the combustor. The fuel-air mixture can only burn in slow-moving air, so an area of reverse flow is maintained by the fuel nozzles for the approximately stoichiometric burning in the primary zone. Further compressed air is introduced which completes the combustion process and reduces the temperature of the combustion products to a level which the turbine can accept. Less than 25% of the air is typically used for combustion, as an overall lean mixture is required to keep within the turbine temperature limits. Turbine Hot gases leaving the combustor expand through the turbine. Typical materials for turbines include inconel and Nimonic. The hottest turbine vanes and blades in an engine have internal cooling passages. Air from the compressor is passed through these to keep the metal temperature within limits. The remaining stages do not need cooling. In the first stage, the turbine is largely an impulse turbine (similar to a pelton wheel) and rotates because of the impact of the hot gas stream. Later stages are convergent ducts that accelerate the gas. Energy is transferred into the shaft through momentum exchange in the opposite way to energy transfer in the compressor. The power developed by the turbine drives the compressor and accessories, like fuel, oil, and hydraulic pumps that are driven by the accessory gearbox. Nozzle After the turbine, the gases expand through the exhaust nozzle producing a high velocity jet. In a convergent nozzle, the ducting narrows progressively to a throat. 
The nozzle pressure ratio on a turbojet is high enough at higher thrust settings to cause the nozzle to choke. If, however, a convergent-divergent de Laval nozzle is fitted, the divergent (increasing flow area) section allows the gases to reach supersonic velocity within the divergent section. Additional thrust is generated by the higher resulting exhaust velocity. Thrust augmentation Thrust was most commonly increased in turbojets with water/methanol injection or afterburning. Some engines used both methods. Liquid injection was tested on the Power Jets W.1 in 1941, initially using ammonia, before changing to water and then water-methanol. A system to trial the technique in the Gloster E.28/39 was devised but never fitted. Afterburner An afterburner or "reheat jetpipe" is a combustion chamber added to reheat the turbine exhaust gases. The fuel consumption is very high, typically four times that of the main engine. Afterburners are used almost exclusively on supersonic aircraft, most being military aircraft. Two supersonic airliners, Concorde and the Tu-144, also used afterburners, as does Scaled Composites White Knight, a carrier aircraft for the experimental SpaceShipOne suborbital spacecraft. Reheat was flight-trialled in 1944 on the W.2/700 engines in a Gloster Meteor I. Net thrust The net thrust of a turbojet is given by F_N = (ṁ_air + ṁ_f)·V_j − ṁ_air·V, where ṁ_air is the mass flow rate of air through the engine, ṁ_f is the mass flow rate of fuel, V_j is the velocity of the jet (the exhaust plume) and V is the true airspeed of the aircraft. If the speed of the jet is equal to sonic velocity the nozzle is said to be "choked". If the nozzle is choked, the pressure at the nozzle exit plane is greater than atmospheric pressure, and extra terms must be added to the above equation to account for the pressure thrust. The rate of flow of fuel entering the engine is very small compared with the rate of flow of air. If the contribution of fuel to the nozzle gross thrust is ignored, the net thrust is F_N = ṁ_air·(V_j − V). The speed of the jet must exceed the true airspeed of the aircraft if there is to be a net forward thrust on the airframe. The speed can be calculated thermodynamically based on adiabatic expansion. Cycle improvements The operation of a turbojet is modelled approximately by the Brayton cycle. The efficiency of a gas turbine is increased by raising the overall pressure ratio, requiring higher-temperature compressor materials, and raising the turbine entry temperature, requiring better turbine materials and/or improved vane/blade cooling. It is also increased by reducing the losses as the flow progresses from the intake to the propelling nozzle. These losses are quantified by compressor and turbine efficiencies and ducting pressure losses. When used in a turbojet application, where the output from the gas turbine is used in a propelling nozzle, raising the turbine temperature increases the jet velocity. At normal subsonic speeds this reduces the propulsive efficiency, giving an overall loss, as reflected by the higher fuel consumption, or SFC. However, for supersonic aircraft this can be beneficial, and is part of the reason why the Concorde employed turbojets. Turbojets are complex systems, so to secure their optimal function, newer models are being developed with control systems that incorporate the latest knowledge from the field of automation, increasing safety and effectiveness.
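The cycle-improvements discussion above notes that a turbojet is modelled approximately by the Brayton cycle and that efficiency rises with overall pressure ratio. As a rough illustration only, the ideal air-standard Brayton thermal efficiency can be evaluated at the early 5:1 and later 15:1 compressor pressure ratios mentioned in the compressor section. This is a minimal sketch under idealized assumptions (γ = 1.4, no component or duct losses), not a model of any real engine:

# Idealized air-standard Brayton-cycle efficiency: eta = 1 - r ** (-(gamma - 1) / gamma).
# Illustrative sketch only; real turbojets fall below these values because of losses.
GAMMA = 1.4  # ratio of specific heats for air

def ideal_brayton_efficiency(pressure_ratio, gamma=GAMMA):
    return 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)

for r in (5.0, 15.0):  # early vs. later overall pressure ratios cited above
    print(f"pressure ratio {r:g}:1 -> ideal thermal efficiency ~ {ideal_brayton_efficiency(r):.1%}")

Under these idealized assumptions the efficiency works out to roughly 37% at 5:1 and 54% at 15:1, which illustrates why raising the overall pressure ratio was worth the added compressor complexity.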
Technology
Aircraft components
null
181897
https://en.wikipedia.org/wiki/Cliff
Cliff
In geography and geology, a cliff or rock face is an area of rock whose general angle is vertical, or nearly vertical. Cliffs are formed by the processes of weathering and erosion, with the effect of gravity. Cliffs are common on coasts, in mountainous areas, along escarpments and along rivers. Cliffs are usually composed of rock that is resistant to weathering and erosion. The sedimentary rocks that are most likely to form cliffs include sandstone, limestone, chalk, and dolomite. Igneous rocks such as granite and basalt also often form cliffs. An escarpment (or scarp) is a type of cliff formed by the movement of a geologic fault, a landslide, or sometimes by rock slides or falling rocks which change the differential erosion of the rock layers. Most cliffs have some form of scree slope at their base. In arid areas or under high cliffs, they are generally exposed jumbles of fallen rock. In areas of higher moisture, a soil slope may obscure the talus. Many cliffs also feature tributary waterfalls or rock shelters. Sometimes a cliff peters out at the end of a ridge, with mushroom rocks or other types of rock columns remaining. Coastal erosion may lead to the formation of sea cliffs along a receding coastline. The British Ordnance Survey distinguishes between cliffs (a continuous line along the top edge with projections down the face) and outcrops (continuous lines along the lower edge). Etymology Cliff comes from the Old English word clif of essentially the same meaning, cognate with Dutch, Low German, and Old Norse klif 'cliff'. These may in turn all derive from a Romance loanword into Primitive Germanic that has its origins in Latin forms meaning "slope" or "hillside". Large and famous cliffs Given that a cliff does not need to be exactly vertical, there can be ambiguity about whether a given slope is a cliff or not and also about how much of a certain slope to count as a cliff. For example, given a truly vertical rock wall above a very steep slope, one could count just the rock wall or the combination. Listings of cliffs are thus inherently uncertain. Some of the largest cliffs on Earth are found underwater. For example, an 8,000 m drop over a 4,250 m span can be found at a ridge sitting inside the Kermadec Trench. According to some sources, the highest cliff in the world, about 1,340 m high, is the east face of Great Trango in the Karakoram mountains of northern Pakistan. This uses a fairly stringent notion of cliff, as the 1,340 m figure refers to a nearly vertical headwall of two stacked pillars; adding in a very steep approach brings the total drop from the East Face precipice to the nearby Dunge Glacier to nearly 2,000 m. The location of the world's highest sea cliffs depends also on the definition of 'cliff' that is used. Guinness World Records states it is Kalaupapa, Hawaii, at 1,010 m high. Another contender is the north face of Mitre Peak, which drops 1,683 m to Milford Sound, New Zealand. These are subject to a less stringent definition, as the average slope of these cliffs at Kalaupapa is about 1.7, corresponding to an angle of 60 degrees, and Mitre Peak is similar. A more vertical drop into the sea can be found at Maujit Qaqarssuasia (also known as the 'Thumbnail') which is situated in the Torssukátak fjord area at the very tip of South Greenland and drops 1,560 m near-vertically. 
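(As a quick arithmetic check of the figure quoted above, a slope of about 1.7, measured as rise over run, corresponds to an angle of arctan(1.7) ≈ 59.5 degrees, consistent with the roughly 60 degrees stated.)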
Considering a truly vertical drop, Mount Thor on Baffin Island in Arctic Canada is often considered the highest, at 1,370 m (4,500 ft) in total (the top 480 m (1,600 ft) is overhanging), which is said to give it the longest vertical drop on Earth at 1,250 m (4,100 ft). However, other cliffs on Baffin Island, such as Polar Sun Spire in the Sam Ford Fjord, or others in remote areas of Greenland may be higher. The highest cliff in the solar system may be Verona Rupes, an approximately high fault scarp on Miranda, a moon of Uranus. List The following is an incomplete list of cliffs of the world. Africa Above Sea Anaga's Cliffs, Tenerife, Canary Islands, Spain, above Atlantic Ocean Cape Hangklip, Western Cape, South Africa, above False Bay, Atlantic Ocean Cape Point, Western Cape, South Africa, above Atlantic Ocean Chapman's Peak, Western Cape, South Africa, above Atlantic Ocean Karbonkelberg, Cape Town, Western Cape, South Africa, above Hout Bay, Atlantic Ocean Kogelberg, Western Cape, South Africa, above False Bay, Atlantic Ocean Los Gigantes, Tenerife, Canary Islands, Spain, above Atlantic Ocean Table Mountain, Cape Town, Western Cape, South Africa, above Atlantic Ocean Above Land Innumerable peaks in the Drakensberg mountains of South Africa are considered cliff formations. The Drakensberg Range is regarded, together with Ethiopia's Simien Mountains, as one of the two finest erosional mountain ranges on Earth. Because of their near-unique geological formation, the range has an extraordinarily high percentage of cliff faces making up its length, particularly along the highest portion of the range. This portion of the range is virtually uninterrupted cliff faces, ranging from to in height for almost . Of these, the "Drakensberg Amphitheatre" (mentioned above) is the best known. Other notable cliffs include the Trojan Wall, Cleft Peak, Injisuthi Triplets, Cathedral Peak, Monk's Cowl, Mnweni Buttress, etc. The cliff faces of the Blyde River Canyon, technically still part of the Drakensberg, may be over , with the main face of the Swadini Buttress approximately tall. Drakensberg Amphitheatre, South Africa above base, long. The Tugela Falls, the world's second tallest waterfall, falls over the edge of the cliff face. Karambony, Madagascar, above base. Mount Meru, Tanzania Caldera Cliffs, Tsaranoro, Madagascar, above base America North Several big granite faces in the Arctic region vie for the title of 'highest vertical drop on Earth', but reliable measurements are not always available. The possible contenders include (measurements are approximate): Mount Thor, Baffin Island, Canada; 1,370 m (4,500 ft) total; top 480 m (1,600 ft) is overhanging. This is commonly regarded as being the largest vertical drop on Earth at 1,250 m (4,100 ft). The sheer north face of Polar Sun Spire, in the Sam Ford Fjord of Baffin Island, rises 4,300 ft above the flat frozen fjord, although the lower portion of the face breaks from the vertical wall with a series of ledges and buttresses. Ketil's and its neighbor Ulamertorsuaq's west faces in Tasermiut, Greenland have been reported as over 1,000 m high. Another relevant cliff in Greenland is Agdlerussakasit's Thumbnail. Other notable cliffs include: Ättestupan Cliff, northern side of Kaiser Franz Joseph Fjord, Greenland Big Sandy Mountain, east face buttress, Wind River Range, Wyoming, 550 m Calvert Cliffs along the Chesapeake Bay in Maryland, U.S. 
25 m Cap Éternité of Saguenay River, Quebec, Canada, 347 m All faces of Devils Tower, Wyoming, United States, 195 m Doublet Peak, southwest face, Wind River Range, Wyoming, United States, 370 m El Capitan, Yosemite Valley, California, United States; 900 m (3,000 ft) Grand Teton, north face Teton Range, Wyoming Northwest Face of Half Dome, near El Capitan, California, United States; 1,444 m (4,737 ft) total, vertical portion about 610 m (2,000 ft) Longs Peak Diamond, Rocky Mountain National Park, Colorado, United States, 400 m Mount Asgard, Baffin Island, Canada; vertical drop of about 1,200 m (4,000 ft). Mount Siyeh, Glacier National Park (U.S.) north face, The North Face of North Twin Peak, Rocky Mountains, Alberta, Canada, 1,200 m The west face of Notch Peak in the House Range of southwestern Utah, U.S.; a carbonate rock pure vertical drop of about 670 m (2,200 ft), with from the top of the cliff to valley floor (bottom of the canyon below the notch) Painted Wall in Black Canyon of the Gunnison National Park, Colorado, United States; 685 m (2,250 ft) Raftsmen's Acropolis, a rock face of the Montagne des Érables, Quebec, Canada, 800 m Rockwall, Kootenay National Park, British Columbia, Canada, 30 km of mostly unbroken cliffs up to 900 m Royal Gorge cliffs, Colorado, United States, 350 m Faces of Shiprock, New Mexico, United States, 400 m All walls of the Stawamus Chief, Squamish, British Columbia, Canada, up to 500 m Temple Peak, east face, Wind River Range, Wyoming, 400 m Temple Peak East, north face, Wind River Range, Wyoming, 450 m Toroweap (a.k.a. Tuweep), Grand Canyon, Arizona, United States; 900 m (3,000 ft) Uncompahgre Peak, northeast face, San Juan Range, Colorado, 275 m (550 m rise above surrounding plateau) East face of the West Temple in Zion National Park, Utah, United States believed to be the tallest sandstone cliff in the world, 670 m South All faces of Auyan Tepui, along with all other Tepuis, Venezuela, Brazil, and Guyana, Auyan Tepui is about 1,000 m (location of Angel Falls) (the falls are 979 m, the highest in the world) All faces of Cerro Chalten (Fitz Roy), Patagonia, Argentina-Chile, 1200 m All faces of Cerro Torre, Patagonia, Chile-Argentina Pão de Açúcar/Sugar Loaf, Rio de Janeiro, Brazil, 395 m Pared de Gocta, Peru, 771 m Pared Sur Cerro Aconcagua. Las Heras, Mendoza, Argentina, 2,700 m Pedra Azul, Pedra Azul State Park, Espírito Santo, Brazil, 540 m Scratched Stone (Pedra Riscada), São José do Divino/MG, Minas Gerais, Brazil, 1,480 m Faces of the Torres del Paine group, Patagonia, Chile, up to 900 m Asia Above Sea Mont Lesquin, Île de l'Est, Crozet Islands, France, 1012 m above Indian Ocean. Qingshui Cliff, Xiulin Township, Hualien County, Taiwan averaging 800 m above Pacific Ocean. The tallest peak, Qingshui Mountain, rises 2408 m directly from the Pacific Ocean. Ra's Sajir, Oman, above the Arabian Sea Theoprosopon, between Chekka and Selaata in north Lebanon jutting into the Mediterranean. Tōjinbō, Sakai, Fukui prefecture, Japan 25 m above Sea of Japan Above Land Various cliffs in the Ak-Su Valley of Kyrgyzstan are high and steep. 
Baintha Brakk (The Ogre), Panmah Muztagh, Gilgit–Baltistan, Pakistan, 2,000 m Gyala Peri, southeast face, Mêdog County, Tibet, China, 4,600 m Hunza Peak south face, Karakoram, Gilgit–Baltistan, Pakistan, 1,700 m K2 west face, Karakoram, Gilgit–Baltistan, Pakistan, 2,900 m The Latok Group, Panmah Muztagh, Gilgit–Baltistan, Pakistan, 1,800 m Lhotse northeast face, Mahalangur Himal, Nepal, 2,900 m Lhotse south face, Mahalangur Himal, Nepal, 3,200 m Meru Peak, Uttarakhand, India, 1,200 m Nanga Parbat, Rupal Face, Azad Kashmir, Pakistan, 4,600 m Ramon Crater, Israel, 400 m Shispare Sar southwest face, Karakoram, Gilgit–Baltistan, Pakistan, 3,200 m Spantik northwest face, Karakoram, Gilgit–Baltistan, Pakistan, 2,000 m Trango Towers: East Face Great Trango Tower, Baltoro Muztagh, Gilgit–Baltistan, Pakistan, 1,340 m (near vertical headwall), 2,100 m (very steep overall drop from East Summit to Dunge Glacier). Northwest Face drops approximately 2,200 m to the Trango Glacier below, but with a taller slab topped out with a shorter overhanging headwall of approximately 1,000 m. The Southwest "Azeem" Ridge forms the group's tallest steep rise of roughly 2,286 m (7,500 ft) from the Trango Glacier to the Southwest summit. Uli Biaho Towers, Baltoro Glacier, Gilgit–Baltistan, Pakistan Ultar Sar southwest face, Karakoram, Gilgit–Baltistan, Pakistan, 3,000 m World's End, Horton Plains, Nuwara Eliya, Sri Lanka. It has a sheer drop of about 4,000 ft (1,200 m) Various cliffs in Zhangjiajie National Forest Park, Hunan Province, China. The cliffs can get to around 1,000 ft (300 m). Europe Above Sea Beachy Head, England, 162 m above the English Channel Beinisvørð, Faroe Islands, 470 m above North Atlantic Belogradchik Rocks, Bulgaria - up to 200 m high sandstone towers Benwee Head Cliffs, Erris, County Mayo, Ireland, 304 m above Atlantic Ocean Cabo Girão, Madeira, Portugal, 589 m above Atlantic Ocean Cap Canaille, France, 394 m above Mediterranean Sea is the highest sea cliff in France Cape Enniberg, Faroe Islands, 750 m above North Atlantic Conachair, St Kilda, Scotland 427 m above Atlantic Ocean, highest sea cliff in the UK Croaghaun, Achill Island, Ireland, 688 m above Atlantic Ocean Dingli Cliffs, Malta, 250 m above Mediterranean Sea Dvuglav, Rila Mountain, Bulgaria 460 m (south face) Étretat, France, 84 m above the English Channel Faneque, Gran Canaria, Spain, 1,027 m above Atlantic Ocean Hangman cliffs, Devon 318 m above Bristol Channel is the highest sea cliff in England High Cliff, between Boscastle and St Gennys, 223 m above Celtic Sea Hornelen, Norway, 860 m above Skatestraumen Hvanndalabjarg, Ólafsfjörður, Iceland, 630 m above Atlantic Ocean Jaizkibel, Spain, 547 m above the Bay of Biscay Kaliakra cliffs, Bulgaria, more than 70 m above the Black Sea The Kame, Foula, Shetland, 376 m above the North Atlantic, second highest sea cliff in the UK Le Tréport, France, 110 m above the English Channel Cliffs of Moher, Ireland, 217 m above Atlantic Ocean Møns Klint, Denmark, 143 m above Baltic Sea Monte Solaro, Capri, Italy, 589 m above the Mediterranean Sea Ontika Limestone cliff, Estonia, 55 m above Baltic Sea.
Preikestolen, Norway, 604 m above Lysefjorden Slieve League, Ireland, 601 m above Atlantic Ocean Snake Island, Ukraine, 41 m above the Black Sea Vixía Herbeira, Northern Galicia, Spain, 621 m above Atlantic Ocean White cliffs of Dover, England, 100 m above the Strait of Dover Above Land The six great north faces of the Alps (Eiger 1,500 m, Matterhorn 1,350 m, Grandes Jorasses 1,100 m, Petit Dru 1,000 m, Piz Badile 850 m, and Cima Grande di Lavaredo 450 m) Giewont (north face), Tatra Mountains, Poland, 852 m above Polana Strążyska glade Kjerag, Norway 984 m. Mięguszowiecki Szczyt north face rises to 1,043 m above Morskie Oko lake level, High Tatras, Poland Troll Wall, Norway 1,100 m above base Vihren peak north face, Pirin Mountain, Bulgaria, 460 m above the Golemiya Kazan cirque Torre Cerredo west face rises to 2,200 m above Cares river, Picos de Europa, Spain Naranjo de Bulnes west face rises 550 vertical metres above Vega Urriellu, Picos de Europa, Spain Vârful Coștila (Valea Albă wall), Bucegi Mountains, Romania, 450 m vertical cliff and 1,600 m above Bușteni Vratsata, Vrachanski Balkan Nature Park, Bulgaria 400 m Submarine Bouldnor Cliff, in the waters off the coast of the Isle of Wight Oceania Above Sea Ball's Pyramid, a sea stack 562 m high and only 200 m across at its base The Elephant, New Zealand, has cliffs falling approx 1,180 m into Milford Sound, and a 900 m drop in less than 300 m horizontally Great Australian Bight Kalaupapa, Hawaii, 1,010 m above Pacific Ocean The Lion, New Zealand, 1,302 m above Milford Sound (drops from approx 1,280 m to sea level in a very short distance) Lovers Leap, Highcliff, and The Chasm, on Otago Peninsula, New Zealand, all 200 to 300 m above the Pacific Ocean Mitre Peak, New Zealand, 1,683 m above Milford Sound Tasman National Park, Tasmania, has 300 m dolerite sea cliffs dropping directly to the ocean in columnar form The Twelve Apostles (Victoria). A series of sea stacks in Australia, ranging from approximately 50 to 70 m above the Bass Strait Zuytdorp Cliffs in Western Australia Above Land Mount Banks in the Blue Mountains National Park, New South Wales, Australia: west of its saddle there is a 490 m fall within 100 m horizontally. As habitat Cliff landforms provide unique habitat niches to a variety of plants and animals, whose preferences and needs are suited by the vertical geometry of this landform type. For example, a number of birds have decided affinities for choosing cliff locations for nesting, often driven by the defensibility of these locations as well as absence of certain predators. Humans have also inhabited cliff dwellings. Flora As of 2012, the population of the rare Borderea chouardii existed in only two cliff habitats within western Europe.
Physical sciences
Fluvial landforms
null
181905
https://en.wikipedia.org/wiki/Escarpment
Escarpment
An escarpment is a steep slope or long cliff that forms as a result of faulting or erosion and separates two relatively level areas having different elevations. The terms scarp and scarp face are often used interchangeably with escarpment. Some sources differentiate the two terms, with escarpment referring to the margin between two landforms, and scarp referring to a cliff or a steep slope. In this usage an escarpment is a ridge which has a gentle slope on one side and a steep scarp on the other side. More loosely, the term scarp also describes a zone between a coastal lowland and a continental plateau which shows a marked, abrupt change in elevation caused by coastal erosion at the base of the plateau. Formation and description Scarps are generally formed by one of two processes: either by differential erosion of sedimentary rocks, or by movement of the Earth's crust at a geologic fault. The first process is the more common type: the escarpment is a transition from one series of sedimentary rocks to another series of a different age and composition. Escarpments are also frequently formed by faults. When a fault displaces the ground surface so that one side is higher than the other, a fault scarp is created. This can occur in dip-slip faults, or when a strike-slip fault brings a piece of high ground adjacent to an area of lower ground. Earth is not the only planet where escarpments occur. They are believed to occur on other planets when the crust contracts, as a result of cooling. On other Solar System bodies such as Mercury, Mars, and the Moon, the Latin term rupes is used for an escarpment. Erosion When sedimentary beds are tilted and exposed to the surface, erosion and weathering may occur. Escarpments erode gradually and over geological time. The mélange tendencies of escarpments result in varying contacts between a multitude of rock types. These different rock types weather at different speeds, according to the Goldich dissolution series, so different stages of deformation can often be seen in the layers where the escarpments have been exposed to the elements.
Physical sciences
Landforms: General
Earth science
181983
https://en.wikipedia.org/wiki/Fermi%20gas
Fermi gas
A Fermi gas is an idealized model, an ensemble of many non-interacting fermions. Fermions are particles that obey Fermi–Dirac statistics, like electrons, protons, and neutrons, and, in general, particles with half-integer spin. These statistics determine the energy distribution of fermions in a Fermi gas in thermal equilibrium, which is characterized by their number density, temperature, and the set of available energy states. The model is named after the Italian physicist Enrico Fermi. This physical model is useful for certain systems with many fermions. Some key examples are the behaviour of charge carriers in a metal, nucleons in an atomic nucleus, neutrons in a neutron star, and electrons in a white dwarf. Description An ideal Fermi gas or free Fermi gas is a physical model assuming a collection of non-interacting fermions in a constant potential well. Fermions are elementary or composite particles with half-integer spin, and thus follow Fermi–Dirac statistics. The equivalent model for integer spin particles is called the Bose gas (an ensemble of non-interacting bosons). At low enough particle number density and high temperature, both the Fermi gas and the Bose gas behave like a classical ideal gas. By the Pauli exclusion principle, no quantum state can be occupied by more than one fermion with an identical set of quantum numbers. Thus a non-interacting Fermi gas, unlike a Bose gas, places only a small number of particles in each energy level. Thus a Fermi gas is prohibited from condensing into a Bose–Einstein condensate, although weakly interacting Fermi gases may form Cooper pairs and condense (a regime also known as the BCS–BEC crossover). The total energy of the Fermi gas at absolute zero is larger than the sum of the single-particle ground states because the Pauli principle implies a sort of interaction or pressure that keeps fermions separated and moving. For this reason, the pressure of a Fermi gas is non-zero even at zero temperature, in contrast to that of a classical ideal gas. For example, this so-called degeneracy pressure stabilizes a neutron star (a Fermi gas of neutrons) or a white dwarf star (a Fermi gas of electrons) against the inward pull of gravity, which would ostensibly collapse the star into a black hole. Only when a star is sufficiently massive to overcome the degeneracy pressure can it collapse into a singularity. It is possible to define a Fermi temperature below which the gas can be considered degenerate (its pressure derives almost exclusively from the Pauli principle). This temperature depends on the mass of the fermions and the density of energy states. The main assumption of the free electron model to describe the delocalized electrons in a metal can be derived from the Fermi gas. Since interactions are neglected due to the screening effect, the problem of treating the equilibrium properties and dynamics of an ideal Fermi gas reduces to the study of the behaviour of single independent particles. In these systems the Fermi temperature is generally many thousands of kelvins, so in most practical applications the electron gas can be considered degenerate. The maximum energy of the fermions at zero temperature is called the Fermi energy. The Fermi energy surface in reciprocal space is known as the Fermi surface. The nearly free electron model adapts the Fermi gas model to consider the crystal structure of metals and semiconductors, where electrons in a crystal lattice are substituted by Bloch electrons with a corresponding crystal momentum.
As such, periodic systems are still relatively tractable and the model forms the starting point for more advanced theories that deal with interactions, e.g. using the perturbation theory. 1D uniform gas The one-dimensional infinite square well of length L is a model for a one-dimensional box with the potential energy: It is a standard model-system in quantum mechanics for which the solution for a single particle is well known. Since the potential inside the box is uniform, this model is referred to as 1D uniform gas, even though the actual number density profile of the gas can have nodes and anti-nodes when the total number of particles is small. The levels are labelled by a single quantum number n and the energies are given by: where is the zero-point energy (which can be chosen arbitrarily as a form of gauge fixing), the mass of a single fermion, and is the reduced Planck constant. For N fermions with spin- in the box, no more than two particles can have the same energy, i.e., two particles can have the energy of , two other particles can have energy and so forth. The two particles of the same energy have spin (spin up) or − (spin down), leading to two states for each energy level. In the configuration for which the total energy is lowest (the ground state), all the energy levels up to n = N/2 are occupied and all the higher levels are empty. Defining the reference for the Fermi energy to be , the Fermi energy is therefore given by where is the floor function evaluated at n = N/2. Thermodynamic limit In the thermodynamic limit, the total number of particles N are so large that the quantum number n may be treated as a continuous variable. In this case, the overall number density profile in the box is indeed uniform. The number of quantum states in the range is: Without loss of generality, the zero-point energy is chosen to be zero, with the following result: Therefore, in the range: the number of quantum states is: Here, the degree of degeneracy is: And the density of states is: In modern literature, the above is sometimes also called the "density of states". However, differs from by a factor of the system's volume (which is in this 1D case). Based on the following formula: the Fermi energy in the thermodynamic limit can be calculated to be: 3D uniform gas The three-dimensional isotropic and non-relativistic uniform Fermi gas case is known as the Fermi sphere. A three-dimensional infinite square well, (i.e. a cubical box that has a side length L) has the potential energy The states are now labelled by three quantum numbers nx, ny, and nz. The single particle energies are where nx, ny, nz are positive integers. In this case, multiple states have the same energy (known as degenerate energy levels), for example . Thermodynamic limit When the box contains N non-interacting fermions of spin-, it is interesting to calculate the energy in the thermodynamic limit, where N is so large that the quantum numbers nx, ny, nz can be treated as continuous variables. With the vector , each quantum state corresponds to a point in 'n-space' with energy With denoting the square of the usual Euclidean length . The number of states with energy less than EF + E0 is equal to the number of states that lie within a sphere of radius in the region of n-space where nx, ny, nz are positive. 
In the ground state this number equals the number of fermions in the system: The factor of two expresses the two spin states, and the factor of 1/8 expresses the fraction of the sphere that lies in the region where all n are positive. The Fermi energy is given by Which results in a relationship between the Fermi energy and the number of particles per volume (when L² is replaced with V^(2/3)): This is also the energy of the highest-energy particle (the th particle), above the zero point energy . The th particle has an energy of The total energy of a Fermi sphere of fermions (which occupy all energy states within the Fermi sphere) is given by: Therefore, the average energy per particle is given by: Density of states For the 3D uniform Fermi gas, with fermions of spin-, the number of particles as a function of the energy is obtained by substituting the Fermi energy by a variable energy : from which the density of states (number of energy states per energy per volume) can be obtained. It can be calculated by differentiating the number of particles with respect to the energy: This result provides an alternative way to calculate the total energy of a Fermi sphere of fermions (which occupy all energy states within the Fermi sphere): Thermodynamic quantities Degeneracy pressure By using the first law of thermodynamics, this internal energy can be expressed as a pressure, that is where this expression remains valid for temperatures much smaller than the Fermi temperature. This pressure is known as the degeneracy pressure. In this sense, systems composed of fermions are also referred to as degenerate matter. Standard stars avoid collapse by balancing thermal pressure (plasma and radiation) against gravitational forces. At the end of a star's lifetime, when thermal processes are weaker, some stars may become white dwarfs, which are only sustained against gravity by electron degeneracy pressure. Using the Fermi gas as a model, it is possible to calculate the Chandrasekhar limit, i.e. the maximum mass any star may acquire (without significant thermally generated pressure) before collapsing into a black hole or a neutron star. The latter is a star mainly composed of neutrons, where the collapse is also avoided by neutron degeneracy pressure. For the case of metals, the electron degeneracy pressure contributes to the compressibility or bulk modulus of the material. Chemical potential Assuming that the concentration of fermions does not change with temperature, the total chemical potential μ (Fermi level) of the three-dimensional ideal Fermi gas is related to the zero temperature Fermi energy EF by a Sommerfeld expansion (assuming ): where T is the temperature. Hence, the internal chemical potential, μ-E0, is approximately equal to the Fermi energy at temperatures that are much lower than the characteristic Fermi temperature TF. This characteristic temperature is on the order of 10⁵ K for a metal, hence at room temperature (300 K), the Fermi energy and internal chemical potential are essentially equivalent. Typical values Metals Under the free electron model, the electrons in a metal can be considered to form a uniform Fermi gas. The number density of conduction electrons in metals ranges between approximately 10²⁸ and 10²⁹ electrons per m³, which is also the typical density of atoms in ordinary solid matter. This number density produces a Fermi energy of the order: where me is the electron rest mass.
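To make these orders of magnitude concrete, the Fermi energy and Fermi temperature of a 3D electron gas can be evaluated numerically from E_F = (ħ²/2mₑ)(3π²n)^(2/3). The short Python sketch below is purely illustrative; the electron density of 8.5 × 10²⁸ m⁻³ is an assumed value, roughly that of a simple metal such as copper, and is not a figure taken from this article.

import math

# Physical constants (SI units)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e  = 9.1093837015e-31  # electron mass, kg
k_B  = 1.380649e-23      # Boltzmann constant, J/K
eV   = 1.602176634e-19   # joules per electron-volt

# Assumed illustrative conduction-electron density (roughly that of copper)
n = 8.5e28               # electrons per m^3

# 3D ideal Fermi gas: E_F = (hbar^2 / 2 m_e) * (3 pi^2 n)^(2/3)
E_F = (hbar**2 / (2 * m_e)) * (3 * math.pi**2 * n) ** (2 / 3)
T_F = E_F / k_B          # corresponding Fermi temperature

print(f"Fermi energy      ~ {E_F / eV:.1f} eV")   # ~7 eV
print(f"Fermi temperature ~ {T_F:.2e} K")         # ~8e4 K, i.e. of order 10^5 K

Running this gives a Fermi energy of roughly 7 eV and a Fermi temperature of roughly 8 × 10⁴ K, consistent with the orders of magnitude quoted in this section.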
This Fermi energy corresponds to a Fermi temperature of the order of 10⁵ kelvins, much higher than the temperature of the Sun's surface. Any metal will boil before reaching this temperature under atmospheric pressure. Thus for any practical purpose, a metal can be considered as a Fermi gas at zero temperature as a first approximation (normal temperatures are small compared to TF). White dwarfs Stars known as white dwarfs have mass comparable to the Sun, but have about a hundredth of its radius. The high densities mean that the electrons are no longer bound to single nuclei and instead form a degenerate electron gas. The number density of electrons in a white dwarf is of the order of 10³⁶ electrons/m³. This means their Fermi energy is: Nucleus Another typical example is that of the particles in a nucleus of an atom. The radius of the nucleus is roughly: where A is the number of nucleons. The number density of nucleons in a nucleus is therefore: This density must be divided by two, because the Fermi energy only applies to fermions of the same type. The presence of neutrons does not affect the Fermi energy of the protons in the nucleus, and vice versa. The Fermi energy of a nucleus is approximately: where mp is the proton mass. The radius of the nucleus admits deviations around the value mentioned above, so a typical value for the Fermi energy is usually given as 38 MeV. Arbitrary-dimensional uniform gas Density of states Using a volume integral on dimensions, the density of states is: The Fermi energy is obtained by looking for the number density of particles: To get: where is the corresponding d-dimensional volume, is the dimension for the internal Hilbert space. For the case of spin-, every energy is twice-degenerate, so in this case . A particular result is obtained for , where the density of states becomes a constant (does not depend on the energy): Fermi gas in harmonic trap The harmonic trap potential: is a model system with many applications in modern physics. The density of states (or more accurately, the degree of degeneracy) for a given spin species is: where is the harmonic oscillation frequency. The Fermi energy for a given spin species is: Related Fermi quantities Related to the Fermi energy, a few useful quantities also occur often in modern literature. The Fermi temperature is defined as , where is the Boltzmann constant. The Fermi temperature can be thought of as the temperature at which thermal effects are comparable to quantum effects associated with Fermi statistics. The Fermi temperature for a metal is a couple of orders of magnitude above room temperature. Other quantities defined in this context are Fermi momentum , and Fermi velocity , which are the momentum and group velocity, respectively, of a fermion at the Fermi surface. The Fermi momentum can also be described as , where is the radius of the Fermi sphere and is called the Fermi wave vector. Note that these quantities are not well-defined in cases where the Fermi surface is non-spherical. Treatment at finite temperature Grand canonical ensemble Most of the calculations above are exact at zero temperature, yet remain as good approximations for temperatures lower than the Fermi temperature. For other thermodynamic variables it is necessary to write a thermodynamic potential. For an ensemble of identical fermions, the best way to derive a potential is from the grand canonical ensemble with fixed temperature, volume and chemical potential μ.
The reason is due to Pauli exclusion principle, as the occupation numbers of each quantum state are given by either 1 or 0 (either there is an electron occupying the state or not), so the (grand) partition function can be written as where , indexes the ensembles of all possible microstates that give the same total energy and number of particles , is the single particle energy of the state (it counts twice if the energy of the state is degenerate) and , its occupancy. Thus the grand potential is written as The same result can be obtained in the canonical and microcanonical ensemble, as the result of every ensemble must give the same value at thermodynamic limit . The grand canonical ensemble is recommended here as it avoids the use of combinatorics and factorials. As explored in previous sections, in the macroscopic limit we may use a continuous approximation (Thomas–Fermi approximation) to convert this sum to an integral: where is the total density of states. Relation to Fermi–Dirac distribution The grand potential is related to the number of particles at finite temperature in the following way where the derivative is taken at fixed temperature and volume, and it appears also known as the Fermi–Dirac distribution. Similarly, the total internal energy is Exact solution for power-law density-of-states Many systems of interest have a total density of states with the power-law form: for some values of , , . The results of preceding sections generalize to dimensions, giving a power law with: for non-relativistic particles in a -dimensional box, for non-relativistic particles in a -dimensional harmonic potential well, for hyper-relativistic particles in a -dimensional box. For such a power-law density of states, the grand potential integral evaluates exactly to: where is the complete Fermi–Dirac integral (related to the polylogarithm). From this grand potential and its derivatives, all thermodynamic quantities of interest can be recovered. Extensions to the model Relativistic Fermi gas The article has only treated the case in which particles have a parabolic relation between energy and momentum, as is the case in non-relativistic mechanics. For particles with energies close to their respective rest mass, the equations of special relativity are applicable. Where single-particle energy is given by: For this system, the Fermi energy is given by: where the equality is only valid in the ultrarelativistic limit, and The relativistic Fermi gas model is also used for the description of massive white dwarfs which are close to the Chandrasekhar limit. For the ultrarelativistic case, the degeneracy pressure is proportional to . Fermi liquid In 1956, Lev Landau developed the Fermi liquid theory, where he treated the case of a Fermi liquid, i.e., a system with repulsive, not necessarily small, interactions between fermions. The theory shows that the thermodynamic properties of an ideal Fermi gas and a Fermi liquid do not differ that much. It can be shown that the Fermi liquid is equivalent to a Fermi gas composed of collective excitations or quasiparticles, each with a different effective mass and magnetic moment.
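As a simple numerical companion to the finite-temperature treatment above, the sketch below evaluates the Fermi–Dirac occupation 1/(exp((ε − μ)/k_BT) + 1), approximating the chemical potential by the Fermi energy, which is valid for temperatures well below the Fermi temperature. The Fermi energy of 7 eV used here is an assumed, metal-like value chosen only for illustration.

import math

k_B_eV = 8.617333262e-5   # Boltzmann constant in eV/K

def fermi_dirac(energy_eV, mu_eV, T_kelvin):
    """Mean occupation of a single-particle state at the given energy."""
    if T_kelvin == 0:
        # Step function at T = 0 (the value exactly at mu is a matter of convention)
        return 1.0 if energy_eV < mu_eV else 0.0
    x = (energy_eV - mu_eV) / (k_B_eV * T_kelvin)
    return 1.0 / (math.exp(x) + 1.0)

E_F = 7.0   # assumed metal-like Fermi energy, eV
for T in (0, 300, 30000):
    occ = [fermi_dirac(e, E_F, T) for e in (6.5, 7.0, 7.5)]
    print(T, ["%.3f" % o for o in occ])

At room temperature the occupation is still essentially a step function, which is why the zero-temperature results of the earlier sections remain good approximations for electrons in metals.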
Physical sciences
States of matter
Physics
182051
https://en.wikipedia.org/wiki/Cave%20bear
Cave bear
The cave bear (Ursus spelaeus) is a prehistoric species of bear that lived in Europe and Asia during the Pleistocene and became extinct about 24,000 years ago during the Last Glacial Maximum. Both the word cave and the scientific name spelaeus are used because fossils of this species were mostly found in caves. This reflects the views of experts that cave bears may have spent more time in caves than the brown bear, which uses caves only for hibernation. It is thought to have been largely herbivorous. Taxonomy Cave bear skeletons were first described in 1774 by Johann Friedrich Esper, in his book Newly Discovered Zoolites of Unknown Four Footed Animals. While scientists at the time considered that the skeletons could belong to apes, canids, felids, or even dragons or unicorns, Esper postulated that they actually belonged to polar bears. Twenty years later, Johann Christian Rosenmüller, an anatomist at Leipzig University, gave the species its binomial name. The bones were so numerous that most researchers had little regard for them. During World War I, with the scarcity of phosphate dung, earth from the caves where cave bear bones occurred was used as a source of phosphates. When the "dragon caves" in Austria’s Styria region were exploited for this purpose, only the skulls and leg bones were kept. Many caves in Central Europe have skeletons of cave bears inside, such as the Heinrichshöhle in Hemer and the Dechenhöhle in Iserlohn, Germany. A complete skeleton, five complete skulls, and 18 other bones were found inside Kletno Bear Cave, in 1966 in Poland. In Romania, in a cave called Bears' Cave, 140 cave bear skeletons were discovered in 1983. Cave bear bones are found in several caves in the country of Georgia. In 2021, Akaki Tsereteli State University's students and a lecturer discovered two complete cave bear skulls, with molars, canines, humerus, three vertebrae and other bones, in a previously unexplored cave. Evolution Both the cave bear and the brown bear are thought to be descended from the Plio-Pleistocene Etruscan bear (Ursus etruscus) that lived about 5.3 Mya to 100,000 years ago. The last common ancestor of cave bears and brown bears lived between 1.2–1.4 Mya. The immediate precursor of the cave bear was probably Ursus deningeri (Deninger's bear), a species restricted to Pleistocene Europe about 1.8 Mya to 100,000 years ago. The transition between Deninger's bear and the cave bear is given as the last interglacial, although the boundary between these forms is arbitrary, and intermediate or transitional taxa have been proposed, e.g. Ursus spelaeus deningeroides, while other authorities consider both taxa to be chronological variants of the same species. Cave bears found anywhere will vary in age, thus facilitating investigations into evolutionary trends. The three anterior premolars were gradually reduced, then disappeared, possibly in response to a largely vegetarian diet. In a fourth of the skulls found in the Conturines, the third premolar is still present, while more derived specimens elsewhere lack it. The last remaining premolar became conjugated with the true molars, enlarging the crown and granting it more cusps and cutting borders. This phenomenon, called molarization, improved the mastication capacities of the molars, facilitating the processing of tough vegetation. This allowed the cave bear to gain more energy for hibernation, while eating less than its ancestors. 
In 2005, scientists recovered and sequenced the nuclear DNA of a cave bear that lived between 42,000 and 44,000 years ago. The procedure used genomic DNA extracted from one of the animal's teeth. Sequencing the DNA directly (rather than first replicating it with the polymerase chain reaction), the scientists recovered 21 cave bear genes from remains that did not yield significant amounts of DNA with traditional techniques. This study confirmed and built on results from a previous study using mitochondrial DNA extracted from cave bear remains ranging from 20,000 to 130,000 years old. Both show that the cave bear was more closely related to the brown bear and polar bear than it was to the American black bear, but had split from the brown bear lineage before the distinct eastern and western brown bear lineages diversified, and before the split of brown bears and polar bears. The divergence date estimate of cave bears and brown bears is about 1.2–1.4 Mya. However, a recent study showed that both species had some hybridization between them. Description The cave bear had a very broad, domed skull with a steep forehead; its stout body had long thighs, massive shins and in-turning feet, making it similar in skeletal structure to the brown bear. Cave bears were comparable in size to, or larger than, the largest modern-day bears, measuring up to in length. The average weight for males was , while females weighed . Of cave bear skeletons in museums, 90% are classified as male due to a misconception that the female skeletons were merely "dwarfs". Cave bears grew larger during glaciations and smaller during interglacials, probably to adjust heat loss rate. Cave bears of the last Ice Age lacked the usual two or three premolars present in other bears; to compensate, the last molar is very elongated, with supplementary cusps. The humerus of the cave bear was similar in size to that of the polar bear, as were the femora of females. The femora of male cave bears, however, bore more similarities in size to those of Kodiak bears. Behaviour Dietary habits Cave bear teeth were very large and show greater wear than most modern bear species, suggesting a diet of tough materials. However, tubers and other gritty food, which cause distinctive tooth wear in modern brown bears, do not appear to have constituted a major part of cave bears' diets on the basis of dental microwear analysis. Seed fruits are documented to have been consumed by cave bears. The morphological features of the cave bear chewing apparatus, including loss of premolars, have long been suggested to indicate their diets displayed a higher degree of herbivory than the Eurasian brown bear. Indeed, a solely vegetarian diet has been inferred on the basis of tooth morphology. Results obtained on the stable isotopes of cave bear bones also point to a largely vegetarian diet in having low levels of nitrogen-15 and carbon-13, which are accumulated at a faster rate by carnivores as opposed to herbivores. However, some evidence points toward the occasional inclusion of animal protein in cave bear diets. For example, toothmarks on cave bear remains in areas where cave bears are the only recorded potential carnivores suggests occasional cannibalistic scavenging, possibly on individuals that died during hibernation, and dental microwear analysis indicates the cave bear may have fed on a greater quantity of bone than its contemporary, the smaller Eurasian brown bear. 
The dental microwear patterns of cave bear molars from the northeastern Iberian Peninsula show that cave bears may have consumed more meat in the days and weeks leading up to hibernation. Additionally, cave bear remains from Peștera cu Oase in the southwestern tip of the Romanian part of the Carpathian Mountains had elevated levels of nitrogen-15 in their bones, indicative of omnivorous diets, although the values are within the range of those found for the strictly herbivorous mammoth. One isotopic study concluded that cave bears displayed omnivorous habits similar to those of modern brown bears. Although the current prevailing opinion concludes that cave bears were largely herbivorous, and more so than any modern species of the genus Ursus, increasing evidence points to omnivorous diets, based both on regional variability of isotopic composition of bone remains indicative of dietary plasticity, and on a recent re-evaluation of craniodental morphology that places the cave bear squarely among omnivorous modern bear species with respect to its skull and tooth shapes. Mortality Death during hibernation was a common end for cave bears, mainly befalling specimens that failed ecologically during the summer season through inexperience, sickness or old age. Some cave bear bones show signs of numerous ailments, including spinal fusion, bone tumours, cavities, tooth resorption, necrosis (particularly in younger specimens), osteomyelitis, periostitis, rickets and kidney stones. There is also evidence that cave bears suffered from tuberculosis. Male cave bear skeletons have been found with broken bacula, probably due to fighting during the breeding season. Cave bear longevity is unknown, though it has been estimated that they seldom exceeded twenty years of age. Paleontologists doubt adult cave bears had any natural predators, save for pack-hunting wolves and cave hyenas, which would probably have attacked sick or infirm individuals. Cave hyenas are thought to be responsible for the disarticulation and destruction of some cave bear skeletons. Such large carcasses were an optimal food resource for the hyenas, especially at the end of the winter, when food was scarce. The presence of fully articulated adult cave lion skeletons, deep in cave bear dens, indicates the lions may have occasionally entered dens to prey on hibernating cave bears, with some dying in the attempt. Range and habitat The cave bear's range stretched across Europe; from Spain and the British Isles in the west, Belgium, Italy, parts of Germany, Poland, the Balkans, Romania, Georgia, and parts of Russia, including the Caucasus; and northern Iran. No traces of cave bears have been found in the northern British Isles, Scandinavia or the Baltic countries, which were all covered in extensive glaciers at the time. The largest numbers of cave bear remains have been found in Austria, Switzerland, northern Italy, northern Spain, southern France, and Romania, roughly corresponding with the Pyrenees, Alps, and Carpathians. The huge number of bones found in southern, central and eastern Europe has led some scientists to think Europe may have once had herds of cave bears. Others, however, point out that, though some caves have thousands of bones, they were accumulated over a period of 100,000 years or more, thus requiring only two deaths in a cave per year to account for the large numbers. The cave bear inhabited low mountainous areas, especially in regions rich in limestone caves. 
They seem to have avoided open plains, preferring forested or forest-edged terrains. Relationship with humans Between the years 1917 and 1923, the Drachenloch cave in Switzerland was excavated by Emil Bächler. The excavation uncovered more than 30,000 cave bear skeletons. It also uncovered a stone chest or cist, consisting of a low wall built from limestone slabs near a cave wall with a number of bear skulls inside it. A cave bear skull was also found with a femur bone from another bear stuck inside it. Scholars speculated that it was proof of prehistoric human religious rites involving the cave bear, or that the Drachenloch cave bears were hunted as part of a hunting ritual, or that the skulls were kept as trophies. In Archaeology, Religion, Ritual (2004), archaeologist Timothy Insoll strongly questions whether the Drachenloch finds in the stone cist were the result of human interaction. Insoll states that the evidence for religious practices involving cave bears in this time period is "far from convincing". Insoll also states that comparisons with the religious practices involving bears that are known from historic times are invalid. A similar phenomenon was encountered in Regourdou, southern France. A rectangular pit contained the remains of at least twenty bears, covered by a massive stone slab. The remains of a Neanderthal lay nearby in another stone pit, with various objects, including a bear humerus, a scraper, a core, and some flakes, which were interpreted as grave offerings. An unusual discovery in a deep chamber of Basura Cave in Savona, Italy, is thought to be related to cave bear worship, because there is a vaguely zoomorphic stalagmite surrounded by clay pellets. It is thought to have been used by Neanderthals for a ceremony; bear bones scattered on the floor further suggest it was likely to have had some sort of ritual purpose. Extinction Reassessment of fossils in 2019 indicates that the cave bear probably died out 24,000 years ago. A complex set of factors, rather than a single factor, is suggested to have led to the extinction. Compared with other megafaunal species that also became extinct during the Last Glacial Maximum, the cave bear was believed to have had a more specialized diet of high-quality plants and a relatively restricted geographical range. This was suggested as an explanation as to why it died out so much earlier than the rest. Some experts have disputed this claim, as the cave bear had survived multiple climate changes prior to extinction. Additionally, mitochondrial DNA research indicated that the genetic decline of the cave bear began long before it became extinct, demonstrating that habitat loss due to climate change was not responsible. Finally, high δ¹⁵N levels were found in cave bear bones from Romania, indicating wider dietary possibilities than previously believed. Some evidence indicates that the cave bear used only caves for hibernation and was not inclined to use other locations, such as thickets, for this purpose, in contrast to the more versatile brown bear. This specialized hibernation behavior would have caused a high winter mortality rate for cave bears that failed to find available caves. Therefore, as human populations slowly increased, the cave bear faced a shrinking pool of suitable caves, and slowly faded away to extinction, as both Neanderthals and anatomically modern humans sought out caves as living quarters, depriving the cave bear of vital habitat. This hypothesis is still being researched.
According to a research study published in the journal Molecular Biology and Evolution, radiocarbon dating of the fossil remains shows that the cave bear ceased to be abundant in Central Europe around 35,000 years ago. In addition to environmental change, human hunting has also been implicated in the ultimate extinction of the cave bear. In 2019, the results of a large-scale study of cave bear mitochondrial DNA, based on 81 bone specimens (yielding 59 new sequences) and 64 previously published complete mitochondrial genomes from remains found in Switzerland, Poland, France, Spain, Germany, Italy and Serbia, indicated that the cave bear population drastically declined starting around 40,000 years ago at the onset of the Aurignacian, coinciding with the arrival of anatomically modern humans. It was concluded that human hunting and/or competition played a major role in their decline and ultimate disappearance, and that climate change was not likely to have been the dominant factor. In a study of Spanish cave bear mtDNA, each cave used by cave bears was found to contain almost exclusively a unique lineage of closely related haplotypes, indicating a homing behaviour for birthing and hibernation. The conclusion of this study is that cave bears could not easily colonize new sites when in competition with humans for these resources. Overhunting by humans has been dismissed by some, as human populations at the time were too small to pose a serious threat to the cave bear's survival. However, the two species may have competed for living space in caves. The Chauvet Cave contains around 300 "bear hollows" created by cave bear hibernation. Unlike brown bears, cave bears are seldom represented in cave paintings, leading some experts to believe that the cave bear may have been avoided by human hunters or that their habitat preferences may not have overlapped. Paleontologist Björn Kurtén hypothesized that cave bear populations were fragmented and under stress even before the advent of the glaciers. Populations living south of the Alps possibly survived significantly longer.
Biology and health sciences
Bears
Animals
182146
https://en.wikipedia.org/wiki/Orbital%20mechanics
Orbital mechanics
Orbital mechanics or astrodynamics is the application of ballistics and celestial mechanics to the practical problems concerning the motion of rockets, satellites, and other spacecraft. The motion of these objects is usually calculated from Newton's laws of motion and the law of universal gravitation. Orbital mechanics is a core discipline within space-mission design and control. Celestial mechanics treats more broadly the orbital dynamics of systems under the influence of gravity, including both spacecraft and natural astronomical bodies such as star systems, planets, moons, and comets. Orbital mechanics focuses on spacecraft trajectories, including orbital maneuvers, orbital plane changes, and interplanetary transfers, and is used by mission planners to predict the results of propulsive maneuvers. General relativity is a more exact theory than Newton's laws for calculating orbits, and it is sometimes necessary to use it for greater accuracy or in high-gravity situations (e.g. orbits near the Sun). History Until the rise of space travel in the twentieth century, there was little distinction between orbital and celestial mechanics. At the time of Sputnik, the field was termed 'space dynamics'. The fundamental techniques, such as those used to solve the Keplerian problem (determining position as a function of time), are therefore the same in both fields. Furthermore, the history of the fields is almost entirely shared. Johannes Kepler was the first to successfully model planetary orbits to a high degree of accuracy, publishing his first two laws in 1609 and the third in 1619. Isaac Newton published more general laws of celestial motion in the first edition of Philosophiæ Naturalis Principia Mathematica (1687), which gave a method for finding the orbit of a body following a parabolic path from three observations. This was used by Edmund Halley to establish the orbits of various comets, including that which bears his name. Newton's method of successive approximation was formalised into an analytic method by Leonhard Euler in 1744, whose work was in turn generalised to elliptical and hyperbolic orbits by Johann Lambert in 1761–1777. Another milestone in orbit determination was Carl Friedrich Gauss's assistance in the "recovery" of the dwarf planet Ceres in 1801. Gauss's method was able to use just three observations (in the form of pairs of right ascension and declination) to find the six orbital elements that completely describe an orbit. The theory of orbit determination has subsequently been developed to the point where today it is applied in GPS receivers as well as the tracking and cataloguing of newly observed minor planets. Modern orbit determination and prediction are used to operate all types of satellites and space probes, as it is necessary to know their future positions to a high degree of accuracy. Astrodynamics was developed by astronomer Samuel Herrick beginning in the 1930s. He consulted the rocket scientist Robert Goddard and was encouraged to continue his work on space navigation techniques, as Goddard believed they would be needed in the future. Numerical techniques of astrodynamics were coupled with new powerful computers in the 1960s, and humans were ready to travel to the Moon and return. Practical techniques Rules of thumb The following rules of thumb are useful for situations approximated by classical mechanics under the standard assumptions of astrodynamics outlined below.
The specific example discussed is of a satellite orbiting a planet, but the rules of thumb could also apply to other situations, such as orbits of small bodies around a star such as the Sun. Kepler's laws of planetary motion: Orbits are elliptical, with the heavier body at one focus of the ellipse. A special case of this is a circular orbit (a circle is a special case of ellipse) with the planet at the center. A line drawn from the planet to the satellite sweeps out equal areas in equal times no matter which portion of the orbit is measured. The square of a satellite's orbital period is proportional to the cube of its average distance from the planet. Without applying force (such as firing a rocket engine), the period and shape of the satellite's orbit will not change. A satellite in a low orbit (or a low part of an elliptical orbit) moves more quickly with respect to the surface of the planet than a satellite in a higher orbit (or a high part of an elliptical orbit), due to the stronger gravitational attraction closer to the planet. If thrust is applied at only one point in the satellite's orbit, it will return to that same point on each subsequent orbit, though the rest of its path will change. Thus one cannot move from one circular orbit to another with only one brief application of thrust. From a circular orbit, thrust applied in a direction opposite to the satellite's motion changes the orbit to an elliptical one; the satellite will descend and reach the lowest orbital point (the periapse) at 180 degrees away from the firing point; then it will ascend back. The period of the resultant orbit will be less than that of the original circular orbit. Thrust applied in the direction of the satellite's motion creates an elliptical orbit with its highest point (apoapse) 180 degrees away from the firing point. The period of the resultant orbit will be longer than that of the original circular orbit. The consequences of the rules of orbital mechanics are sometimes counter-intuitive. For example, if two spacecrafts are in the same circular orbit and wish to dock, unless they are very close, the trailing craft cannot simply fire its engines to go faster. This will change the shape of its orbit, causing it to gain altitude and actually slow down relative to the leading craft, missing the target. The space rendezvous before docking normally takes multiple precisely calculated engine firings in multiple orbital periods, requiring hours or even days to complete. To the extent that the standard assumptions of astrodynamics do not hold, actual trajectories will vary from those calculated. For example, simple atmospheric drag is another complicating factor for objects in low Earth orbit. These rules of thumb are decidedly inaccurate when describing two or more bodies of similar mass, such as a binary star system (see n-body problem). Celestial mechanics uses more general rules applicable to a wider variety of situations. Kepler's laws of planetary motion, which can be mathematically derived from Newton's laws, hold strictly only in describing the motion of two gravitating bodies in the absence of non-gravitational forces; they also describe parabolic and hyperbolic trajectories. In the close proximity of large objects like stars the differences between classical mechanics and general relativity also become important. 
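The counter-intuitive rendezvous behaviour described above can be checked with a short calculation based on the standard vis-viva relation v² = μ(2/r − 1/a). In the Python sketch below, the 400 km circular orbit and the 10 m/s prograde burn are assumed, illustrative values, not figures taken from this article.

import math

mu_earth = 3.986004418e14      # Earth's standard gravitational parameter, m^3/s^2
r = 6_378_137.0 + 400_000.0    # assumed 400 km circular orbit radius, m

v_circ = math.sqrt(mu_earth / r)                    # circular orbital speed
T_circ = 2 * math.pi * math.sqrt(r**3 / mu_earth)   # original orbital period

# Assumed small prograde burn of 10 m/s applied at this point in the orbit
v_new = v_circ + 10.0
# vis-viva: v^2 = mu * (2/r - 1/a)  =>  solve for the new semi-major axis a
a_new = 1.0 / (2.0 / r - v_new**2 / mu_earth)
T_new = 2 * math.pi * math.sqrt(a_new**3 / mu_earth)

print(f"circular speed : {v_circ:.1f} m/s, period {T_circ/60:.2f} min")
print(f"after +10 m/s  : semi-major axis {a_new/1000:.1f} km, period {T_new/60:.2f} min")
# The period increases, so the spacecraft that "sped up" completes each orbit
# later than before and drifts behind a co-orbiting target instead of catching it.

Because the burn lengthens the orbital period, the trailing craft that fires its engines prograde arrives at each point in its orbit later than before, which is exactly the behaviour described in the rendezvous discussion above.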
Laws of astrodynamics The fundamental laws of astrodynamics are Newton's law of universal gravitation and Newton's laws of motion, while the fundamental mathematical tool is differential calculus. In a Newtonian framework, the laws governing orbits and trajectories are in principle time-symmetric. Standard assumptions in astrodynamics include non-interference from outside bodies, negligible mass for one of the bodies, and negligible other forces (such as from the solar wind, atmospheric drag, etc.). More accurate calculations can be made without these simplifying assumptions, but they are more complicated. The increased accuracy often does not make enough of a difference in the calculation to be worthwhile. Kepler's laws of planetary motion may be derived from Newton's laws, when it is assumed that the orbiting body is subject only to the gravitational force of the central attractor. When an engine thrust or propulsive force is present, Newton's laws still apply, but Kepler's laws are invalidated. When the thrust stops, the resulting orbit will be different but will once again be described by Kepler's laws which have been set out above. The three laws are: The orbit of every planet is an ellipse with the Sun at one of the foci. A line joining a planet and the Sun sweeps out equal areas during equal intervals of time. The squares of the orbital periods of planets are directly proportional to the cubes of the semi-major axis of the orbits. Escape velocity The formula for an escape velocity is derived as follows. The specific energy (energy per unit mass) of any space vehicle is composed of two components, the specific potential energy and the specific kinetic energy. The specific potential energy associated with a planet of mass M is given by where G is the gravitational constant and r is the distance between the two bodies; while the specific kinetic energy of an object is given by where v is its velocity; and so the total specific orbital energy is Since energy is conserved, cannot depend on the distance, , from the center of the central body to the space vehicle in question, i.e. v must vary with r to keep the specific orbital energy constant. Therefore, the object can reach an infinite distance only if this quantity is nonnegative, which implies The escape velocity from the Earth's surface is about 11 km/s, but that is insufficient to send the body an infinite distance because of the gravitational pull of the Sun. To escape the Solar System from a location at a distance from the Sun equal to the distance Sun–Earth, but not close to the Earth, requires around 42 km/s velocity, but there will be "partial credit" for the Earth's orbital velocity for spacecraft launched from Earth, if their further acceleration (due to the propulsion system) carries them in the same direction as Earth travels in its orbit. Formulae for free orbits Orbits are conic sections, so the formula for the distance of a body for a given angle corresponds to the formula for that curve in polar coordinates, which is: is called the gravitational parameter. and are the masses of objects 1 and 2, and is the specific angular momentum of object 2 with respect to object 1. The parameter is known as the true anomaly, is the semi-latus rectum, while is the orbital eccentricity, all obtainable from the various forms of the six independent orbital elements. Circular orbits All bounded orbits where the gravity of a central body dominates are elliptical in nature.
A special case of this is the circular orbit, which is an ellipse of zero eccentricity. The formula for the velocity of a body in a circular orbit at distance r from the center of gravity of mass M can be derived as follows: Centrifugal acceleration matches the acceleration due to gravity. So, Therefore, where is the gravitational constant, equal to 6.6743 × 10⁻¹¹ m³/(kg·s²). To properly use this formula, the units must be consistent; for example, must be in kilograms, and must be in meters. The answer will be in meters per second. The quantity is often termed the standard gravitational parameter, which has a different value for every planet or moon in the Solar System. Once the circular orbital velocity is known, the escape velocity is easily found by multiplying by √2: To escape from gravity, the kinetic energy must at least match the negative potential energy. Therefore, Elliptical orbits If , then the denominator of the equation of free orbits varies with the true anomaly , but remains positive, never becoming zero. Therefore, the relative position vector remains bounded, having its smallest magnitude at periapsis , which is given by: The maximum value is reached when . This point is called the apoapsis, and its radial coordinate, denoted , is Let be the distance measured along the apse line from periapsis to apoapsis , as illustrated in the equation below: Substituting the equations above, we get: a is the semimajor axis of the ellipse. Solving for , and substituting the result in the conic section curve formula above, we get: Orbital period Under standard assumptions the orbital period () of a body traveling along an elliptic orbit can be computed as: where: is the standard gravitational parameter, is the length of the semi-major axis. Conclusions: The orbital period is equal to that for a circular orbit with the orbit radius equal to the semi-major axis (), For a given semi-major axis the orbital period does not depend on the eccentricity.
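The circular-velocity, escape-velocity and orbital-period relations above can be verified numerically. The short sketch below uses Earth's standard gravitational parameter; the geostationary semi-major axis of 42,164 km is an assumed textbook value used only as an illustration, and the √2 relation between circular and escape speed appears explicitly.

import math

mu_earth = 3.986004418e14   # standard gravitational parameter of Earth, m^3/s^2
R_earth  = 6_371_000.0      # mean Earth radius, m

# Escape velocity from the surface: v_esc = sqrt(2 * mu / r) = sqrt(2) * v_circ
v_circ = math.sqrt(mu_earth / R_earth)
v_esc  = math.sqrt(2.0) * v_circ
print(f"surface circular speed ~ {v_circ/1000:.2f} km/s")
print(f"surface escape speed   ~ {v_esc/1000:.2f} km/s")   # ~11.2 km/s, as quoted above

# Orbital period of an elliptical (or circular) orbit: T = 2*pi*sqrt(a^3 / mu)
a_geo = 42_164_000.0        # assumed semi-major axis of a geostationary orbit, m
T_geo = 2 * math.pi * math.sqrt(a_geo**3 / mu_earth)
print(f"period at a = 42,164 km ~ {T_geo/3600:.2f} h")      # ~23.9 h (one sidereal day)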
Physical sciences
Orbital mechanics
null
182205
https://en.wikipedia.org/wiki/Forging
Forging
Forging is a manufacturing process involving the shaping of metal using localized compressive forces. The blows are delivered with a hammer (often a power hammer) or a die. Forging is often classified according to the temperature at which it is performed: cold forging (a type of cold working), warm forging, or hot forging (a type of hot working). For the latter two, the metal is heated, usually in a forge. Forged parts can range in weight from less than a kilogram to hundreds of metric tons. Forging has been done by smiths for millennia; the traditional products were kitchenware, hardware, hand tools, edged weapons, cymbals, and jewellery. Since the Industrial Revolution, forged parts are widely used in mechanisms and machines wherever a component requires high strength; such forgings usually require further processing (such as machining) to achieve a finished part. Today, forging is a major worldwide industry. History Forging is one of the oldest known metalworking processes. Traditionally, forging was performed by a smith using hammer and anvil, though introducing water power to the production and working of iron in the 12th century allowed the use of large trip hammers or power hammers that increased the amount and size of iron that could be produced and forged. The smithy or forge has evolved over centuries to become a facility with engineered processes, production equipment, tooling, raw materials and products to meet the demands of modern industry. In modern times, industrial forging is done either with presses or with hammers powered by compressed air, electricity, hydraulics or steam. These hammers may have reciprocating weights in the thousands of pounds. Smaller power hammers, or less reciprocating weight, and hydraulic presses are common in art smithies as well. Some steam hammers remain in use, but they became obsolete with the availability of the other, more convenient, power sources. Processes There are many different kinds of forging processes available; however, they can be grouped into three main classes: Drawn out: length increases, cross-section decreases Upset: length decreases, cross-section increases Squeezed in closed compression dies: produces multidirectional flow Common forging processes include: roll forging, swaging, cogging, open-die forging, impression-die forging (closed die forging), press forging, cold forging, automatic hot forging and upsetting. Temperature All of the following forging processes can be performed at various temperatures; however, they are generally classified by whether the metal temperature is above or below the recrystallization temperature. If the temperature is above the material's recrystallization temperature it is deemed hot forging; if the temperature is below the material's recrystallization temperature but above 30% of the recrystallization temperature (on an absolute scale) it is deemed warm forging; if below 30% of the recrystallization temperature (usually room temperature) then it is deemed cold forging. The main advantage of hot forging is that it can be done more quickly and precisely, and as the metal is deformed work hardening effects are negated by the recrystallization process. Cold forging typically results in work hardening of the piece. Drop forging Drop forging is a forging process where a hammer is raised and then "dropped" into the workpiece to deform it according to the shape of the die. There are two types of drop forging: open-die drop forging and impression-die (or closed-die) drop forging. 
As the names imply, the difference between the two is in the shape of the die: the open die does not fully enclose the workpiece, while the impression die does. Open-die drop forging Open-die forging is also known as smith forging. In open-die forging, a hammer strikes and deforms the workpiece, which is placed on a stationary anvil. Open-die forging gets its name from the fact that the dies (the surfaces that are in contact with the workpiece) do not enclose the workpiece, allowing it to flow except where contacted by the dies. The operator therefore needs to orient and position the workpiece to get the desired shape. The dies are usually flat in shape, but some have a specially shaped surface for specialized operations. For example, a die may have a round, concave, or convex surface, or be a tool to form holes or a cut-off tool. Open-die forgings can be worked into shapes which include discs, hubs, blocks, shafts (including step shafts or with flanges), sleeves, cylinders, flats, hexes, rounds, plate, and some custom shapes. Open-die forging lends itself to short runs and is appropriate for art smithing and custom work. In some cases, open-die forging may be employed to rough-shape ingots to prepare them for subsequent operations. Open-die forging may also orient the grain to increase strength in the required direction. Advantages of open-die forging Reduced chance of voids Better fatigue resistance Improved microstructure Continuous grain flow Finer grain size Greater strength Better response to thermal treatment Improvement of internal quality Greater reliability of mechanical properties, ductility and impact resistance "Cogging" is the successive deformation of a bar along its length using an open-die drop forge. It is commonly used to work a piece of raw material to the proper thickness. Once the proper thickness is achieved, the proper width is attained via "edging". "Edging" is the process of concentrating material using a concave-shaped open die. The process is called "edging" because it is usually carried out on the ends of the workpiece. "Fullering" is a similar process that thins out sections of the forging using a convex-shaped die. These processes prepare the workpieces for further forging processes. Impression-die forging Impression-die forging is also called "closed-die forging". In impression-die forging, the metal is placed in a die resembling a mold, which is attached to an anvil. Usually, the hammer die is shaped as well. The hammer is then dropped on the workpiece, causing the metal to flow and fill the die cavities. The hammer is generally in contact with the workpiece on the scale of milliseconds. Depending on the size and complexity of the part, the hammer may be dropped multiple times in quick succession. Excess metal is squeezed out of the die cavities, forming what is referred to as "flash". The flash cools more rapidly than the rest of the material; this cool metal is stronger than the metal in the die, so it helps prevent more flash from forming. This also forces the metal to completely fill the die cavity. After forging, the flash is removed. In commercial impression-die forging, the workpiece is usually moved through a series of cavities in a die to get from an ingot to the final form. The first impression is used to distribute the metal into the rough shape in accordance with the needs of later cavities; this impression is called an "edging", "fullering", or "bending" impression. 
The following cavities are called "blocking" cavities, in which the piece is worked into a shape that more closely resembles the final product. These stages usually impart the workpiece with generous bends and large fillets. The final shape is forged in a "final" or "finisher" impression cavity. If there is only a short run of parts to be done, then it may be more economical to omit the final impression cavity from the die and machine the final features instead. Impression-die forging has been improved in recent years through increased automation, which includes induction heating, mechanical feeding, positioning and manipulation, and the direct heat treatment of parts after forging. One variation of impression-die forging is called "flashless forging", or "true closed-die forging". In this type of forging, the die cavities are completely closed, which keeps the workpiece from forming flash. The major advantage of this process is that less metal is lost to flash. Flash can account for 20 to 45% of the starting material. The disadvantages of this process include additional cost due to a more complex die design and the need for better lubrication and workpiece placement. There are other variations of part formation that integrate impression-die forging. One method incorporates casting a forging preform from liquid metal. The casting is removed after it has solidified, but while still hot. It is then finished in a single cavity die. The flash is trimmed, then the part is quench hardened. Another variation follows the same process as outlined above, except the preform is produced by the spray deposition of metal droplets into shaped collectors (similar to the Osprey process). Closed-die forging has a high initial cost due to the creation of dies and the design work required to make working die cavities. However, it has low recurring costs for each part, so forgings become more economical with greater production volume. This is one of the major reasons closed-die forgings are often used in the automotive and tool industries. Another reason forgings are common in these industrial sectors is that forgings generally have about a 20 percent higher strength-to-weight ratio compared to cast or machined parts of the same material. Design of impression-die forgings and tooling Forging dies are usually made of high-alloy or tool steel. Dies must be impact- and wear-resistant, maintain strength at high temperatures, and have the ability to withstand cycles of rapid heating and cooling. In order to produce a better, more economical die the following standards are maintained: The dies part along a single, flat plane whenever possible. If not, the parting plane follows the contour of the part. The parting surface is a plane through the center of the forging and not near an upper or lower edge. Adequate draft is provided; usually at least 3° for aluminium and 5° to 7° for steel. Generous fillets and radii are used. Ribs are low and wide. The various sections are balanced to avoid extreme differences in metal flow. Full advantage is taken of fiber flow lines. Dimensional tolerances are not closer than necessary. Barrelling occurs when, due to friction between the workpiece and the die or punch, the workpiece bulges at its centre in such a way as to resemble a barrel. This causes the central part of the workpiece to come into contact with the sides of the die sooner than if there were no friction present, creating a much greater increase in the pressure required for the punch to finish the forging. 
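Since flash can consume 20 to 45% of the starting material, billet sizing has to allow for it. A minimal sketch, assuming the flash fraction is expressed relative to the starting mass as stated above and ignoring scale and machining losses (the function and parameter names are illustrative):

    def starting_billet_mass(finished_part_mass_kg, flash_fraction):
        """Estimate the billet mass needed when a given fraction of the
        starting material is expected to be lost as flash (0.20-0.45 is
        the range quoted above for conventional impression-die forging)."""
        if not 0.0 <= flash_fraction < 1.0:
            raise ValueError("flash_fraction must be in [0, 1)")
        return finished_part_mass_kg / (1.0 - flash_fraction)

    # A 2 kg finished part with 30% of the starting material lost as flash
    # requires a billet of roughly 2.86 kg:
    print(round(starting_billet_mass(2.0, 0.30), 2))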
The dimensional tolerances of a steel part produced using the impression-die forging method are outlined in the table below. The dimensions across the parting plane are affected by the closure of the dies, and are therefore dependent on die wear and the thickness of the final flash. Dimensions that are completely contained within a single die segment or half can be maintained at a significantly greater level of accuracy. A lubricant is used when forging to reduce friction and wear. It is also used as a thermal barrier to restrict heat transfer from the workpiece to the die. Finally, the lubricant acts as a parting compound to prevent the part from sticking in the dies. Press forging Press forging works by slowly applying a continuous pressure or force, which differs from the near-instantaneous impact of drop-hammer forging. The amount of time the dies are in contact with the workpiece is measured in seconds (as compared to the milliseconds of drop-hammer forges). The press forging operation can be done either cold or hot. The main advantage of press forging, as compared to drop-hammer forging, is its ability to deform the complete workpiece. Drop-hammer forging usually only deforms the surfaces of the workpiece in contact with the hammer and anvil; the interior of the workpiece will stay relatively undeformed. Another advantage of the process is knowledge of the new part's strain rate: by controlling the compression rate of the press forging operation, the internal strain can be controlled. There are a few disadvantages to this process, most stemming from the workpiece being in contact with the dies for such an extended period of time. The operation is a time-consuming process due to the number and length of steps. The workpiece will cool faster because the dies are in contact with the workpiece; the dies facilitate drastically more heat transfer than the surrounding atmosphere. As the workpiece cools it becomes stronger and less ductile, which may induce cracking if deformation continues. Therefore, heated dies are usually used to reduce heat loss, promote surface flow, and enable the production of finer details and closer tolerances. The workpiece may also need to be reheated. At high production volumes, press forging is more economical than hammer forging. The operation also achieves closer tolerances. In hammer forging much of the work is absorbed by the machinery; in press forging, a greater percentage of the work goes into the workpiece. Another advantage is that the operation can be used to create parts of any size, because there is no limit to the size of the press forging machine. New press forging techniques have been able to create a higher degree of mechanical and orientation integrity. By confining oxidation to the outer layers of the part, these techniques reduce microcracking in the finished part. Press forging can be used to perform all types of forging, including open-die and impression-die forging. Impression-die press forging usually requires less draft than drop forging and has better dimensional accuracy. Also, press forgings can often be done in one closing of the dies, allowing for easy automation. Upset forging Upset forging increases the diameter of the workpiece by compressing its length. Based on the number of pieces produced, this is the most widely used forging process. A few examples of common parts produced using the upset forging process are engine valves, couplings, bolts, screws, and other fasteners. 
Upset forging is usually done in special high-speed machines called crank presses. The machines are usually set up to work in the horizontal plane, to facilitate the quick exchange of workpieces from one station to the next, but upsetting can also be done in a vertical crank press or a hydraulic press. The initial workpiece is usually wire or rod, but some machines can accept bars of larger diameters and have a capacity of over 1,000 tons. The standard upsetting machine employs split dies that contain multiple cavities. The dies open enough to allow the workpiece to move from one cavity to the next; the dies then close and the heading tool, or ram, then moves longitudinally against the bar, upsetting it into the cavity. If all of the cavities are utilized on every cycle, then a finished part will be produced with every cycle, which makes this process advantageous for mass production. These rules must be followed when designing parts to be upset forged: The length of unsupported metal that can be upset in one blow without injurious buckling should be limited to three times the diameter of the bar. Lengths of stock greater than three times the diameter may be upset successfully, provided that the diameter of the upset is not more than 1.5 times the diameter of the stock. In an upset requiring stock length greater than three times the diameter of the stock, and where the diameter of the cavity is not more than 1.5 times the diameter of the stock, the length of unsupported metal beyond the face of the die must not exceed the diameter of the bar. Automatic hot forging The automatic hot forging process involves feeding mill-length steel bars into one end of the machine at room temperature; hot forged products emerge from the other end. This all occurs rapidly: small parts can be made at a rate of 180 parts per minute (ppm) and larger parts at a rate of 90 ppm. The parts can be solid or hollow, and round or symmetrical, within limits on length and diameter. The main advantages of this process are its high output rate and its ability to accept low-cost materials. Little labor is required to operate the machinery. No flash is produced, so material savings are between 20 and 30% over conventional forging. The final product emerges at a consistent temperature, so air cooling will result in a part that is still easily machinable (the advantage being the lack of annealing required after forging). Tolerances are tight, surfaces are clean, and draft angles are 0.5 to 1°. Tool life is nearly double that of conventional forging because contact times are on the order of 0.06 seconds. The downsides are that this process is only feasible for smaller, symmetrical parts, and its cost: the initial investment can be over $10 million, so large quantities are required to justify this process. The process starts by heating the bar to forging temperature in less than 60 seconds using high-power induction coils. It is then descaled with rollers, sheared into blanks, and transferred through several successive forming stages, during which it is upset, preformed, final forged, and pierced (if necessary). This process can also be coupled with high-speed cold-forming operations. Generally, the cold forming operation will do the finishing stage so that the advantages of cold-working can be obtained, while maintaining the high speed of automatic hot forging. Examples of parts made by this process are: wheel hub unit bearings, transmission gears, tapered roller bearing races, stainless steel coupling flanges, and neck rings for liquid propane (LP) gas cylinders. 
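Returning to upsetting, the three design rules quoted above translate naturally into a quick feasibility check. A minimal sketch (parameter names are illustrative, not industry terminology, and all dimensions share the same unit):

    def upset_design_ok(stock_dia, upset_dia, unsupported_len,
                        len_beyond_die_face=None):
        """Rough check of the three upsetting design rules quoted above."""
        # Rule 1: up to three diameters of unsupported stock may be upset
        # in a single blow without risking injurious buckling.
        if unsupported_len <= 3 * stock_dia:
            return True
        # Rule 2: longer lengths are acceptable only if the upset diameter
        # stays within 1.5 times the stock diameter.
        if upset_dia > 1.5 * stock_dia:
            return False
        # Rule 3: with a cavity no wider than 1.5 diameters, the stock
        # projecting beyond the die face must not exceed one diameter.
        if len_beyond_die_face is None:
            return True
        return len_beyond_die_face <= stock_dia

    # A 20 mm bar upset over 50 mm (2.5 diameters) satisfies rule 1:
    print(upset_design_ok(stock_dia=20, upset_dia=28, unsupported_len=50))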
Manual transmission gears are another example of automatic hot forging used in conjunction with cold working. Roll forging Roll forging is a process where round or flat bar stock is reduced in thickness and increased in length. Roll forging is performed using two cylindrical or semi-cylindrical rolls, each containing one or more shaped grooves. A heated bar is inserted into the rolls and, when it hits a stop, the rolls rotate and the bar is progressively shaped as it is rolled through the machine. The piece is then transferred to the next set of grooves or turned around and reinserted into the same grooves. This continues until the desired shape and size is achieved. The advantages of this process are that there is no flash and that it imparts a favorable grain structure into the workpiece. Examples of products produced using this method include axles, tapered levers and leaf springs. Net-shape and near-net-shape forging This process is also known as precision forging. It was developed to minimize the cost and waste associated with post-forging operations. Therefore, the final product from a precision forging needs little or no final machining. Cost savings are gained from the use of less material, and thus less scrap, the overall decrease in energy used, and the reduction or elimination of machining. Precision forging also requires less draft, 1° to 0°. The downside of this process is its cost; therefore, it is only implemented if significant cost reduction can be achieved. Cold forging Near-net-shape forging is most common when parts are forged without heating the slug, bar or billet. Aluminum is a common material that can be cold forged depending on final shape. Lubrication of the parts being formed is critical to increase the life of the mating dies. Induction forging Unlike the above processes, induction forging is defined by the type of heating used. Many of the above processes can be used in conjunction with this heating method. Multidirectional forging Multidirectional forging is the forming of a workpiece in a single step in several directions. The multidirectional forming takes place through constructive measures of the tool: the vertical movement of the press ram is redirected using wedges, which distribute and redirect the force of the forging press in horizontal directions. Isothermal forging Isothermal forging is a process by which the materials and the die are heated to the same temperature (iso- meaning "equal"). Adiabatic heating is used to assist in the deformation of the material, meaning the strain rates are highly controlled. This technique is commonly used for forging aluminium, whose forging temperatures are much lower than those required for steels and superalloys. 
Benefits: Near net shapes, which lead to lower machining requirements and therefore lower scrap rates Reproducibility of the part Due to the lower heat loss, smaller machines can be used to make the forging Disadvantages: Higher die material costs to handle the temperatures and pressures Uniform heating systems are required Protective atmospheres or vacuum are needed to reduce oxidation of the dies and material Low production rates Materials and applications Forging of steel Depending on the forming temperature, steel forging can be divided into: Hot forging of steel Forging temperatures above the recrystallization temperature, between 950–1250 °C Good formability Low forming forces Constant tensile strength of the workpieces Warm forging of steel Forging temperatures between 750–950 °C Less or no scaling at the workpiece surface Narrower tolerances achievable than in hot forging Limited formability and higher forming forces than for hot forging Lower forming forces than in cold forming Cold forging of steel Forging temperatures at room conditions, with self-heating up to 150 °C due to the forming energy Narrowest tolerances achievable No scaling at the workpiece surface Increase of strength and decrease of ductility due to strain hardening Low formability and high forming forces are necessary For industrial processes, steel alloys are primarily forged in the hot condition. Brass, bronze, copper, precious metals and their alloys are manufactured by cold forging processes; each metal requires a different forging temperature. Forging of aluminium Aluminium forging is performed in a temperature range of 350–550 °C. Forging temperatures above 550 °C are too close to the solidus temperature of the alloys and, in conjunction with varying effective strains, lead to unfavorable workpiece surfaces and potentially to partial melting as well as fold formation. Forging temperatures below 350 °C reduce formability by increasing the yield stress, which can lead to unfilled dies, cracking at the workpiece surface and increased die forces. Due to the narrow temperature range and high thermal conductivity, aluminium forging can only be realized within a particular process window. To provide good forming conditions, a homogeneous temperature distribution in the entire workpiece is necessary. Therefore, the control of the tool temperature has a major influence on the process. For example, by optimizing the preform geometries, the local effective strains can be influenced to reduce local overheating for a more homogeneous temperature distribution. Application of aluminium forged parts High-strength aluminium alloys have the tensile strength of medium-strength steel alloys while providing significant weight advantages. Aluminium forged parts are therefore mainly used in aerospace, the automotive industry and many other fields of engineering, especially in those fields where the highest safety standards against failure from abuse, shock or vibratory stresses are required. Examples of such parts are pistons, chassis parts, steering components and brake parts. Commonly used alloys are AlSi1MgMn (EN AW-6082) and AlZnMgCu1,5 (EN AW-7075). About 80% of all aluminium forged parts are made of AlSi1MgMn. The high-strength alloy AlZnMgCu1,5 is mainly used for aerospace applications. Forging of magnesium Magnesium forging occurs in a temperature range of 290–450 °C. Magnesium alloys are more difficult to forge due to their low plasticity, low sensitivity to strain rates and narrow forming temperature range. 
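The temperature windows quoted in this section can be collected into a small lookup for rough process planning. A minimal sketch that restates only the ranges given above (the dictionary layout is illustrative; the cold-forging upper bound of 150 °C reflects the self-heating figure in the text):

    # Forging temperature windows in degrees Celsius, as quoted above.
    FORGING_WINDOWS_C = {
        ("steel", "hot"): (950, 1250),
        ("steel", "warm"): (750, 950),
        ("steel", "cold"): (20, 150),      # room temperature, self-heating up to 150 C
        ("aluminium", "hot"): (350, 550),
        ("magnesium", "hot"): (290, 450),
    }

    def in_window(material, regime, temp_c):
        """Return True if temp_c lies inside the quoted forging window."""
        low, high = FORGING_WINDOWS_C[(material, regime)]
        return low <= temp_c <= high

    print(in_window("aluminium", "hot", 500))  # True
    print(in_window("magnesium", "hot", 500))  # False: above the 450 C limit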
Semi-open-die hot forging with a three-slide forging press (TSFP) is a newly developed forging method for the Mg-Al alloy AZ31, which is commonly used in forming aircraft brackets. This forging method has been shown to improve tensile properties but lacks uniform grain size. Even though the application of magnesium alloys increases by 15–20% each year in the aerospace and automotive industries, forging magnesium alloys with specialized dies is expensive and an unfeasible method for producing parts for a mass market. Instead, most magnesium alloy parts for industry are produced by casting methods. Equipment The most common type of forging equipment is the hammer and anvil. Principles behind the hammer and anvil are still used today in drop-hammer equipment. The principle behind the machine is simple: raise the hammer and drop it or propel it onto the workpiece, which rests on the anvil. The main variations between drop-hammers are in the way the hammer is powered, the most common being air and steam hammers. Drop-hammers usually operate in a vertical position. The main reason for this is that excess energy (energy not used to deform the workpiece) which is not released as heat or sound must be transmitted to the foundation. Moreover, a large machine base is needed to absorb the impacts. To overcome some shortcomings of the drop-hammer, the counterblow machine or impactor is used. In a counterblow machine both the hammer and anvil move, and the workpiece is held between them. Here excess energy becomes recoil. This allows the machine to work horizontally and have a smaller base. Other advantages include less noise, heat and vibration. It also produces a distinctly different flow pattern. Both of these machines can be used for open-die or closed-die forging. Forging presses A forging press, often just called a press, is used for press forging. There are two main types: mechanical and hydraulic presses. Mechanical presses function by using cams, cranks and/or toggles to produce a preset (a predetermined force at a certain location in the stroke) and reproducible stroke. Due to the nature of this type of system, different forces are available at different stroke positions. Mechanical presses are faster than their hydraulic counterparts (up to 50 strokes per minute). Their capacities range from 3 to 160 MN (300 to 18,000 short tons-force). Hydraulic presses, such as the four-die device, use fluid pressure and a piston to generate force. The advantages of a hydraulic press over a mechanical press are its flexibility and greater capacity. The disadvantages include a slower, larger, and costlier machine to operate. The roll forging, upsetting, and automatic hot forging processes all use specialized machinery.
Technology
Metallurgy
null
182208
https://en.wikipedia.org/wiki/Heat%20treating
Heat treating
Heat treating (or heat treatment) is a group of industrial, thermal and metalworking processes used to alter the physical, and sometimes chemical, properties of a material. The most common application is metallurgical. Heat treatments are also used in the manufacture of many other materials, such as glass. Heat treatment involves the use of heating or chilling, normally to extreme temperatures, to achieve a desired result such as hardening or softening of a material. Heat treatment techniques include annealing, case hardening, precipitation strengthening, tempering, carburizing, normalizing and quenching. Although the term heat treatment applies only to processes where the heating and cooling are done for the specific purpose of altering properties intentionally, heating and cooling often occur incidentally during other manufacturing processes such as hot forming or welding. Physical processes Metallic materials consist of a microstructure of small crystals called "grains" or crystallites. The nature of the grains (i.e. grain size and composition) is one of the most effective factors that can determine the overall mechanical behavior of the metal. Heat treatment provides an efficient way to manipulate the properties of the metal by controlling the rate of diffusion and the rate of cooling within the microstructure. Heat treating is often used to alter the mechanical properties of a metallic alloy, manipulating properties such as the hardness, strength, toughness, ductility, and elasticity. There are two mechanisms that may change an alloy's properties during heat treatment: the formation of martensite causes the crystals to deform intrinsically, and the diffusion mechanism causes changes in the homogeneity of the alloy. The crystal structure consists of atoms that are grouped in a very specific arrangement, called a lattice. In most elements, this order will rearrange itself, depending on conditions like temperature and pressure. This rearrangement, called allotropy or polymorphism, may occur several times, at many different temperatures for a particular metal. In alloys, this rearrangement may cause an element that will not normally dissolve into the base metal to suddenly become soluble, while a reversal of the allotropy will make the elements either partially or completely insoluble. When in the soluble state, the process of diffusion causes the atoms of the dissolved element to spread out, attempting to form a homogeneous distribution within the crystals of the base metal. If the alloy is cooled to an insoluble state, the atoms of the dissolved constituents (solutes) may migrate out of the solution. This type of diffusion, called precipitation, leads to nucleation, where the migrating atoms group together at the grain boundaries. This forms a microstructure generally consisting of two or more distinct phases. For instance, steel that has been heated above the austenizing temperature (red to orange-hot, depending on carbon content) and then cooled slowly forms a laminated structure composed of alternating layers of ferrite and cementite: soft pearlite. After heating the steel to the austenite phase and then quenching it in water, the microstructure will be in the martensitic phase, because the steel changes from the austenite phase to the martensite phase on quenching. Some pearlite or ferrite may be present if the quench did not rapidly cool off all the steel. 
Unlike iron-based alloys, most heat-treatable alloys do not experience a ferrite transformation. In these alloys, the nucleation at the grain-boundaries often reinforces the structure of the crystal matrix. These metals harden by precipitation. Typically a slow process, depending on temperature, this is often referred to as "age hardening". Many metals and non-metals exhibit a martensite transformation when cooled quickly (with external media like oil, polymer, water, etc.). When a metal is cooled very quickly, the insoluble atoms may not be able to migrate out of the solution in time. This is called a "diffusionless transformation." When the crystal matrix changes to its low-temperature arrangement, the atoms of the solute become trapped within the lattice. The trapped atoms prevent the crystal matrix from completely changing into its low-temperature allotrope, creating shearing stresses within the lattice. When some alloys are cooled quickly, such as steel, the martensite transformation hardens the metal, while in others, like aluminum, the alloy becomes softer. Effects of composition The specific composition of an alloy system will usually have a great effect on the results of heat treating. If the percentage of each constituent is just right, the alloy will form a single, continuous microstructure upon cooling. Such a mixture is said to be eutectoid. However, If the percentage of the solutes varies from the eutectoid mixture, two or more different microstructures will usually form simultaneously. A hypo eutectoid solution contains less of the solute than the eutectoid mix, while a hypereutectoid solution contains more. Eutectoid alloys A eutectoid (eutectic-like) alloy is similar in behavior to a eutectic alloy. A eutectic alloy is characterized by having a single melting point. This melting point is lower than that of any of the constituents, and no change in the mixture will lower the melting point any further. When a molten eutectic alloy is cooled, all of the constituents will crystallize into their respective phases at the same temperature. A eutectoid alloy is similar, but the phase change occurs, not from a liquid, but from a solid solution. Upon cooling a eutectoid alloy from the solution temperature, the constituents will separate into different crystal phases, forming a single microstructure. A eutectoid steel, for example, contains 0.77% carbon. Upon cooling slowly, the solution of iron and carbon (a single phase called austenite) will separate into platelets of the phases ferrite and cementite. This forms a layered microstructure called pearlite. Since pearlite is harder than iron, the degree of softness achievable is typically limited to that produced by the pearlite. Similarly, the hardenability is limited by the continuous martensitic microstructure formed when cooled very fast. Hypoeutectoid alloys A hypoeutectic alloy has two separate melting points. Both are above the eutectic melting point for the system but are below the melting points of any constituent forming the system. Between these two melting points, the alloy will exist as part solid and part liquid. The constituent with the higher melting point will solidify first. When completely solidified, a hypoeutectic alloy will often be in a solid solution. Similarly, a hypoeutectoid alloy has two critical temperatures, called "arrests". Between these two temperatures, the alloy will exist partly as the solution and partly as a separate crystallizing phase, called the "pro eutectoid phase". 
These two temperatures are called the upper (A3) and lower (A1) transformation temperatures. As the solution cools from the upper transformation temperature toward an insoluble state, the excess base metal will often be forced to "crystallize-out", becoming the pro eutectoid. This will occur until the remaining concentration of solutes reaches the eutectoid level, which will then crystallize as a separate microstructure. For example, a hypoeutectoid steel contains less than 0.77% carbon. Upon cooling a hypoeutectoid steel from the austenite transformation temperature, small islands of proeutectoid-ferrite will form. These will continue to grow and the carbon will recede until the eutectoid concentration in the rest of the steel is reached. This eutectoid mixture will then crystallize as a microstructure of pearlite. Since ferrite is softer than pearlite, the two microstructures combine to increase the ductility of the alloy. Consequently, the hardenability of the alloy is lowered. Hypereutectoid alloys A hypereutectic alloy also has different melting points. However, between these points, it is the constituent with the higher melting point that will be solid. Similarly, a hypereutectoid alloy has two critical temperatures. When cooling a hypereutectoid alloy from the upper transformation temperature, it will usually be the excess solutes that crystallize-out first, forming the pro-eutectoid. This continues until the concentration in the remaining alloy becomes eutectoid, which then crystallizes into a separate microstructure. A hypereutectoid steel contains more than 0.77% carbon. When slowly cooling hypereutectoid steel, the cementite will begin to crystallize first. When the remaining steel becomes eutectoid in composition, it will crystallize into pearlite. Since cementite is much harder than pearlite, the alloy has greater hardenability at a cost in ductility. Effects of time and temperature Proper heat treating requires precise control over temperature, time held at a certain temperature and cooling rate. With the exception of stress-relieving, tempering, and aging, most heat treatments begin by heating an alloy beyond a certain transformation, or arrest (A), temperature. This temperature is referred to as an "arrest" because at the A temperature the metal experiences a period of hysteresis. At this point, all of the heat energy is used to cause the crystal change, so the temperature stops rising for a short time (arrests) and then continues climbing once the change is complete. Therefore, the alloy must be heated above the critical temperature for a transformation to occur. The alloy will usually be held at this temperature long enough for the heat to completely penetrate the alloy, thereby bringing it into a complete solid solution. Iron, for example, has four critical-temperatures, depending on carbon content. Pure iron in its alpha (room temperature) state changes to nonmagnetic gamma-iron at its A2 temperature, and weldable delta-iron at its A4 temperature. However, as carbon is added, becoming steel, the A2 temperature splits into the A3 temperature, also called the austenizing temperature (all phases become austenite, a solution of gamma iron and carbon) and its A1 temperature (austenite changes into pearlite upon cooling). Between these upper and lower temperatures the pro eutectoid phase forms upon cooling. 
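For plain carbon steels, the hypo-/hyper-eutectoid distinction above comes down to the 0.77% carbon eutectoid point. A minimal sketch (the function is illustrative and applies only to the slow-cooling behaviour described above):

    EUTECTOID_CARBON_PCT = 0.77  # weight percent carbon, as quoted above

    def classify_steel(carbon_pct):
        """Classify a plain carbon steel relative to the eutectoid point and
        name the pro-eutectoid phase expected on slow cooling."""
        if carbon_pct < EUTECTOID_CARBON_PCT:
            return "hypoeutectoid: pro-eutectoid ferrite forms first"
        if carbon_pct > EUTECTOID_CARBON_PCT:
            return "hypereutectoid: cementite forms first"
        return "eutectoid: fully pearlitic on slow cooling"

    print(classify_steel(0.40))  # hypoeutectoid: pro-eutectoid ferrite forms first
    print(classify_steel(1.00))  # hypereutectoid: cementite forms first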
Because a smaller grain size usually enhances mechanical properties, such as toughness, shear strength and tensile strength, these metals are often heated to a temperature that is just above the upper critical temperature, in order to prevent the grains of solution from growing too large. For instance, when steel is heated above the upper critical-temperature, small grains of austenite form. These grow larger as the temperature is increased. When cooled very quickly, during a martensite transformation, the austenite grain-size directly affects the martensitic grain-size. Larger grains have large grain-boundaries, which serve as weak spots in the structure. The grain size is usually controlled to reduce the probability of breakage. The diffusion transformation is very time-dependent. Cooling a metal will usually suppress the precipitation to a much lower temperature. Austenite, for example, usually only exists above the upper critical temperature. However, if the austenite is cooled quickly enough, the transformation may be suppressed for hundreds of degrees below the lower critical temperature. Such austenite is highly unstable and, if given enough time, will precipitate into various microstructures of ferrite and cementite. The cooling rate can be used to control the rate of grain growth or can even be used to produce partially martensitic microstructures. However, the martensite transformation is time-independent. If the alloy is cooled to the martensite transformation (Ms) temperature before other microstructures can fully form, the transformation will usually occur at just under the speed of sound. When austenite is cooled but kept above the martensite start temperature Ms so that a martensite transformation does not occur, the austenite grain size will have an effect on the rate of nucleation, but it is generally temperature and the rate of cooling that controls the grain size and microstructure. When austenite is cooled extremely slowly, it will form large ferrite crystals filled with spherical inclusions of cementite. This microstructure is referred to as "sphereoidite". If cooled a little faster, then coarse pearlite will form. Even faster, and fine pearlite will form. If cooled even faster, bainite will form, with more complete bainite transformation occurring depending on the time held above martensite start Ms. Similarly, these microstructures will also form, if cooled to a specific temperature and then held there for a certain time. Most non-ferrous alloys are also heated in order to form a solution. Most often, these are then cooled very quickly to produce a martensite transformation, putting the solution into a supersaturated state. The alloy, being in a much softer state, may then be cold worked. This causes work hardening that increases the strength and hardness of the alloy. Moreover, the defects caused by plastic deformation tend to speed up precipitation, increasing the hardness beyond what is normal for the alloy. Even if not cold worked, the solutes in these alloys will usually precipitate, although the process may take much longer. Sometimes these metals are then heated to a temperature that is below the lower critical (A1) temperature, preventing recrystallization, in order to speed-up the precipitation. Types of heat treatment Complex heat treating schedules, or "cycles", are often devised by metallurgists to optimize an alloy's mechanical properties. 
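Such cycles are essentially ordered sequences of heating, holding and cooling steps, and are often recorded in exactly that form. A minimal sketch of one possible representation (the step values below are purely illustrative, not a recommended schedule for any real alloy):

    from dataclasses import dataclass

    @dataclass
    class HeatTreatStep:
        description: str
        temperature_c: float  # soak temperature
        hold_minutes: float   # time held at temperature
        cooling: str          # e.g. "furnace", "air", "oil", "water"

    # A hypothetical three-step cycle: normalize, harden, then temper.
    cycle = [
        HeatTreatStep("normalize", 880, 60, "air"),
        HeatTreatStep("austenitize and quench", 850, 45, "oil"),
        HeatTreatStep("temper", 550, 120, "air"),
    ]

    for step in cycle:
        print(f"{step.description}: {step.temperature_c} C for "
              f"{step.hold_minutes} min, cool in {step.cooling}")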
In the aerospace industry, a superalloy may undergo five or more different heat treating operations to develop the desired properties. This can lead to quality problems depending on the accuracy of the furnace's temperature controls and timer. These operations can usually be divided into several basic techniques. Annealing Annealing consists of heating a metal to a specific temperature and then cooling at a rate that will produce a refined microstructure, either fully or partially separating the constituents. The rate of cooling is generally slow. Annealing is most often used to soften a metal for cold working, to improve machinability, or to enhance properties like electrical conductivity. In ferrous alloys, annealing is usually accomplished by heating the metal beyond the upper critical temperature and then cooling very slowly, resulting in the formation of pearlite. In both pure metals and many alloys that cannot be heat treated, annealing is used to remove the hardness caused by cold working. The metal is heated to a temperature where recrystallization can occur, thereby repairing the defects caused by plastic deformation. In these metals, the rate of cooling will usually have little effect. Most non-ferrous alloys that are heat-treatable are also annealed to relieve the hardness of cold working. These may be slowly cooled to allow full precipitation of the constituents and produce a refined microstructure. Ferrous alloys are usually either "full annealed" or "process annealed". Full annealing requires very slow cooling rates, in order to form coarse pearlite. In process annealing, the cooling rate may be faster; up to, and including normalizing. The main goal of process annealing is to produce a uniform microstructure. Non-ferrous alloys are often subjected to a variety of annealing techniques, including "recrystallization annealing", "partial annealing", "full annealing", and "final annealing". Not all annealing techniques involve recrystallization, such as stress relieving. Normalizing Normalizing is a technique used to provide uniformity in grain size and composition (equiaxed crystals) throughout an alloy. The term is often used for ferrous alloys that have been austenitized and then cooled in the open air. Normalizing not only produces pearlite but also martensite and sometimes bainite, which gives harder and stronger steel but with less ductility for the same composition than full annealing. In the normalizing process the steel is heated to about 40 degrees Celsius above its upper critical temperature limit, held at this temperature for some time, and then cooled in air. Stress relieving Stress-relieving is a technique to remove or reduce the internal stresses created in metal. These stresses may be caused in a number of ways, ranging from cold working to non-uniform cooling. Stress-relieving is usually accomplished by heating a metal below the lower critical temperature and then cooling uniformly. Stress relieving is commonly used on items like air tanks, boilers and other pressure vessels, to remove a portion of the stresses created during the welding process. Aging Some metals are classified as precipitation hardening metals. When a precipitation hardening alloy is quenched, its alloying elements will be trapped in solution, resulting in a soft metal. Aging a "solutionized" metal will allow the alloying elements to diffuse through the microstructure and form intermetallic particles. 
These intermetallic particles will nucleate and fall out of the solution and act as a reinforcing phase, thereby increasing the strength of the alloy. Alloys may age "naturally", meaning that the precipitates form at room temperature, or they may age "artificially", when precipitates only form at elevated temperatures. In some applications, naturally aging alloys may be stored in a freezer to prevent hardening until after further operations; assembly of rivets, for example, may be easier with a softer part. Examples of precipitation hardening alloys include 2000 series, 6000 series, and 7000 series aluminium alloys, as well as some superalloys and some stainless steels. Steels that harden by aging are typically referred to as maraging steels, a name derived from "martensite aging". Quenching Quenching is a process of cooling a metal at a rapid rate. This is most often done to produce a martensite transformation. In ferrous alloys, this will often produce a harder metal, while non-ferrous alloys will usually become softer than normal. To harden by quenching, a metal (usually steel or cast iron) must be heated above the upper critical temperature (for steel, above 815–900 degrees Celsius) and then quickly cooled. Depending on the alloy and other considerations (such as concern for maximum hardness vs. cracking and distortion), cooling may be done with forced air or other gases (such as nitrogen). Liquids may be used, due to their better thermal conductivity, such as oil, water, a polymer dissolved in water, or a brine. Upon being rapidly cooled, a portion of austenite (dependent on alloy composition) will transform to martensite, a hard, brittle crystalline structure. The quenched hardness of a metal depends on its chemical composition and quenching method. Cooling speeds, from fastest to slowest, run from brine through polymer (i.e. mixtures of water and glycol polymers), fresh water and oil to forced air. However, quenching certain steels too fast can result in cracking, which is why high-tensile steels such as AISI 4140 should be quenched in oil, tool steels such as ISO 1.2767 or H13 hot work tool steel should be quenched in forced air, and low alloy or medium-tensile steels such as XK1320 or AISI 1040 should be quenched in brine. Some beta titanium based alloys have also shown similar trends of increased strength through rapid cooling. However, most non-ferrous metals, like alloys of copper, aluminum, or nickel, and some high alloy steels such as austenitic stainless steel (304, 316), produce an opposite effect when they are quenched: they soften. Austenitic stainless steels must be quenched to become fully corrosion resistant, as they work-harden significantly. Tempering Untempered martensitic steel, while very hard, is too brittle to be useful for most applications. A method for alleviating this problem is called tempering. Most applications require that quenched parts be tempered. Tempering consists of heating steel below the lower critical temperature (often from 400˚F to 1105˚F or 205˚C to 595˚C, depending on the desired results) to impart some toughness. Higher tempering temperatures (up to about 1,300˚F or 700˚C, depending on the alloy and application) are sometimes used to impart further ductility, although some yield strength is lost. Tempering may also be performed on normalized steels. 
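Returning briefly to quenching, the grade-specific examples above amount to a simple lookup from steel grade to quench medium. A minimal sketch that only restates those examples (it is not a general selection rule; real practice depends on section size and the full specification):

    # Quench media ordered from fastest to slowest cooling, as listed above.
    QUENCH_SPEED_ORDER = ["brine", "polymer", "fresh water", "oil", "forced air"]

    # Grade-specific examples quoted above.
    RECOMMENDED_QUENCH = {
        "AISI 4140": "oil",
        "ISO 1.2767": "forced air",
        "H13": "forced air",
        "XK1320": "brine",
        "AISI 1040": "brine",
    }

    def quench_medium(grade):
        return RECOMMENDED_QUENCH.get(grade, "unknown - consult the alloy datasheet")

    print(quench_medium("AISI 4140"))  # oil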
Other methods of tempering consist of quenching to a specific temperature, which is above the martensite start temperature, and then holding it there until pure bainite can form or internal stresses can be relieved. These include austempering and martempering. Tempering colors Steel that has been freshly ground or polished will form oxide layers when heated. At a very specific temperature, the iron oxide will form a layer with a very specific thickness, causing thin-film interference. This causes colors to appear on the surface of the steel. As the temperature is increased, the iron oxide layer grows in thickness, changing the color. These colors, called tempering colors, have been used for centuries to gauge the temperature of the metal. 350˚F (176˚C), light yellowish 400˚F (204˚C), light-straw 440˚F (226˚C), dark-straw 500˚F (260˚C), brown 540˚F (282˚C), purple 590˚F (310˚C), deep blue 640˚F (337˚C), light blue The tempering colors can be used to judge the final properties of the tempered steel. Very hard tools are often tempered in the light to the dark straw range, whereas springs are often tempered to the blue. However, the final hardness of the tempered steel will vary, depending on the composition of the steel. Higher-carbon tool steel will remain much harder after tempering than spring steel (of slightly less carbon) when tempered at the same temperature. The oxide film will also increase in thickness over time. Therefore, steel that has been held at 400˚F for a very long time may turn brown or purple, even though the temperature never exceeded that needed to produce a light straw color. Other factors affecting the final outcome are oil films on the surface and the type of heat source used. Selective heat treating Many heat treating methods have been developed to alter the properties of only a portion of an object. These tend to consist of either cooling different areas of an alloy at different rates, by quickly heating in a localized area and then quenching, by thermochemical diffusion, or by tempering different areas of an object at different temperatures, such as in differential tempering. Differential hardening Some techniques allow different areas of a single object to receive different heat treatments. This is called differential hardening. It is common in high quality knives and swords. The Chinese jian is one of the earliest known examples of this, and the Japanese katana may be the most widely known. The Nepalese Khukuri is another example. This technique uses an insulating layer, like layers of clay, to cover the areas that are to remain soft. The areas to be hardened are left exposed, allowing only certain parts of the steel to fully harden when quenched. Flame hardening Flame hardening is used to harden only a portion of the metal. Unlike differential hardening, where the entire piece is heated and then cooled at different rates, in flame hardening, only a portion of the metal is heated before quenching. This is usually easier than differential hardening, but often produces an extremely brittle zone between the heated metal and the unheated metal, as cooling at the edge of this heat-affected zone is extremely rapid. Induction hardening Induction hardening is a surface hardening technique in which the surface of the metal is heated very quickly, using a no-contact method of induction heating. The alloy is then quenched, producing a martensite transformation at the surface while leaving the underlying metal unchanged. 
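The tempering colors listed above behave like a lookup table from peak temperature to surface color. A minimal sketch that transcribes only those values (degrees Fahrenheit with the rounded Celsius equivalents from the text), ignoring the time dependence noted above:

    TEMPERING_COLORS = [
        (350, 176, "light yellowish"),
        (400, 204, "light-straw"),
        (440, 226, "dark-straw"),
        (500, 260, "brown"),
        (540, 282, "purple"),
        (590, 310, "deep blue"),
        (640, 337, "light blue"),
    ]

    def color_for_temperature_f(temp_f):
        """Return the last color reached at or below the given temperature."""
        reached = [name for f, _c, name in TEMPERING_COLORS if f <= temp_f]
        return reached[-1] if reached else "no visible tempering color"

    print(color_for_temperature_f(450))  # dark-straw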
Induction hardening creates a very hard, wear-resistant surface while maintaining the proper toughness in the majority of the object. Crankshaft journals are a good example of an induction hardened surface. Case hardening Case hardening is a thermochemical diffusion process in which an alloying element, most commonly carbon or nitrogen, diffuses into the surface of a monolithic metal. The resulting interstitial solid solution is harder than the base material, which improves wear resistance without sacrificing toughness. Laser surface engineering is a surface treatment with high versatility, selectivity and novel properties. Since the cooling rate is very high in laser treatment, even metastable phases such as metallic glass can be obtained by this method. Cold and cryogenic treating Although quenching steel causes the austenite to transform into martensite, not all of the austenite usually transforms. Some austenite crystals will remain unchanged even after quenching below the martensite finish (Mf) temperature. Further transformation of the austenite into martensite can be induced by slowly cooling the metal to extremely low temperatures. Cold treating generally consists of cooling the steel to around -115˚F (-81˚C), but does not eliminate all of the austenite. Cryogenic treating usually consists of cooling to much lower temperatures, often in the range of -315˚F (-192˚C), to transform most of the austenite into martensite. Cold and cryogenic treatments are typically done immediately after quenching, before any tempering, and will increase the hardness and wear resistance and reduce the internal stresses in the metal; but, because they are really an extension of the quenching process, they may increase the chances of cracking during the procedure. The process is often used for tools, bearings, or other items that require good wear resistance. However, it is usually only effective in high-carbon or high-alloy steels in which more than 10% austenite is retained after quenching. Decarburization The heating of steel is sometimes used as a method to alter the carbon content. When steel is heated in an oxidizing environment, the oxygen combines with the iron to form an iron-oxide layer, which protects the steel from decarburization. When the steel turns to austenite, however, the oxygen combines with iron to form a slag, which provides no protection from decarburization. The formation of slag and scale actually increases decarburization, because the iron oxide keeps oxygen in contact with the decarburization zone even after the steel is moved into an oxygen-free environment, such as the coals of a forge. Thus, the carbon atoms begin combining with the surrounding scale and slag to form both carbon monoxide and carbon dioxide, which are released into the air. Steel contains a relatively small percentage of carbon, which can migrate freely within the gamma iron. When austenitized steel is exposed to air for long periods of time, the carbon content in the steel can be lowered. This is the opposite of what happens when steel is heated in a reducing environment, in which carbon slowly diffuses further into the metal. In an oxidizing environment, the carbon can readily diffuse outwardly, so austenitized steel is very susceptible to decarburization. This is often used for cast steel, where a high carbon content is needed for casting, but a lower carbon content is desired in the finished product. It is often used on cast irons to produce malleable cast iron, in a process called "white tempering". 
This tendency to decarburize is often a problem in other operations, such as blacksmithing, where it becomes more desirable to austenize the steel for the shortest amount of time possible to prevent too much decarburization. Specification of heat treatment Usually the end condition is specified instead of the process used in heat treatment. Case hardening Case hardening is specified by "hardness" and "case depth". The case depth can be specified in two ways: total case depth or effective case depth. The total case depth is the true depth of the case. For most alloys, the effective case depth is the depth of the case that has a hardness equivalent of HRC50; however, some alloys specify a different hardness (40-60 HRC) at effective case depth; this is checked on a Tukon microhardness tester. This value can be roughly approximated as 65% of the total case depth; however, the chemical composition and hardenability can affect this approximation. If neither type of case depth is specified, the total case depth is assumed. For case hardened parts the specification should have a tolerance of at least ±. If the part is to be ground after heat treatment, the case depth is assumed to be after grinding. The Rockwell hardness scale used for the specification depends on the total case depth, as shown in the table below. Usually, hardness is measured on the Rockwell "C" scale, but the load used on that scale will penetrate through a thin case, and using Rockwell "C" for a thinner case will result in a false reading. For cases that are too thin for a Rockwell scale to be used reliably, file hard is specified instead. File hard is approximately equivalent to 58 HRC. When specifying the hardness, either a range should be given or the minimum hardness specified. If a range is specified, at least 5 points should be given. Through hardening Only hardness is listed for through hardening. It is usually in the form of HRC with at least a five-point range. Annealing The hardness for an annealing process is usually listed on the HRB scale as a maximum value. Annealing is a process used to refine grain size, improve strength, remove residual stress, and affect the electromagnetic properties. Types of furnaces Furnaces used for heat treatment can be split into two broad categories: batch furnaces and continuous furnaces. Batch furnaces are usually manually loaded and unloaded, whereas continuous furnaces have an automatic conveying system to provide a constant load into the furnace chamber. Batch furnaces Batch systems usually consist of an insulated chamber with a steel shell, a heating system, and an access door to the chamber. Box-type furnace Many basic box-type furnaces have been upgraded to semi-continuous batch furnaces with the addition of integrated quench tanks and slow-cool chambers. These upgraded furnaces are a very commonly used piece of equipment for heat-treating. Car-type furnace Also known as a "bogie hearth", the car furnace is an extremely large batch furnace. The floor is constructed as an insulated movable car that is moved in and out of the furnace for loading and unloading. The car is usually sealed using sand seals or solid seals when in position. Due to the difficulty in getting a sufficient seal, car furnaces are usually used for non-atmosphere processes. 
Elevator-type furnace Similar in type to the car furnace, except that the car and hearth are rolled into position beneath the furnace and raised by means of a motor-driven mechanism, elevator furnaces can handle large heavy loads and often eliminate the need for any external cranes and transfer mechanisms. Bell-type furnace Bell furnaces have removable covers called bells, which are lowered over the load and hearth by crane. An inner bell is placed over the hearth and sealed to supply a protective atmosphere. An outer bell is lowered to provide the heat supply. Pit furnaces Furnaces that are constructed in a pit and extend to floor level or slightly above are called pit furnaces. Workpieces can be suspended from fixtures, held in baskets, or placed on bases in the furnace. Pit furnaces are suited to heating long tubes, shafts, and rods by holding them in a vertical position. This manner of loading provides minimal distortion. Salt bath furnaces Salt baths are used in a wide variety of heat treatment processes including neutral hardening, liquid carburising, liquid nitriding, austempering, martempering and tempering. Parts are loaded into a pot of molten salt where they are heated by conduction, giving a very readily available source of heat. The core temperature of a part rises in temperature at approximately the same rate as its surface in a salt bath. Salt baths utilize a variety of salts for heat treatment, with cyanide salts being the most extensively used. Concerns about associated occupation health and safety, and expensive waste management and disposal due to their environmental effects have made the use of salt baths less attractive in recent years. Consequently, many salt baths are being replaced by more environmentally friendly fluidized bed furnaces. Fluidised bed furnaces A fluidised bed consists of a cylindrical retort made from high-temperature alloy, filled with sand-like aluminum oxide particulate. Gas (air or nitrogen) is bubbled through the oxide and the sand moves in such a way that it exhibits fluid-like behavior, hence the term fluidized. The solid-solid contact of the oxide gives very high thermal conductivity and excellent temperature uniformity throughout the furnace, comparable to those seen in a salt bath.
Technology
Metallurgy
null
182255
https://en.wikipedia.org/wiki/Trench
Trench
A trench is a type of excavation or depression in the ground that is generally deeper than it is wide (as opposed to a swale or a bar ditch), and narrow compared with its length (as opposed to a simple hole or pit). In geology, trenches result from erosion by rivers or by geological movement of tectonic plates. In civil engineering, trenches are often created to install underground utilities such as gas, water, power and communication lines. In construction, trenches are dug for foundations of buildings, retaining walls and dams, and for cut-and-cover construction of tunnels. In archaeology, the "trench method" is used for searching and excavating ancient ruins or to dig into strata of sedimented material. In geotechnical engineering, trench investigations locate faults and investigate deep soil properties. In trench warfare, soldiers occupy trenches to protect them against weapons fire and artillery. Trenches are dug using manual tools such as shovel and pickaxe or heavy equipment such as backhoe, trencher, and excavator. For deep trenches, the instability of steep earthen walls requires engineering and safety techniques such as shoring. Trenches are usually considered temporary structures that are backfilled with soil after construction or abandoned after use. Some trenches are stabilized using durable materials such as concrete to create open passages such as canal and sunken roadways. Geology Some trenches are created as a result of erosion by running water or by glaciers (which may have long since disappeared). Others, such as rift valleys or oceanic trenches, are created by geological movement of tectonic plates. Some oceanic trenches include the Mariana Trench and the Aleutian Trench. The former geoform is relatively deep (approximately ), linear and narrow, and is formed by plate subduction when plates converge. Civil engineering In the civil engineering fields of construction and maintenance of infrastructure, trenches play a major role. They are used for installation of underground infrastructure or utilities (such as gas mains, water mains, communication lines and pipelines) that would be obstructive or easily damaged if placed above ground. Trenches are needed later for access to these installations for service. They may be created to search for pipes and other infrastructure whose exact location is no longer known ("search trench" or "search slit"). Finally, trenches may be created as the first step of creating a foundation wall. Trench shoring is often used in trenchworks to protect workers and stabilise the steep walls. An alternative to digging trenches is to create a utility tunnel. Such a tunnel may be dug by boring or by using a trench for cut-and-cover construction. The advantages of utility tunnels are the reduction of maintenance manholes, one-time relocation, and less excavation and repair, compared with separate cable ducts for each service. When they are well mapped, they also allow rapid access to all utilities without having to dig access trenches or resort to confused and often inaccurate utility maps. An important advantage to placing utilities underground is public safety. Underground power lines, whether in common or separate channels, prevent downed utility cables from blocking roads, thus speeding emergency access after natural disasters such as earthquakes, hurricanes, and tsunamis. In some cases, a large trench is dug and deliberately preserved (not filled in), often for transport purposes. 
This is typically done to install depressed motorways, open railway cuttings, or canals. However, these large, permanent trenches are significant barriers to other forms of travel, and often become de facto boundaries between neighborhoods or other spaces. Military engineering Trenches have often been dug for military purposes. In the pre-firearm era, they were mainly a type of hindrance to an attacker of a fortified location, such as the moat around a castle (this is technically called a ditch). An early example of this can be seen in the Battle of the Trench, one of the early battles fought by Muhammad. With the advent of accurate firearms, trenches were used to shelter troops. Trench warfare and tactics evolved further in the Crimean War, the American Civil War and World War I, until systems of extensive main trenches, backup trenches (in case the first lines were overrun) and communication trenches often stretched dozens of kilometres along a front without interruption, and some kilometres further back from the front line. The area of land between opposing trenches in trench warfare is known as "No Man's Land" because it often offers no protection from enemy fire. After the war concluded, the trench became a symbol of World War I and its horrors. Archaeology Trenches are used for searching and excavating ancient ruins or to dig into strata of sedimented material to get a sideways (layered) view of the deposits – with a hope of being able to place found objects or materials in a chronological order. The advantage of this method is that it destroys only a small part of the site (those areas where the trenches, often arranged in a grid pattern, are located). However, this method also has the disadvantage of only revealing small slices of the whole volume, and modern archaeological digs usually employ a combination of methods. Safety Trenches deeper than about 1.5 m present safety risks arising from their steep walls and confined space. These risks are similar to those from pits or any steep-walled excavation: falling, injury from cave-in (wall collapse), inability to escape the trench, drowning and asphyxiation. Falling into the trench: mitigation methods include barriers such as railings or fencing. Injury from cave-in, meaning collapse of a steep wall: mitigation includes construction of sloped walls (sloped trench) or stepped walls (benched trench); a sketch of the setback arithmetic follows this article. For vertical walls, trench shoring stabilizes the walls, and trench shielding provides a barrier against collapsed material. The risk of cave-in increases with surcharge load, which is any weight placed outside the trench near its edge, such as the spoil pile (soil excavated from the trench) or heavy equipment; these add extra stress to the walls of the trench. Inability to escape the trench because of steep and unstable walls, which may be difficult to climb: ladders, stairs, or ramps allow exit, and cranes may assist rescue. Drowning in water or mud that has accumulated in the trench from rain, seepage, or leaking water pipes. Asphyxiation, poisoning, fire and explosion from gases denser than air that have settled in the trench. These may come from nearby industrial processing of such gases, intentional use within the trench, or leakage from nearby plumbing. They present an asphyxiation hazard and may also be toxic. Burnable gases such as natural gas present a fire and explosion risk. Oxidizers such as pure oxygen increase the risk of fire from other fuels present in the trench. 
Gases such as pure nitrogen and natural gas have densities similar to air but are denser when cold, for example when they have evaporated from liquid form, and may creep along the ground and fill the trench. Ventilation fans and ducts reduce the risk. Oxygen sensors and other gas sensors detect the danger; alarms from the sensors can warn the occupants.
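The sloped-wall mitigation described above can be made concrete with a small calculation: the setback of each wall grows linearly with trench depth. The sketch below is illustrative only; the slope ratios are assumptions taken from commonly cited US excavation-safety guidance (OSHA soil types A, B and C), not values from this article, so consult the actual regulations before relying on them.

```python
# Illustrative sketch: horizontal setback needed to slope a trench wall.
# The slope ratios (horizontal units per unit of depth) are assumed
# values from commonly cited OSHA guidance for short-term excavations.
SLOPE_RATIOS = {
    "stable_rock": 0.0,   # vertical walls permitted
    "type_a": 0.75,       # e.g. clay: 3/4 unit horizontal per unit depth
    "type_b": 1.0,        # e.g. silt: 1:1
    "type_c": 1.5,        # e.g. sand, gravel: 1.5:1
}

def sloped_wall_setback(depth_m: float, soil_type: str) -> float:
    """Horizontal distance each wall must be cut back for a given depth."""
    return depth_m * SLOPE_RATIOS[soil_type]

# A 3 m deep trench in Type C soil needs each wall cut back 4.5 m, so the
# excavation at the surface is 9 m wider than the trench floor.
print(sloped_wall_setback(3.0, "type_c"))  # 4.5
```

This linear relationship is why sloping quickly becomes impractical for deep trenches and why shoring or shielding takes over.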
Technology
Earthworks
null
182283
https://en.wikipedia.org/wiki/Miles%20per%20hour
Miles per hour
Miles per hour (mph, m.p.h., MPH, or mi/h) is a British imperial and United States customary unit of speed expressing the number of miles travelled in one hour. It is used in the United Kingdom, the United States, and a number of smaller countries, most of which are UK or US territories, or have close historical ties with the UK or US. Usage Road traffic Speed limits and road traffic speeds are given in miles per hour in the following jurisdictions: Antigua and Barbuda; the Bahamas; Belize; Dominica; Grenada; Liberia (occasionally); the Marshall Islands; Micronesia; Palau; Saint Kitts and Nevis; Saint Lucia; Saint Vincent and the Grenadines; the United Kingdom; the British Overseas Territories of Anguilla, the British Virgin Islands, the British Indian Ocean Territory, the Cayman Islands, the Falkland Islands, Montserrat, Saint Helena, Ascension and Tristan da Cunha, and the Turks and Caicos Islands; the Crown dependencies (the Bailiwick of Guernsey, the Isle of Man and Jersey); the United States; and the United States overseas dependencies of American Samoa, Guam, the Northern Mariana Islands, Puerto Rico and the United States Virgin Islands. Rail networks Miles per hour is the unit used on the US, Canadian and Irish rail systems. Miles per hour is also used on British rail systems, excluding trams, some light metro systems, the Channel Tunnel and High Speed 1. Nautical and aeronautical usage Nautical and aeronautical applications favour the knot as a common unit of speed. (One knot is one nautical mile per hour, with a nautical mile being exactly 1,852 metres or about 6,076 feet.) Other usage In some countries mph may be used to express the speed of delivery of a ball in sporting events such as cricket, tennis and baseball. Conversions 1 mph = 1.609344 km/h (exactly) = 0.44704 m/s (exactly) = 22/15 ft/s, about 1.467 ft/s (exactly) ≈ 0.8690 knots.
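The conversion factors above follow directly from the international definition of the mile (1,609.344 metres exactly), so they can be derived rather than memorised. A minimal sketch:

```python
# Minimal sketch of mph conversions. The metres-per-mile factor is exact
# by international definition, so the derived factors are exact as well
# (the knot factor uses the exact 1,852 m nautical mile noted above).
METRES_PER_MILE = 1609.344
SECONDS_PER_HOUR = 3600.0

def mph_to_ms(mph: float) -> float:
    """Miles per hour to metres per second (1 mph = 0.44704 m/s exactly)."""
    return mph * METRES_PER_MILE / SECONDS_PER_HOUR

def mph_to_kmh(mph: float) -> float:
    """Miles per hour to kilometres per hour (1 mph = 1.609344 km/h exactly)."""
    return mph * METRES_PER_MILE / 1000.0

def mph_to_knots(mph: float) -> float:
    """Miles per hour to knots (one knot = one nautical mile per hour)."""
    return mph * METRES_PER_MILE / 1852.0

print(mph_to_ms(70))     # 31.2928
print(mph_to_kmh(70))    # 112.65408
print(mph_to_knots(70))  # ~60.83
```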
Physical sciences
Speed
Basics and measurement
182358
https://en.wikipedia.org/wiki/Cubit
Cubit
The cubit is an ancient unit of length based on the distance from the elbow to the tip of the middle finger. It was primarily associated with the Sumerians, Egyptians, and Israelites. The term cubit is found in the Bible regarding Noah's Ark, the Ark of the Covenant, the Tabernacle, and Solomon's Temple. The common cubit was divided into 6 palms × 4 fingers = 24 digits. Royal cubits added a palm for 7 palms × 4 fingers = 28 digits (a conversion sketch follows this article). The lengths of these cubits varied considerably, with one ancient Roman cubit (the 120-centimetre ulna discussed below) being especially long. Cubits of various lengths were employed in many parts of the world in antiquity, during the Middle Ages and as recently as early modern times. The term is still used in hedgelaying, the length of the forearm being frequently used to determine the interval between stakes placed within the hedge. Etymology The English word "cubit" comes from the Latin noun cubitum ("elbow"), from the verb cubare ("to lie down"), from which also comes the adjective "recumbent". Ancient Egyptian royal cubit The ancient Egyptian royal cubit is the earliest attested standard measure. Cubit rods were used for the measurement of length. A number of these rods have survived: two are known from the tomb of Maya, the treasurer of the 18th dynasty pharaoh Tutankhamun, in Saqqara; another was found in the tomb of Kha (TT8) in Thebes. Fourteen such rods, including one double cubit rod, were described and compared by Lepsius in 1865. These cubit rods vary slightly in length and are divided into seven palms; each palm is divided into four fingers, and the fingers are further subdivided. Early evidence for the use of this royal cubit comes from the Early Dynastic Period: on the Palermo Stone, the flood level of the Nile river during the reign of the Pharaoh Djer is given as measuring 6 cubits and 1 palm. Use of the royal cubit is also known from Old Kingdom architecture, from at least as early as the construction of the Step Pyramid of Djoser designed by Imhotep in around 2700 BC. Ancient Mesopotamian units of measurement Ancient Mesopotamian units of measurement originated in the loosely organized city-states of Early Dynastic Sumer. Each city, kingdom and trade guild had its own standards until the formation of the Akkadian Empire, when Sargon of Akkad issued a common standard. This standard was improved by Naram-Sin, but fell into disuse after the Akkadian Empire dissolved. The standard of Naram-Sin was readopted in the Ur III period by the Nanše Hymn, which reduced a plethora of multiple standards to a few agreed-upon common groupings. Successors to Sumerian civilization including the Babylonians, Assyrians, and Persians continued to use these groupings. The Classical Mesopotamian system formed the basis for Elamite, Hebrew, Urartian, Hurrian, Hittite, Ugaritic, Phoenician, Babylonian, Assyrian, Persian, Arabic, and Islamic metrologies. The Classical Mesopotamian system also has a proportional relationship, by virtue of standardized commerce, to Bronze Age Harappan and Egyptian metrologies. In 1916, during the last years of the Ottoman Empire and in the middle of World War I, the German assyriologist Eckhard Unger found a copper-alloy bar while excavating at Nippur. Unger claimed the bar was used as a measurement standard, and this irregularly formed and irregularly marked graduated rule supposedly defined the length of the Sumerian cubit. There is some evidence that cubits were used to measure angular separation. 
The Babylonian Astronomical Diary for 568–567 BCE refers to Jupiter being one cubit behind the elbow of Sagittarius; one such cubit measures about 2 degrees. Biblical cubit The standard of the cubit in different countries and in different ages has varied. This realization led the rabbis of the 2nd century CE to clarify the length of their cubit, saying that the measure of the cubit of which they have spoken "applies to the cubit of middle-size". In this case, the requirement is to make use of a standard 6 handbreadths to each cubit, where the handbreadth is not to be confused with an outstretched palm but is rather one that is clenched, and which has the standard width of 4 fingerbreadths (each fingerbreadth being equivalent to the width of a thumb, about 2.25 cm). This puts the handbreadth at roughly 9 cm, and 6 handbreadths (1 cubit) at roughly 54 cm. Epiphanius of Salamis, in his treatise On Weights and Measures, describes how it was customary, in his day, to take the measurement of the biblical cubit: "The cubit is a measure, but it is taken from the measure of the forearm. For the part from the elbow to the wrist and the palm of the hand is called the cubit, the middle finger of the cubit measure being also extended at the same time and there being added below (it) the span, that is, of the hand, taken all together." Rabbi Avraham Chaim Naeh and Avrohom Yeshaya Karelitz (the "Chazon Ish") disagreed on the modern equivalent of the cubit, the Chazon Ish favouring a notably longer measure. Rabbi and philosopher Maimonides, following the Talmud, makes a distinction between the cubit of 6 handbreadths used in ordinary measurements, and the cubit of 5 handbreadths used in measuring the Golden Altar, the base of the altar of burnt offerings, its circuit and the horns of the altar. Ancient Greece In ancient Greek units of measurement, the standard forearm cubit was somewhat longer than the short forearm cubit, which was measured from the knuckle of the middle finger (i.e., fist clenched) to the elbow. Ancient Rome In ancient Rome, according to Vitruvius, a cubit was equal to 1½ Roman feet or 6 palm widths. A 120-centimetre cubit (approximately four feet long), called the Roman ulna, was common in the Roman empire; this cubit was measured from the fingers of the outstretched arm opposite the man's hip. Islamic world In the Islamic world, the cubit had a similar origin, being originally defined as the arm from the elbow to the tip of the middle finger. Several different cubit lengths were current in the medieval Islamic world, and the cubit was commonly subdivided into six handsbreadths, with each handsbreadth divided into four fingerbreadths. The most commonly used definitions were: the legal cubit, also known as the hand cubit, cubit of Yusuf (named after the 8th-century Abu Yusuf), postal cubit, "freed" cubit and thread cubit, whose length in the Abbasid Caliphate differed slightly, possibly as a result of reforms of Caliph al-Ma'mun; the black cubit, adopted in the Abbasid period and fixed by the measure used in the Nilometer on Rawda Island, also known as the common cubit or sack-cloth cubit, and the most commonly used in the Maghreb and Islamic Spain; and the king's cubit, inherited from the Sassanid Persians, which measured eight handsbreadths. It was this measure that Ziyad ibn Abihi used for his survey of Iraq, and it is hence also known as the Ziyadi cubit or survey cubit. 
From the time of Caliph al-Mansur it was also known as the Hashemite cubit. Other identical measures were the work cubit and likely at least one other. The cloth cubit fluctuated widely according to region, with distinct standards in Egypt, Damascus, Aleppo, Baghdad, and Istanbul. A variety of more local or specific cubit measures were developed over time: the "small" Hashemite cubit, also known as the cubit of Bilal (named after the 8th-century Basran Bilal ibn Abi Burda); the Egyptian carpenter's cubit or architect's cubit, reduced and standardized in the 19th century; the house cubit, introduced by the Abbasid-era Ibn Abi Layla; and the cubit of Umar and its double, the scale cubit, established by al-Ma'mun and used mainly for measuring canals. In medieval and early modern Persia, the cubit was either the legal cubit or the Isfahan cubit. A royal cubit appeared in the 17th century, while a "shortened" cubit (likely derived from the widely used cloth cubit of Aleppo) was used for cloth. The measure survived into the 20th century. Mughal India also had its own royal cubit. Other systems Other measurements based on the length of the forearm include some lengths of ell, the Russian lokot, and traditional Indian, Thai, Malay, Tamil, Telugu, Khmer, and Tibetan units. Cubit arm in heraldry A cubit arm in heraldry may be dexter or sinister. It may be vested (with a sleeve) and may be shown in various positions, most commonly erect, but also fesswise (horizontal) or bendwise (diagonal), and is often shown grasping objects. It is most often used erect as a crest, for example by the families of Poyntz of Iron Acton, Rolle of Stevenstone and Turton.
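As a worked illustration of the subdivision scheme described above (24 digits to the common cubit, 28 to the royal), the sketch below converts digits to centimetres. The 52.4 cm royal-cubit length is an assumed round figure within the range of surviving Egyptian cubit rods, not a value quoted in this article.

```python
# Sketch of the Egyptian subdivision scheme: a common cubit of
# 6 palms x 4 fingers = 24 digits, a royal cubit of 7 palms = 28 digits.
ROYAL_CUBIT_CM = 52.4           # assumption: one commonly quoted value
DIGIT_CM = ROYAL_CUBIT_CM / 28  # 28 digits per royal cubit

def digits_to_cm(digits: float) -> float:
    """Convert a count of digits to centimetres under the assumed standard."""
    return digits * DIGIT_CM

print(digits_to_cm(24))  # common cubit of 24 digits, ~44.9 cm
print(digits_to_cm(4))   # one palm (4 digits), ~7.5 cm
```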
Physical sciences
Length and distance
null
182451
https://en.wikipedia.org/wiki/Ethylenediaminetetraacetic%20acid
Ethylenediaminetetraacetic acid
Ethylenediaminetetraacetic acid (EDTA), also called EDTA acid, is an aminopolycarboxylic acid with the formula [CH2N(CH2CO2H)2]2. This white, slightly water-soluble solid is widely used to bind to iron (Fe2+/Fe3+) and calcium ions (Ca2+), forming water-soluble complexes even at neutral pH. It is thus used to dissolve Fe- and Ca-containing scale as well as to deliver iron ions under conditions where its oxides are insoluble. EDTA is available as several salts, notably disodium EDTA, sodium calcium edetate, and tetrasodium EDTA, but these all function similarly. Uses EDTA is widely used in industry. It also has applications in food preservation, medicine, cosmetics, water softening, in laboratories, and other fields. Industrial EDTA is mainly used to sequester (bind or confine) metal ions in aqueous solution. In the textile industry, it prevents metal ion impurities from modifying the colours of dyed products. In the pulp and paper industry, EDTA inhibits the ability of metal ions, especially Mn2+, to catalyse the disproportionation of hydrogen peroxide, which is used in chlorine-free bleaching. Gas scrubbing Aqueous [Fe(EDTA)]− is used for removing ("scrubbing") hydrogen sulfide from gas streams. This conversion is achieved by oxidising the hydrogen sulfide to elemental sulfur, which is non-volatile: 2 [Fe(EDTA)]− + H2S → 2 [Fe(EDTA)]2− + S + 2 H+ In this application, the iron(III) centre is reduced to its iron(II) derivative, which can then be reoxidised by air. In a similar manner, nitrogen oxides are removed from gas streams using [Fe(EDTA)]2−. Food In a similar manner, EDTA is added to some food as a preservative or stabiliser to prevent catalytic oxidative decolouration, which is catalysed by metal ions. Water softener The reduction of water hardness in laundry applications and the dissolution of scale in boilers both rely on EDTA and related complexants to bind Ca2+, Mg2+, and other metal ions. Once bound to EDTA, these metal complexes are less likely to form precipitates or to interfere with the action of soaps and detergents. For similar reasons, cleaning solutions often contain EDTA. In a similar manner, EDTA is used in the cement industry for the determination of free lime and free magnesia in cement and clinkers. EDTA can solubilise Fe3+ ions at or below near-neutral pH. This property is useful in agriculture, including hydroponics. However, given the pH dependence of ligand formation, EDTA is not helpful for improving iron solubility in soils above neutral pH. Without such a chelating agent, at near-neutral pH and above, iron(III) forms insoluble salts, which are less bioavailable to susceptible plant species. Ion-exchange chromatography EDTA was used in the separation of the lanthanide metals by ion-exchange chromatography. Perfected by F. H. Spedding et al. in 1954, the method relies on the steady increase in the stability constant of the lanthanide EDTA complexes with atomic number. Using sulfonated polystyrene beads and Cu2+ as a retaining ion, EDTA causes the lanthanides to migrate down the column of resin while separating into bands of pure lanthanides. The lanthanides elute in order of decreasing atomic number. Due to the expense of this method, relative to countercurrent solvent extraction, ion exchange is now used only to obtain the highest purities of lanthanides (typically greater than 99.99%). Medicine Sodium calcium edetate, an EDTA derivative, is used to bind metal ions in the practice of chelation therapy, such as for treating mercury and lead poisoning. 
It is used in a similar manner to remove excess iron from the body. This therapy is used to treat the complications of repeated blood transfusions, as in the treatment of thalassaemia. In testing In medical diagnosis and organ function tests (here, a kidney function test), the chromium(III) complex [Cr(EDTA)]− (as radioactive chromium-51 (51Cr)) is administered intravenously and its filtration into the urine is monitored. This method is useful for evaluating glomerular filtration rate (GFR) in nuclear medicine. EDTA is used extensively in the analysis of blood. It is an anticoagulant for blood samples for CBC/FBCs, where the EDTA chelates the calcium present in the blood specimen, arresting the coagulation process and preserving blood cell morphology. Tubes containing EDTA are marked with lavender (purple) or pink tops. EDTA is also in tan top tubes for lead testing and can be used in royal blue top tubes for trace metal testing. EDTA is a slime dispersant, and has been found to be highly effective in reducing bacterial growth during implantation of intraocular lenses (IOLs). Dentistry Dentists and endodontists use EDTA solutions to remove inorganic debris (the smear layer) and lubricate the root canals in endodontics. This procedure helps prepare root canals for obturation. Furthermore, EDTA solutions with an added surfactant loosen calcifications inside a root canal, allowing instrumentation (canal shaping) and facilitating the apical advancement of a file towards the apex in a tight or calcified root canal. Eyedrops It serves as a preservative (usually to enhance the action of another preservative such as benzalkonium chloride or thiomersal) in ocular preparations and eyedrops. Alternative medicine Some alternative practitioners believe EDTA acts as an antioxidant, preventing free radicals from injuring blood vessel walls, therefore reducing atherosclerosis. These ideas are unsupported by scientific studies, and seem to contradict some currently accepted principles. The U.S. FDA has not approved it for the treatment of atherosclerosis. Cosmetics In shampoos, cleaners, and other personal care products, EDTA salts are used as a sequestering agent to improve their stability in air. Laboratory applications In the laboratory, EDTA is widely used for scavenging metal ions: in biochemistry and molecular biology, ion depletion is commonly used to deactivate metal-dependent enzymes, either as an assay for their reactivity or to suppress damage to DNA, proteins, and polysaccharides. EDTA also acts as a selective inhibitor against dNTP-hydrolysing enzymes (Taq polymerase, dUTPase, MutT), liver arginase and horseradish peroxidase independently of metal ion chelation. These findings suggest rethinking the use of EDTA as a biochemically inactive metal ion scavenger in enzymatic experiments. In analytical chemistry, EDTA is used in complexometric titrations and analysis of water hardness, or as a masking agent to sequester metal ions that would interfere with the analyses. EDTA finds many specialised uses in biomedical labs, such as in veterinary ophthalmology as an anticollagenase to prevent the worsening of corneal ulcers in animals. In tissue culture, EDTA is used as a chelating agent that binds calcium and prevents the joining of cadherins between cells, preventing the clumping of cells grown in liquid suspension, or detaching adherent cells for passaging. 
In histopathology, EDTA can be used as a decalcifying agent, making it possible to cut sections using a microtome once the tissue sample is demineralised. EDTA is also known to inhibit a range of metallopeptidases; inhibition occurs via the chelation of the metal ion required for catalytic activity. EDTA can also be used to test for the bioavailability of heavy metals in sediments. However, it may influence the bioavailability of metals in solution, which may pose concerns regarding its effects in the environment, especially given its widespread uses and applications. Other The oxidising properties of [Fe(EDTA)]− are used in photography to solubilise silver particles. EDTA is also used to remove crud (corroded metals) from fuel rods in nuclear reactors. Side effects EDTA exhibits low acute toxicity, with an LD50 (rat) of 2.0 g/kg to 2.2 g/kg. It has been found to be both cytotoxic and weakly genotoxic in laboratory animals. Oral exposures have been noted to cause reproductive and developmental effects. The same study also found that both dermal exposure to EDTA in most cosmetic formulations and inhalation exposure to EDTA in aerosolised cosmetic formulations would produce exposure levels below those seen to be toxic in oral dosing studies. Synthesis The compound was first described in 1935 by Ferdinand Münz, who prepared it from ethylenediamine and chloroacetic acid. Today, EDTA is mainly synthesised from ethylenediamine (1,2-diaminoethane), formaldehyde, and sodium cyanide. This route yields tetrasodium EDTA, which is converted in a subsequent step into the acid forms: H2NCH2CH2NH2 + 4 CH2O + 4 NaCN + 4 H2O → (NaO2CCH2)2NCH2CH2N(CH2CO2Na)2 + 4 NH3 (NaO2CCH2)2NCH2CH2N(CH2CO2Na)2 + 4 HCl → (HO2CCH2)2NCH2CH2N(CH2CO2H)2 + 4 NaCl This process is used to produce about 80,000 tonnes of EDTA each year. Impurities cogenerated by this route include glycine and nitrilotriacetic acid; they arise from reactions of the ammonia coproduct. Nomenclature To describe EDTA and its various protonated forms, chemists distinguish between EDTA4−, the conjugate base that is the ligand, and H4EDTA, the precursor to that ligand. At very low pH (very acidic conditions) the fully protonated H6EDTA2+ form predominates, whereas at very high pH (very basic conditions) the fully deprotonated EDTA4− form is prevalent. In this article, the term EDTA is used to mean H4−xEDTAx−, whereas in its complexes EDTA4− stands for the tetraanion ligand. Coordination chemistry principles In coordination chemistry, EDTA4− is a member of the aminopolycarboxylic acid family of ligands. EDTA4− usually binds to a metal cation through its two amines and four carboxylates, i.e., it is a hexadentate ("six-toothed") chelating agent. Many of the resulting coordination compounds adopt octahedral geometry. Although of little consequence for its applications, these octahedral complexes are chiral. The cobalt(III) anion [Co(EDTA)]− has been resolved into enantiomers. Many complexes of EDTA4− adopt more complex structures due to either the formation of an additional bond to water, i.e. seven-coordinate complexes, or the displacement of one carboxylate arm by water. The iron(III) complex of EDTA is seven-coordinate. Early work on the development of EDTA was undertaken by Gerold Schwarzenbach in the 1940s. EDTA forms especially strong complexes with Mn(II), Cu(II), Fe(III), Pb(II) and Co(III). Several features of EDTA's complexes are relevant to its applications. 
First, because of its high denticity, this ligand has a high affinity for metal cations: [Fe(H2O)6]3+ + H4EDTA ⇌ [Fe(EDTA)]− + 6 H2O + 4 H+ (Keq = 10^25.1) Written in this way, the equilibrium quotient shows that metal ions compete with protons for binding to EDTA. Because metal ions are extensively enveloped by EDTA, their catalytic properties are often suppressed. Finally, since complexes of EDTA4− are anionic, they tend to be highly soluble in water. For this reason, EDTA is able to dissolve deposits of metal oxides and carbonates. The pKa values of free EDTA are 0, 1.5, 2 and 2.66 (deprotonation of the four carboxyl groups), and 6.16 and 10.24 (deprotonation of the two amino groups); a speciation sketch based on these values follows this article. Environmental concerns Abiotic degradation EDTA is in such widespread use that questions have been raised whether it is a persistent organic pollutant. While EDTA serves many positive functions in different industrial, pharmaceutical and other avenues, its longevity can pose serious issues in the environment. The degradation of EDTA is slow. It mainly occurs abiotically in the presence of sunlight. The most important process for the elimination of EDTA from surface waters is direct photolysis at wavelengths below 400 nm. Depending on the light conditions, the photolysis half-lives of iron(III) EDTA in surface waters can range from as low as 11.3 minutes up to more than 100 hours. Degradation of FeEDTA, but not EDTA itself, produces iron complexes of the triacetate (ED3A), diacetate (EDDA), and monoacetate (EDMA) – 92% of EDDA and EDMA biodegrades in 20 hours, while ED3A displays significantly higher resistance. Many environmentally abundant EDTA complexes (such as those with Mg2+ and Ca2+) are more persistent. Biodegradation In many industrial wastewater treatment plants, EDTA elimination can be achieved at about 80% using microorganisms. The resulting byproducts are ED3A and iminodiacetic acid (IDA), suggesting that both the backbone and the acetyl groups were attacked. Some microorganisms have even been discovered to form nitrates out of EDTA, but they function optimally at moderately alkaline conditions of pH 9.0–9.5. Several bacterial strains isolated from sewage treatment plants efficiently degrade EDTA. Specific strains include Agrobacterium radiobacter ATCC 55002 and sub-branches of Pseudomonadota like BNC1, BNC2, and strain DSM 9103. The three strains share similar properties of aerobic respiration and are classified as gram-negative bacteria. Unlike photolysis, biodegradation is not limited to the iron(III) complex. Rather, each strain uniquely consumes varying metal–EDTA complexes through several enzymatic pathways. Agrobacterium radiobacter only degrades Fe(III) EDTA, while BNC1 and DSM 9103 are not capable of degrading iron(III) EDTA and are better suited to calcium, barium, magnesium and manganese(II) complexes. EDTA complexes require dissociation before degradation. Alternatives to EDTA Interest in environmental safety has raised concerns about the biodegradability of aminopolycarboxylates such as EDTA. These concerns incentivize the investigation of alternative aminopolycarboxylates. Candidate chelating agents include nitrilotriacetic acid (NTA), iminodisuccinic acid (IDS), polyaspartic acid, S,S-ethylenediamine-N,N′-disuccinic acid (EDDS), methylglycinediacetic acid (MGDA), and L-glutamic acid N,N-diacetic acid, tetrasodium salt (GLDA). Iminodisuccinic acid (IDS) Commercially used since 1998, iminodisuccinic acid (IDS) biodegrades by about 80% after only 7 days. 
IDS binds to calcium exceptionally well and forms stable compounds with other heavy metal ions. In addition to having a lower toxicity after chelation, IDS is degraded by Agrobacterium tumefaciens (BY6), which can be harvested on a large scale. The enzymes involved, IDS epimerase and C−N lyase, do not require any cofactors. Polyaspartic acid Polyaspartic acid, like IDS, binds to calcium and other heavy metal ions. It has many practical applications, including corrosion inhibitors, wastewater additives, and agricultural polymers. A polyaspartic acid-based laundry detergent was the first laundry detergent in the world to receive the EU flower ecolabel. The calcium-binding ability of polyaspartic acid has been exploited for targeting drug-loaded nanocarriers to bone. Preparation of hydrogels based on polyaspartic acid, in a variety of physical forms ranging from fibres to particles, can potentially enable facile separation of the chelated ions from a solution. Therefore, despite being a weaker chelator than EDTA, polyaspartic acid can still be regarded as a viable alternative due to these features as well as its biocompatibility and biodegradability. S,S-Ethylenediamine-N,N′-disuccinic acid (EDDS) A structural isomer of EDTA, ethylenediamine-N,N′-disuccinic acid (EDDS) is readily biodegradable at a high rate in its S,S form. Methylglycinediacetic acid (MGDA) Trisodium dicarboxymethyl alaninate, also known as methylglycinediacetic acid (MGDA), has a high rate of biodegradation at over 68%, but unlike many other chelating agents can degrade without the assistance of adapted bacteria. Additionally, unlike EDDS or IDS, MGDA can withstand higher temperatures while maintaining high stability across the entire pH range. MGDA has been shown to be an effective chelating agent, with a capacity for mobilisation comparable with that of nitrilotriacetic acid (NTA), with application to water for industrial use and for the removal of calcium oxalate from urine from patients with kidney stones. Methods of detection and analysis The most sensitive method of detecting and measuring EDTA in biological samples is selected reaction monitoring capillary electrophoresis mass spectrometry (SRM-CE/MS), which has a detection limit of 7.3 ng/mL in human plasma and a quantitation limit of 15 ng/mL. This method works with sample volumes as small as 7–8 nL. EDTA has also been measured in non-alcoholic beverages using high performance liquid chromatography (HPLC) at a level of 2.0 μg/mL. In popular culture In the movie Blade (1998), EDTA is used as a weapon to kill vampires, exploding when in contact with vampire blood.
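The six pKa values quoted in the coordination-chemistry section above determine which protonation state of EDTA dominates at any given pH. The sketch below applies standard polyprotic-acid algebra to those values; it is an illustration of the speciation arithmetic, not a tool from any cited study.

```python
# Sketch: speciation of EDTA versus pH, using the pKa values quoted
# above (0, 1.5, 2, 2.66 for the carboxyls; 6.16 and 10.24 for the
# ammonium protons). For a polyprotic acid, the relative abundance of
# the species that has lost n protons is proportional to
# 10**(n*pH - sum of the first n pKa values).
PKAS = [0.0, 1.5, 2.0, 2.66, 6.16, 10.24]  # H6EDTA(2+) -> EDTA(4-)
SPECIES = ["H6EDTA2+", "H5EDTA+", "H4EDTA", "H3EDTA-",
           "H2EDTA2-", "HEDTA3-", "EDTA4-"]

def edta_fractions(ph: float) -> list[float]:
    """Mole fractions of the seven protonation states at a given pH."""
    log_terms = [0.0]
    running = 0.0
    for n, pka in enumerate(PKAS, start=1):
        running += pka
        log_terms.append(n * ph - running)
    weights = [10.0 ** t for t in log_terms]
    total = sum(weights)
    return [w / total for w in weights]

# HEDTA(3-) dominates at pH 7; the fully deprotonated EDTA(4-) ligand
# only takes over above pH ~10.24, the last pKa.
for ph in (3.0, 7.0, 11.0):
    fractions = edta_fractions(ph)
    dominant = max(range(len(SPECIES)), key=fractions.__getitem__)
    print(ph, SPECIES[dominant], round(fractions[dominant], 3))
```

This arithmetic is why the article notes that metal ions compete with protons for binding: at lower pH, more of the ligand is tied up in protonated forms.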
Physical sciences
Specific acids
Chemistry
182500
https://en.wikipedia.org/wiki/Microdictyon
Microdictyon
Microdictyon is an extinct armoured worm-like panarthropod coated with net-like scleritic plates, known from the Early Cambrian Maotianshan shale of Yunnan, China, and other parts of the world. Microdictyon is part of the ill-defined taxon Lobopodia, which includes several other odd animals resembling worms with legs, such as Hallucigenia, Onychodictyon, Cardiodictyon, Luolishania, and Paucipodia. Isolated sclerites of Microdictyon are known from other Lower Cambrian deposits. Microdictyon appears to have moulted its sclerites; one sclerite seems to have been preserved during ecdysis. Microdictyon sinicum (Chen, Hou and Lu, 1989) is typical. The wormlike animal has ten pairs of sclerites (suggestions that these may be eyes or eye-like structures have no weight) on the sides, each matched to a pair of tentacle-like feet below. The head and posterior are tubular and featureless. Species composition Type species: Microdictyon effusum Bengtson, Matthews et Missarzhevsky, 1981; Lower Cambrian, Atdabanian Stage, Kazakhstan; Atdabanian and Botomian Stages, Russia (Siberian Platform) and England; Lower Cambrian, Sweden. In addition to the type species, 13 species: M. anus Tong, 1989, Lower Cambrian, upper Meishucunian Stage (= Atdabanian Stage), China (Shaanxi). M. chinense (Hao et Shu, 1987), Lower Cambrian, Qiongzhusi Stage (= upper Atdabanian-lowermost Botomian Stages), China (Shaanxi); Atdabanian through Botomian stages, Siberian Platform. M. cuneum Wotte et Sundberg, 2017, Lower Cambrian, Montezuman Stage, the United States. M. depressum Bengtson, 1990, Lower Cambrian, Atdabanian through Botomian Stages, South Australia. M. fuchengense Li et Zhu, 2001, Lower Cambrian, upper Meishucunian Stage (= Atdabanian Stage), China (Shaanxi). M. jinshaense Zhang et Aldridge, 2007, Lower Cambrian, Qiongzhusi Stage (= upper Atdabanian Stage-lowermost Botomian), China (Shaanxi). M. montezumaensis Wotte et Sundberg, 2017, Lower Cambrian, Montezuman Stage, the United States. M. rhomboidale Bengtson, Matthews et Missarzhevsky, 1986, Lower Cambrian, upper parts of the Atdabanian Stage, Kazakhstan; Atdabanian Stage, Canada, the United States (M. cf. rhomboidale). M. robisoni Bengtson, Matthews et Missarzhevsky, 1986, Middle Cambrian, Amgan Stage, the United States. M. rozanovi Demidenko, 2006, Lower Cambrian, Toyonian Stage, Siberian Platform. M. sinicum Chen, Hou et Lu, 1989, Lower Cambrian, upper Meishucunian Stage (= Atdabanian Stage), China (Yunnan). M. sphaeroides Hinz, 1987, Lower Cambrian, Atdabanian Stage, Great Britain. M. tenuiporatum Bengtson, Matthews et Missarzhevsky, 1986, Lower Cambrian, Atdabanian Stage, Siberian Platform. A picture can be found at https://web.archive.org/web/20030730043530/http://paws.wcu.edu/dperlmutr/earlyfauna.html. The name Microdictyon is also used for a genus of green algae.
Biology and health sciences
Ecdysozoa
Animals
182520
https://en.wikipedia.org/wiki/Plesiosauroidea
Plesiosauroidea
Plesiosauroidea (Greek: 'near, close to' and 'lizard') is an extinct clade of carnivorous marine reptiles. With their snake-like necks, they have the longest neck-to-body ratio of any reptile. Plesiosauroids are known from the Jurassic and Cretaceous periods. After their discovery, some plesiosauroids were said to have resembled "a snake threaded through the shell of a turtle", although they had no shell. Plesiosauroidea appeared in the Early Jurassic Period (late Sinemurian stage) and thrived until the K-Pg extinction at the end of the Cretaceous Period. The oldest confirmed plesiosauroid is Plesiosaurus itself, as all older taxa were recently found to be pliosauroids. While they were Mesozoic diapsid reptiles that lived at the same time as dinosaurs, they did not belong to the latter. Gastroliths are frequently found associated with plesiosaurs. History of discovery The first complete plesiosauroid skeletons were found in England by Mary Anning in the early 19th century, and were amongst the first fossil vertebrates to be described by science. Plesiosauroid remains were found by the Scottish geologist Hugh Miller in 1844 in the rocks of the Great Estuarine Group (then known as the 'Series') of western Scotland. Many others have been found, some of them virtually complete, and new discoveries are made frequently. One of the finest specimens was found in 2002 on the coast of Somerset (England) by someone fishing from the shore. This specimen, called the Collard specimen after its finder, was on display in Taunton Museum in 2007. Another, less complete, skeleton was also found in 2002, in the cliffs at Filey, Yorkshire, England, by an amateur palaeontologist. The preserved skeleton is displayed at the Rotunda Museum in Scarborough. Description Plesiosauroids had a broad body and a short tail. They retained their ancestral two pairs of limbs, which evolved into large flippers. Analysis of fossil teeth has determined that several sea-dwelling reptiles, including plesiosauroids, had a warm-blooded metabolism similar to that of mammals; they could generate endothermic heat to survive in colder habitats. Evolution Plesiosauroids evolved from earlier, similar forms such as pistosaurs. There are a number of families of plesiosauroids, which retain the same general appearance and are distinguished by various specific details. These include the Plesiosauridae, unspecialized types limited to the Early Jurassic period; Cryptoclididae (e.g. Cryptoclidus), with a medium-long neck and somewhat stocky build; Elasmosauridae, with very long, flexible necks and tiny heads; and the Cimoliasauridae, a poorly known group of small Cretaceous forms. According to traditional classifications, all plesiosauroids have a small head and long neck, but in recent classifications, one short-necked and large-headed Cretaceous group, the Polycotylidae, is included under the Plesiosauroidea rather than under the traditional Pliosauroidea. The size of different plesiosaurs varied significantly, with Trinacromerum estimated at three metres long and Mauisaurus growing to twenty metres. Relationships Within Plesiosauroidea, there is a more exclusive group, Cryptoclidia. Cryptoclidia was named and defined as a node clade in 2010 by Hilary Ketchum and Roger Benson: the group consisting of the last common ancestor of Cryptoclidus eurymerus and Polycotylus latipinnis, and all its descendants. A smaller group within Cryptoclidia had been erected earlier, in 2007, under the name "Leptocleidoidea". 
Although established as a clade, the name Leptocleidoidea implies that it is a superfamily. Because Leptocleidoidea is placed within the superfamily Plesiosauroidea, it was renamed Leptocleidia by Hilary F. Ketchum and Roger B. J. Benson (2010) to avoid confusion of ranks. Leptocleidia is a node-based taxon which was defined by Ketchum and Benson as "Leptocleidus superstes, Polycotylus latipinnis, their most recent common ancestor and all of its descendants". The following cladogram follows an analysis by Benson & Druckenmiller (2014). Behavior Unlike their pliosauroid cousins, plesiosauroids (with the exception of the Polycotylidae) were probably slow swimmers. It is likely that they cruised slowly below the surface of the water, using their long flexible necks to move their heads into position to snap up unwary fish or cephalopods. Their four-flippered swimming adaptation may have given them exceptional maneuverability, so that they could swiftly rotate their bodies as an aid to catching prey. Contrary to many reconstructions of plesiosauroids, it would have been impossible for them to lift their head and long neck above the surface in the "swan-like" pose that is often shown. Even if they had been able to bend their necks upward to that degree (which they could not), gravity would have tipped their body forward and kept most of the heavy neck in the water. On 12 August 2011, researchers from the U.S. described a fossil of a pregnant plesiosaur found on a Kansas ranch in 1987. The plesiosauroid, Polycotylus latipinnis, confirmed that these predatory marine reptiles gave birth to single, large, live offspring, contrary to other marine reptile reproduction, which typically involves a large number of small babies. Before this study, plesiosauroids had sometimes been portrayed crawling out of the water to lay eggs in the manner of sea turtles, but experts had long suspected that their anatomy was not compatible with movement on land. The fossil preserves both the adult and its unborn juvenile.
Biology and health sciences
Prehistoric marine reptiles
Animals
182664
https://en.wikipedia.org/wiki/Surface-to-air%20missile
Surface-to-air missile
A surface-to-air missile (SAM), also known as a ground-to-air missile (GTAM) or surface-to-air guided weapon (SAGW), is a missile designed to be launched from the ground or the sea to destroy aircraft or other missiles. It is one type of anti-aircraft system; in modern armed forces, missiles have replaced most other forms of dedicated anti-aircraft weapons, with anti-aircraft guns pushed into specialized roles. The first attempt at SAM development took place during World War II, but no operational systems were introduced. Further development in the 1940s and 1950s led to operational systems being introduced by most major forces during the second half of the 1950s. Smaller systems, suitable for close-range work, evolved through the 1960s and 1970s, to modern systems that are man-portable. Shipborne systems followed the evolution of land-based models, starting with long-range weapons and steadily evolving toward smaller designs to provide a layered defence. This evolution of design increasingly pushed gun-based systems into the shortest-range roles. The American Nike Ajax was the first operational SAM system, and the Soviet Union's S-75 Dvina was the most-produced SAM system. Widely used modern examples include the Patriot and S-300 wide-area systems, the SM-6 and MBDA Aster naval missiles, and short-range man-portable systems like the Stinger and 9K38 Igla. History The first known idea for a guided surface-to-air missile was in 1925, when a beam riding system was proposed whereby a rocket would follow a searchlight beam onto a target. A selenium cell was mounted on the tip of each of the rocket's four tail fins, with the cells facing backwards. When one selenium cell was no longer in the light beam, the rocket would be steered in the opposite direction, back into the beam. The first historical mention of a concept and design of a surface-to-air missile for which a drawing was presented came from inventor Gustav Rasmus in 1931, who proposed a design that would home in on the sound of an aircraft's engines. World War II During World War II, efforts were started to develop surface-to-air missiles, as it was generally considered that flak was of little use against bombers of ever-increasing performance. The lethal radius of a flak shell is fairly small, and the chance of delivering a "hit" is essentially a fixed percentage per round. In order to attack a target, guns fire continually while the aircraft are in range in order to launch as many shells as possible, increasing the chance that one of these will end up within the lethal range. Against the Boeing B-17, which operated just within the range of the numerous German eighty-eights, an average of 2,805 rounds had to be fired per bomber destroyed (a sketch of this probability arithmetic appears at the end of this article). Bombers flying at higher altitudes require larger guns and shells to reach them. This greatly increases the cost of the system, and (generally) slows the rate of fire. Faster aircraft fly out of range more quickly, reducing the number of rounds fired against them. Against late-war designs like the Boeing B-29 Superfortress or jet-powered designs like the Arado Ar 234, flak would be essentially useless. This potential was already obvious by 1942, when Walther von Axthelm outlined the growing problems with flak defences, which he predicted would soon be dealing with ever-greater aircraft speeds and flight altitudes. 
This was seen generally; in November 1943 the Director of the Gunnery Division of the Royal Navy concluded that guns would be useless against jets, stating "No projectile of which control is lost when it leaves the ship can be of any use to us in this matter." Axis efforts The first serious consideration of a SAM development project was a series of conversations that took place in Germany during 1941. In February, Friederich Halder proposed a "flak rocket" concept, which led Walter Dornberger to ask Wernher von Braun to prepare a study on a guided missile able to reach high altitudes. Von Braun became convinced a better solution was a crewed rocket interceptor, and said as much to the director of the T-Amt, Roluf Lucht, in July. The directors of the Luftwaffe flak arm were not interested in crewed aircraft, and the resulting disagreements between the teams delayed serious consideration of a SAM for two years. Von Axthelm published his concerns in 1942, and the subject saw serious consideration for the first time; initial development programs for liquid- and solid-fuel rockets became part of the Flak Development Program of 1942. By this point serious studies by the Peenemünde team had been prepared, and several rocket designs had been proposed, including 1940's Feuerlilie and 1941's Wasserfall and Henschel Hs 117 Schmetterling. None of these projects saw any real development until 1943, when the first large-scale raids by the Allied air forces started. As the urgency of the problem grew, new designs were added, including Enzian and Rheintochter, as well as the unguided Taifun, which was designed to be launched in waves. In general, these designs could be split into two groups. One set of designs would be boosted to altitude in front of the bombers and then flown towards them on a head-on approach at low speeds comparable to crewed aircraft. These designs included the Feuerlilie, Schmetterling and Enzian. The second group were high-speed missiles, typically supersonic, that flew directly towards their targets from below. These included Wasserfall and Rheintochter. Both types used radio control for guidance, either by eye or by comparing the returns of the missile and target on a single radar screen. Development of all these systems was carried out at the same time, and the war ended before any of them was ready for combat use. The infighting between various groups in the military also delayed development. Some extreme fighter designs, like the Komet and Natter, also overlapped with SAMs in their intended uses. Albert Speer was especially supportive of missile development. In his opinion, had they been consistently developed from the start, the large-scale bomber raids of 1944 would have been impossible. Allied efforts The British developed unguided antiaircraft rockets (operated under the name Z Battery) close to the start of World War II, but the air superiority usually held by the Allies meant that the demand for similar weapons was not as acute. When several Allied ships were sunk in 1943 by Henschel Hs 293 and Fritz X glide bombs, Allied interest changed. These weapons were released from stand-off distances, with the bomber remaining outside the range of the ship's antiaircraft guns, and the missiles themselves were too small and fast to be attacked effectively. To combat this threat, the U.S. Navy launched Operation Bumblebee to develop a ramjet-powered missile to destroy the launching aircraft at long range. 
The initial performance goal called for intercepts at long horizontal range and high altitude, with a warhead giving a 30 to 60 percent kill probability. This weapon did not emerge for 16 years, when it entered operation as the RIM-8 Talos. Heavy shipping losses to kamikaze attacks during the Liberation of the Philippines and the Battle of Okinawa provided additional incentive for guided missile development. This led to the British Fairey Stooge and Brakemine efforts, and the U.S. Navy's SAM-N-2 Lark. The Lark ran into considerable difficulty and never entered operational use. With the end of the war, the British efforts were used strictly for research and development throughout their lifetime. Post-war deployments In the immediate post-war era, SAM developments were under way around the world, with several of these entering service in the early and mid-1950s. Coming to the same conclusions as the Germans regarding flak, the U.S. Army started its Project Nike developments in 1944. Led by Bell Labs, the Nike Ajax was tested in production form in 1952, becoming the first operational SAM system when it was activated in March 1954. Concerns about Ajax's ability to deal with formations of aircraft led to a greatly updated version of the same basic design entering service in 1958 as the Nike Hercules, the first nuclear-armed SAM. The U.S. Army Air Forces had also considered collision-course weapons (like the German radio-controlled concepts) and launched Project Thumper in 1946. This was merged with another project, Wizard, and emerged as the CIM-10 Bomarc in 1959. The Bomarc had a range of over 500 km, but it was quite expensive and somewhat unreliable. Development of Oerlikon's RSD 58 started in 1947, and the project was a closely held secret until 1955. Early versions of the missile were available for purchase as early as 1952, but never entered operational service. The RSD 58 used beam riding guidance, which has limited performance against high-speed aircraft, as the missile is unable to "lead" the target to a collision point. Examples were purchased by several nations for testing and training purposes, but no operational sales were made. The Soviet Union began development of a SAM system in earnest with the opening of the Cold War. Joseph Stalin was worried that Moscow would be subjected to American and British air raids, like those against Berlin, and, in 1951, he demanded that a missile system able to counter a 900-bomber raid be built as quickly as possible. This led to the S-25 Berkut system (NATO reporting name: SA-1 "Guild"), which was designed, developed and deployed in a rush program. Early units entered operational service on 7 May 1955, and the entire system ringing Moscow was completely activated by June 1956. The system failed, however, to detect, track, and intercept the only overflight of the Soviet capital by a U-2 reconnaissance plane, on 5 July 1956. The S-25 was a static system, but efforts were also put into a smaller design that would be much more mobile. This emerged in 1957 as the famous S-75 Dvina (SA-2 "Guideline"), a portable system with very high performance that remained in operation into the 2000s. The Soviet Union remained at the forefront of SAM development throughout its history, and Russia has followed suit. The early British developments with Stooge and Brakemine were successful, but further development was curtailed in the post-war era. 
These efforts picked up again with the opening of the Cold War, following the "Stage Plan" of improving UK air defences with new radars, fighters and missiles. Two competing designs were proposed for "Stage 1", based on common radar and control units, and these emerged as the RAF's Bristol Bloodhound in 1958 and the Army's English Electric Thunderbird in 1959. A third design followed the American Bumblebee efforts in terms of role and timeline, and entered service in 1961 as the Sea Slug. War in Vietnam The Vietnam War was the first modern war in which guided antiaircraft missiles seriously challenged highly advanced supersonic jet aircraft. It would also be the first and only time that the most modern air defence technologies of the Soviet Union and the most modern jet fighters and bombers of the United States confronted each other in combat (if one does not count the Yom Kippur War, in which the Israeli Air Force was challenged by Syrian SA-3s). The USAF responded to this threat with increasingly effective means. Early efforts to directly attack the missile sites as part of Operation Spring High and Operation Iron Hand were generally unsuccessful, but the introduction of Wild Weasel aircraft carrying Shrike missiles and the Standard ARM missile changed the situation dramatically. Feint and counterfeint followed as each side introduced new tactics to try to gain the upper hand. By the time of Operation Linebacker II in 1972, the Americans had gained critical information about the performance and operations of the S-75 (via Arab S-75 systems captured by Israel), and used these missions as a way to demonstrate the capability of strategic bombers to operate in a SAM-saturated environment. Their first missions appeared to demonstrate the exact opposite, with the loss of three B-52s and several others damaged in a single mission. Dramatic changes followed, and by the end of the series, missions were carried out with additional chaff, ECM, Iron Hand, and other changes that dramatically changed the score. By the conclusion of the Linebacker II campaign, the shootdown rate of the S-75 against the B-52s was 7.52% (15 B-52s shot down and 5 heavily damaged for 266 missiles fired). During the war, the Soviet Union supplied 7,658 SAMs to North Vietnam, and their defense forces conducted about 5,800 launches, usually in multiples of three. By the war's end, the U.S. lost a total of 3,374 aircraft in combat operations. According to the North Vietnamese, 31% were shot down by S-75 missiles (1,046 aircraft, or 5.6 missiles per kill); 60% were shot down by anti-aircraft guns; and 9% were shot down by MiG fighters. The S-75 missile system significantly improved the effectiveness of North Vietnamese anti-aircraft artillery, which used data from S-75 radar stations. However, the U.S. states that only 205 of those aircraft were lost to North Vietnamese surface-to-air missiles. Smaller, faster All of these early systems were "heavyweight" designs with limited mobility, requiring considerable set-up time. However, they were also increasingly effective. By the early 1960s, the deployment of SAMs had rendered high-speed high-altitude flight in combat practically suicidal. The way to avoid this was to fly lower, below the line of sight of the missile's radar systems. This demanded very different aircraft, like the F-111, TSR-2, and Panavia Tornado. Consequently, SAMs evolved rapidly in the 1960s. 
As their targets were now being forced to fly lower due to the presence of the larger missiles, engagements would necessarily be at short ranges, and occur quickly. Shorter ranges meant the missiles could be much smaller, which aided them in terms of mobility. By the mid-1960s, almost all modern armed forces had short-range missiles mounted on trucks or light armour that could move with the forces they protected. Examples include the 2K12 Kub (SA-6) and 9K33 Osa (SA-8), MIM-23 Hawk, Rapier, Roland and Crotale. The introduction of sea-skimming missiles in the late 1960s and 1970s led to additional mid- and short-range designs for defence against these targets. The UK's Sea Cat was an early example designed specifically to replace the Bofors 40 mm gun on its mount, and became the first operational point-defense SAM. The American RIM-7 Sea Sparrow quickly proliferated into a wide variety of designs fielded by most navies. Many of these are adapted from earlier mobile designs, but the special needs of the naval role have resulted in the continued existence of many custom missiles. MANPADS As aircraft moved ever lower and missile performance continued to improve, eventually it became possible to build an effective man-portable anti-aircraft missile, known as a MANPADS. An early forerunner was a Royal Navy system known as the Holman Projector, used as a last-ditch weapon on smaller ships. The Germans also produced a similar short-range weapon known as the Fliegerfaust, but it entered operation only on a very limited scale. The performance gap between these weapons and the jet fighters of the post-war era was so great that such designs would not be effective. By the 1960s, technology had closed this gap to a degree, leading to the introduction of the FIM-43 Redeye, SA-7 Grail and Blowpipe. Rapid improvement in the 1980s led to second-generation designs, like the FIM-92 Stinger, 9K34 Strela-3 (SA-14), Igla-1 and Starstreak, with dramatically improved performance. From the 1990s to the 2010s, the Chinese developed designs drawing influence from these, notably the FN-6 and the QW series. Throughout the evolution of SAMs, improvements were also being made to anti-aircraft artillery, but the missiles pushed them into ever shorter-range roles. By the 1980s, the only remaining widespread use was point defense of airfields and ships, especially against cruise missiles. By the 1990s, even these roles were being encroached on by new MANPADS and similar short-range weapons, like the RIM-116 Rolling Airframe Missile. General information Surface-to-air missiles are classified by their guidance, mobility, altitude and range. Mobility, maneuverability and range Missiles able to fly longer distances are generally heavier, and therefore less mobile. This leads to three "natural" classes of SAM systems: heavy long-range systems that are fixed or semi-mobile, medium-range vehicle-mounted systems that can fire on the move, and short-range man-portable air-defense systems (MANPADS). Modern long-range weapons include the MIM-104 Patriot and S-300 systems, which offer long effective ranges together with relatively good mobility and short unlimbering times. These compare with older systems of similar or lesser range, like the MIM-14 Nike Hercules or S-75 Dvina, which required fixed sites of considerable size. Much of this performance increase is due to improved rocket fuels and ever-smaller electronics in the guidance systems. Some very long-range systems remain, notably the Russian S-400, which can engage targets at ranges on the order of 400 km. 
Medium-range designs, like the Rapier and 2K12 Kub, are specifically designed to be highly mobile, with very fast, or zero, setup times. Many of these designs were mounted on armoured vehicles, allowing them to keep pace with mobile operations in a conventional war. Once a major category in their own right, medium-range designs have seen less development since the 1990s, as the focus has changed to unconventional warfare. Developments have also been made in onboard maneuverability. Israel's David's Sling Stunner missile is designed to intercept the newest generation of tactical ballistic missiles at low altitude. The multi-stage interceptor consists of a solid-fuel rocket motor booster, followed by an asymmetrical kill vehicle with advanced steering for super-maneuverability during the kill stage. A three-pulse motor provides additional acceleration and maneuverability during the terminal phase. MANPAD systems were first developed in the 1960s and proved themselves in battle during the 1970s. MANPADS normally have ranges on the order of and are effective against attack helicopters and aircraft making ground attacks. Against fixed-wing aircraft, they can be very effective, forcing them to fly outside the missile's envelope and thereby greatly reducing their effectiveness in ground-attack roles. MANPAD systems are sometimes used with vehicle mounts to improve maneuverability, as in the Avenger system. These systems have encroached on the performance niche formerly filled by dedicated mid-range systems. Ship-based anti-aircraft missiles are also considered SAMs, although in practice they are expected to be used more widely against sea-skimming missiles than against aircraft. Virtually all surface warships can be armed with SAMs, and naval SAMs are a necessity for all front-line surface warships. Some warship types specialise in anti-air warfare, e.g. cruisers equipped with the Aegis combat system or with the S-300F Fort missile system. Modern warships may carry all three types of SAM (from long-range to short-range) as part of a multi-layered air defence.

Guidance systems

SAM systems generally fall into two broad groups based on their guidance systems: those using radar and those using some other means. Longer-range missiles generally use radar for early detection and guidance. Early SAM systems generally used tracking radars and fed guidance information to the missile using radio control concepts, referred to in the field as command guidance. Through the 1960s, the semi-active radar homing (SARH) concept became much more common. In SARH, the reflections of the tracking radar's broadcasts are picked up by a receiver in the missile, which homes in on this signal. SARH has the advantage of leaving most of the equipment on the ground, while also eliminating the need for the ground station to communicate with the missile after launch. Smaller missiles, especially MANPADS, generally use infrared homing guidance systems. These have the advantage of being "fire-and-forget": once launched, they will home in on the target on their own with no external signals needed. In comparison, SARH systems require the tracking radar to illuminate the target, which may require it to remain exposed throughout the attack. Systems combining an infrared seeker for terminal guidance on a missile using SARH, like the MIM-46 Mauler, are also known, but these are generally rare. Some newer short-range systems use a variation of the SARH technique based on laser illumination instead of radar.
These have the advantage of being small and very fast-acting, as well as highly accurate. A few older designs use purely optical tracking and command guidance; perhaps the best-known example is the British Rapier system, which was initially an all-optical system with high accuracy. All SAM systems, from the smallest to the largest, generally include identification friend or foe (IFF) systems to help identify a target before it is engaged. While IFF is less critical with MANPADS, as the target is almost always visually identified prior to launch, most modern MANPADS do include it.

Target acquisition

Long-range systems generally use radar systems for target detection and, depending on the generation of the system, may "hand off" to a separate tracking radar for the attack. Short-range systems are more likely to rely entirely on visual detection. Hybrid systems are also common. The MIM-72 Chaparral was fired optically, but normally operated with a short-range early-warning radar that displayed targets to the operator. This radar, the FAAR, was taken into the field with a Gama Goat and set up behind the lines. Information was passed to the Chaparral via a data link. Likewise, the UK's Rapier system included a simple radar that displayed the rough direction of a target on a series of lamps arranged in a circle. The missile operator would point his telescope in that rough direction and then hunt for the target visually.
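The "fire-and-forget" homing loop described under Guidance systems can be sketched in a few lines of simulation. The following is a minimal, purely illustrative Python model (all names and parameters are invented for this sketch) of proportional navigation, the steering law most homing missiles use: the missile accelerates perpendicular to its velocity in proportion to the rotation rate of the line of sight to the target.

import math

def simulate(n=3.0, dt=0.01, steps=20000):
    # Missile starts at the origin flying straight up; target crosses from the right.
    mx, my, mvx, mvy = 0.0, 0.0, 0.0, 600.0
    tx, ty, tvx, tvy = 8000.0, 6000.0, -250.0, 0.0
    los_prev = math.atan2(ty - my, tx - mx)
    for i in range(steps):
        los = math.atan2(ty - my, tx - mx)       # line-of-sight angle to target
        los_rate = (los - los_prev) / dt         # its rotation rate
        los_prev = los
        closing = math.hypot(tvx - mvx, tvy - mvy)
        accel = n * closing * los_rate           # PN law: a = N * Vc * LOS-rate
        speed = math.hypot(mvx, mvy)
        px, py = -mvy / speed, mvx / speed       # unit vector perpendicular to velocity
        mvx += accel * px * dt
        mvy += accel * py * dt
        mx += mvx * dt
        my += mvy * dt
        tx += tvx * dt
        ty += tvy * dt
        if math.hypot(tx - mx, ty - my) < 20.0:  # close enough to count as a hit
            return i * dt
    return None  # no intercept within the simulated time

print("intercept after", simulate(), "seconds")

Driving the line-of-sight rotation rate to zero puts the missile on a collision course, which is why the guidance needs no external commands after launch; the same loop works whether the seeker is infrared or semi-active radar.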
Clock signal
In electronics, and especially in synchronous digital circuits, a clock signal (historically also known as a logic beat) is an electronic logic signal (voltage or current) that oscillates between a high and a low state at a constant frequency and is used like a metronome to synchronize the actions of digital circuits. In a synchronous logic circuit, the most common type of digital circuit, the clock signal is applied to all storage devices, flip-flops and latches, and causes them all to change state simultaneously, preventing race conditions. A clock signal is produced by an electronic oscillator called a clock generator. The most common clock signal is in the form of a square wave with a 50% duty cycle. Circuits using the clock signal for synchronization may become active at the rising edge, the falling edge, or, in the case of double data rate, at both the rising and falling edges of the clock cycle.

Digital circuits

Most integrated circuits (ICs) of sufficient complexity use a clock signal in order to synchronize different parts of the circuit, cycling at a rate slower than the worst-case internal propagation delays. In some cases, more than one clock cycle is required to perform a predictable action. As ICs become more complex, the problem of supplying accurate and synchronized clocks to all the circuits becomes increasingly difficult. The preeminent example of such complex chips is the microprocessor, the central component of modern computers, which relies on a clock from a crystal oscillator. The only exceptions are asynchronous circuits such as asynchronous CPUs. A clock signal might also be gated, that is, combined with a controlling signal that enables or disables the clock signal for a certain part of a circuit. This technique is often used to save power by effectively shutting down portions of a digital circuit when they are not in use, but it comes at the cost of increased complexity in timing analysis.

Single-phase clock

Most modern synchronous circuits use only a "single-phase clock"; in other words, all clock signals are (effectively) transmitted on one wire.

Two-phase clock

In synchronous circuits, a "two-phase clock" refers to clock signals distributed on two wires, each with non-overlapping pulses. Traditionally, one wire is called "phase 1" or "φ1" (phi1), while the other wire carries the "phase 2" or "φ2" signal. Because the two phases are guaranteed non-overlapping, gated latches rather than edge-triggered flip-flops can be used to store state information, so long as the inputs to latches on one phase depend only on outputs from latches on the other phase. Since a gated latch uses only four gates versus six for an edge-triggered flip-flop, a two-phase clock can lead to a design with a smaller overall gate count, but usually at some penalty in design difficulty and performance. Metal-oxide-semiconductor (MOS) ICs typically used dual clock signals (a two-phase clock) in the 1970s. These were generated externally for both the Motorola 6800 and Intel 8080 microprocessors. The next generation of microprocessors incorporated clock generation on chip. The 8080 uses a 2 MHz clock, yet its processing throughput is similar to that of the 1 MHz 6800, because the 8080 requires more clock cycles to execute a processor instruction. Due to their dynamic logic, the 6800 has a minimum clock rate of 100 kHz and the 8080 a minimum of 500 kHz. Higher-speed versions of both microprocessors were released by 1976. The 6501 requires an external two-phase clock generator.
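The non-overlapping property described above can be demonstrated with a small simulation. The sketch below is a minimal Python model (all names and parameters are illustrative, not from any real design) that generates two clock phases separated by a guaranteed dead time and checks that they never overlap:

def two_phase_clock(period=10, high=4, dead_time=1):
    """Yield (phi1, phi2) samples, one per time unit.

    Each phase is high for `high` units; `dead_time` units separate
    the falling edge of one phase from the rising edge of the other.
    """
    assert 2 * high + 2 * dead_time <= period, "phases would overlap"
    while True:
        for t in range(period):
            phi1 = 1 if t < high else 0
            phi2 = 1 if high + dead_time <= t < 2 * high + dead_time else 0
            yield phi1, phi2

# Sample a few cycles and verify the phases are never high together.
clock = two_phase_clock()
for _ in range(30):
    phi1, phi2 = next(clock)
    assert not (phi1 and phi2)  # the non-overlap guarantee
    print(phi1, phi2)

Because phi1 and phi2 are never high at the same time, a latch made transparent by phi1 can never race through a latch made transparent by phi2, which is exactly the property that lets two-phase designs use simple gated latches instead of edge-triggered flip-flops.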
The MOS Technology 6502 uses the same two-phase logic internally, but also includes a two-phase clock generator on-chip, so it needs only a single-phase clock input, simplifying system design.

4-phase clock

Some early integrated circuits use four-phase logic, requiring a four-phase clock input consisting of four separate, non-overlapping clock signals. This was particularly common among early microprocessors such as the National Semiconductor IMP-16, the Texas Instruments TMS9900, and the Western Digital MCP-1600 chipset used in the DEC LSI-11. Four-phase clocks have only rarely been used in newer CMOS processors, such as the DEC WRL MultiTitan microprocessor and Intrinsity's Fast14 technology. Most modern microprocessors and microcontrollers use a single-phase clock.

Clock multiplier

Many modern microcomputers use a "clock multiplier", which multiplies a lower-frequency external clock up to the appropriate clock rate of the microprocessor. This allows the CPU to operate at a much higher frequency than the rest of the computer, which affords performance gains in situations where the CPU does not need to wait on an external factor (like memory or input/output).

Dynamic frequency change

The vast majority of digital devices do not require a clock at a fixed, constant frequency. As long as the minimum and maximum clock periods are respected, the time between clock edges can vary widely from one edge to the next and back again. Such digital devices work just as well with a clock generator that dynamically changes its frequency, as in spread-spectrum clock generation and dynamic frequency scaling. Devices that use static logic do not even have a maximum clock period (in other words, a minimum clock frequency); such devices can be slowed and paused indefinitely, then resumed at full clock speed at any later time.

Other circuits

Some sensitive mixed-signal circuits, such as precision analog-to-digital converters, use sine waves rather than square waves as their clock signals, because square waves contain high-frequency harmonics that can interfere with the analog circuitry and cause noise. Such sine-wave clocks are often differential signals, because this type of signal has twice the slew rate, and therefore half the timing uncertainty, of a single-ended signal with the same voltage range. Differential signals also radiate less strongly than a single line. Alternatively, a single line shielded by power and ground lines can be used. In CMOS circuits, gate capacitances are charged and discharged continually. A capacitor does not dissipate energy, but energy is wasted in the driving transistors. In reversible computing, inductors can be used to store this energy and reduce the loss, but they tend to be quite large. Alternatively, using a sine-wave clock, CMOS transmission gates and energy-saving techniques, the power requirements can be reduced.

Distribution

The most effective way to get the clock signal to every part of a chip that needs it, with the lowest skew, is a metal grid. In a large microprocessor, the power used to drive the clock signal can be over 30% of the total power used by the entire chip. The whole structure, with the gates at the ends and all the amplifiers in between, has to be loaded and unloaded every cycle. To save energy, clock gating temporarily shuts off part of the tree. The clock distribution network (or clock tree, when this network forms a tree such as an H-tree) distributes the clock signal(s) from a common point to all the elements that need it.
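Clock gating, mentioned above, is typically implemented by latching the enable signal while the clock is low and then ANDing it with the clock, so that the enable cannot change mid-pulse and produce a glitch on the gated clock. A minimal behavioural sketch in Python (illustrative only; not any vendor's cell library):

class ClockGate:
    """Behavioural model of a latch-based clock-gating cell."""
    def __init__(self):
        self.enable_latched = 0

    def step(self, clk, enable):
        if clk == 0:                           # latch is transparent while clk is low
            self.enable_latched = enable
        return clk & self.enable_latched       # AND gate produces the gated clock

gate = ClockGate()
clk_wave    = [0, 1, 0, 1, 0, 1, 0, 1]
enable_wave = [1, 1, 1, 0, 0, 0, 1, 1]         # enable changes mid-stream
for clk, en in zip(clk_wave, enable_wave):
    print(clk, en, gate.step(clk, en))

Note how the enable change at the fourth sample takes effect only after the pulse in flight has completed: the gated clock delivers whole pulses or nothing, which is what keeps downstream flip-flops from seeing runt pulses.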
Since clock distribution is vital to the operation of a synchronous system, much attention has been given to the characteristics of these clock signals and the electrical networks used in their distribution. Clock signals are often regarded as simple control signals; however, they have some very special characteristics and attributes. Clock signals are typically loaded with the greatest fanout and operate at the highest speeds of any signal within the synchronous system. Since the data signals are provided with a temporal reference by the clock signals, the clock waveforms must be particularly clean and sharp. Furthermore, clock signals are particularly affected by technology scaling (see Moore's law), in that long global interconnect lines become significantly more resistive as line dimensions are decreased. This increased line resistance is one of the primary reasons for the increasing significance of clock distribution to synchronous performance. Finally, the control of any differences and uncertainty in the arrival times of the clock signals can severely limit the maximum performance of the entire system and create catastrophic race conditions in which an incorrect data signal may latch within a register. Most synchronous digital systems consist of cascaded banks of sequential registers with combinational logic between each set of registers. The functional requirements of the digital system are satisfied by the logic stages. Each logic stage introduces delay that affects timing performance, and the timing performance of the digital design can be evaluated relative to the timing requirements by a timing analysis. Often special consideration must be given to meeting the timing requirements. For example, the global performance and local timing requirements may be satisfied by the careful insertion of pipeline registers into equally spaced time windows to satisfy critical worst-case timing constraints. The proper design of the clock distribution network helps ensure that critical timing requirements are satisfied and that no race conditions exist (see also clock skew). The delays in a general synchronous system are composed of three individual subsystems: the memory storage elements, the logic elements, and the clocking circuitry and distribution network. Novel structures are currently under development to ameliorate these issues and provide effective solutions. Important areas of research include resonant clocking techniques ("resonant clock mesh"), on-chip optical interconnect, and local synchronization methodologies.
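The timing analysis mentioned above reduces, for each register-to-register path, to two textbook inequalities (given here as an illustrative aside; sign conventions for skew vary by author). The clock period must cover the slowest path, and the fastest path must not outrun the hold window:

$$ T_{clk} \ge t_{cq} + t_{logic}^{max} + t_{setup} + t_{skew} $$
$$ t_{cq}^{min} + t_{logic}^{min} \ge t_{hold} + t_{skew} $$

Here $t_{cq}$ is the clock-to-output delay of the launching register, $t_{logic}$ the combinational delay between the registers, and $t_{setup}$ and $t_{hold}$ the capture register's setup and hold requirements. The skew terms show directly why uncertainty in clock arrival times both limits the maximum clock frequency and, in the hold inequality, can create exactly the race conditions described above.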
De Havilland Comet
The de Havilland DH.106 Comet is the world's first commercial jet airliner. Developed and manufactured by de Havilland in the United Kingdom, the Comet 1 prototype first flew in 1949. It features an aerodynamically clean design with four de Havilland Ghost turbojet engines buried in the wing roots, a pressurised cabin, and large windows. For the era, it offered a relatively quiet, comfortable passenger cabin and was commercially promising at its debut in 1952. Within a year of the airliner's entry into service, three Comets were lost in highly publicised accidents after suffering catastrophic mishaps in mid-flight. Two of these were found to be caused by structural failure resulting from metal fatigue in the airframe, a phenomenon not fully understood at the time; the other was due to overstressing of the airframe during flight through severe weather. The Comet was withdrawn from service and extensively tested. Design and construction flaws, including improper riveting and dangerous stress concentrations around square cut-outs for the ADF (automatic direction finder) antennas, were ultimately identified. As a result, the Comet was extensively redesigned, with structural reinforcements and other changes. Rival manufacturers heeded the lessons learned from the Comet when developing their own aircraft. Although sales never fully recovered, the improved Comet 2 and the prototype Comet 3 culminated in the redesigned Comet 4 series, which debuted in 1958 and remained in commercial service until 1981. The Comet was also adapted for a variety of military roles, such as VIP, medical and passenger transport, as well as surveillance; the last Comet 4, used as a research platform, made its final flight in 1997. The most extensive modification resulted in a specialised maritime patrol derivative, the Hawker Siddeley Nimrod, which remained in service with the Royal Air Force until 2011, over 60 years after the Comet's first flight.

Development

Origins

On 11 March 1943, the Cabinet of the United Kingdom formed the Brabazon Committee, which was tasked with determining the UK's airliner needs after the conclusion of the Second World War. One of its recommendations was for the development and production of a pressurised, transatlantic mailplane that could carry of payload at a cruising speed of non-stop. Aviation company de Havilland was interested in this requirement, but chose to challenge the then widely held view that jet engines were too fuel-hungry and unreliable for such a role. As a result, committee member Sir Geoffrey de Havilland, head of the de Havilland company, used his personal influence and his company's expertise to champion the development of a jet-propelled aircraft, proposing a specification for a pure turbojet-powered design. The committee accepted the proposal, calling it the "Type IV" (of five designs), and in 1945 awarded a development and production contract to de Havilland under the designation Type 106. The type and design were to be so advanced that de Havilland had to undertake the design and development of both the airframe and the engines. This was because, in 1945, no turbojet engine manufacturer in the world was drawing up a design specification for an engine with the thrust and specific fuel consumption that could power an aircraft at the proposed cruising altitude (), speed, and transatlantic range called for by the Type 106.
First-phase development of the DH.106 focused on short- and intermediate-range mailplanes with small passenger compartments and as few as six seats, before the project was redefined as a long-range airliner with a capacity of 24 seats. Of all the Brabazon designs, the DH.106 was seen as the riskiest, both in terms of introducing untried design elements and for the financial commitment involved. Nevertheless, the British Overseas Airways Corporation (BOAC) found the Type IV's specifications attractive, and initially proposed a purchase of 25 aircraft; in December 1945, when a firm contract was created, the order total was revised to 10. A design team was formed in 1946 under the leadership of chief designer Ronald Bishop, who had been responsible for the Mosquito fighter-bomber. Several unorthodox configurations were considered, ranging from canard to tailless designs; all were rejected. The Ministry of Supply was interested in the most radical of the proposed designs, and ordered two experimental tailless DH 108s to serve as proof-of-concept aircraft for testing swept-wing configurations in both low-speed and high-speed flight. During flight tests, the DH 108 gained a reputation for being accident-prone and unstable, leading de Havilland and BOAC to gravitate towards conventional configurations and, necessarily, designs with less technical risk. The DH 108s were later modified to test the DH.106's power controls. In September 1946, before completion of the DH 108s, BOAC requests necessitated a redesign of the DH.106 from its previous 24-seat configuration to a larger 36-seat version. With no time to develop the technology necessary for a proposed tailless configuration, Bishop opted for a more conventional 20-degree swept-wing design with unswept tail surfaces, married to an enlarged fuselage accommodating 36 passengers in a four-abreast arrangement with a central aisle. Replacing the previously specified Halford H.1 Goblin engines, four new, more powerful Rolls-Royce Avons were to be incorporated in pairs buried in the wing roots; Halford H.2 Ghost engines were eventually applied as an interim solution while the Avons cleared certification. The redesigned aircraft was named the DH.106 Comet in December 1947. Revised first orders from BOAC and British South American Airways totalled 14 aircraft, with delivery projected for 1952.

Testing and prototypes

As the Comet represented a new category of passenger aircraft, more rigorous testing was a development priority. From 1947 to 1948, de Havilland conducted an extensive research and development phase, including the use of several stress test rigs at Hatfield Aerodrome for small components and large assemblies alike. Sections of pressurised fuselage were subjected to high-altitude flight conditions via a large on-site decompression chamber and tested to failure. Tracing fuselage failure points proved difficult with this method, and de Havilland ultimately switched to conducting structural tests with a water tank that could be safely configured to increase pressures gradually. The entire forward fuselage section was tested for metal fatigue by repeatedly pressurising to overpressure and depressurising through more than 16,000 cycles, equivalent to about 40,000 hours of airline service. The windows were also tested under a pressure of , above expected pressures at the normal service ceiling of . One window frame survived , about 1,250 per cent over the maximum pressure it was expected to encounter in service.
The first prototype DH.106 Comet (carrying Class B markings G-5-1) was completed in 1949 and was initially used to conduct ground tests and brief early flights. The prototype's maiden flight, out of Hatfield Aerodrome, took place on 27 July 1949 and lasted 31 minutes. At the controls was de Havilland chief test pilot John "Cats Eyes" Cunningham, a famous night-fighter pilot of the Second World War, along with co-pilot Harold "Tubby" Waters, engineers John Wilson (electrics) and Frank Reynolds (hydraulics), and flight test observer Tony Fairbrother. The prototype was registered G-ALVG just before it was publicly displayed at the 1949 Farnborough Airshow, before the start of flight trials. A year later, the second prototype, G-5-2, made its maiden flight. The second prototype was registered G-ALZK in July 1950 and was used by the BOAC Comet Unit at Hurn from April 1951 to carry out 500 flying hours of crew training and route-proving. Australian airline Qantas also sent its own technical experts to observe the performance of the prototypes, seeking to quell internal uncertainty about its prospective Comet purchase. Both prototypes could be externally distinguished from later Comets by their large single-wheeled main landing gear, which was replaced on production models, starting with G-ALYP, by four-wheeled bogies.

Design

Overview

The Comet was an all-metal low-wing cantilever monoplane powered by four jet engines; it had a four-place cockpit occupied by two pilots, a flight engineer, and a navigator. The clean, low-drag design of the aircraft featured many design elements that were fairly uncommon at the time, including a swept-wing leading edge, integral wing fuel tanks, and four-wheel bogie main undercarriage units designed by de Havilland. Two pairs of turbojet engines (on the Comet 1s, Halford H.2 Ghosts, subsequently known as de Havilland Ghost 50 Mk1s) were buried in the wings. The original Comet was of approximately the same length as, but not as wide as, the later Boeing 737-100, and carried fewer people in a significantly more spacious environment. BOAC installed 36 reclining "slumberseats" with centres on its first Comets, allowing for greater leg room in front and behind; Air France had 11 rows of seats with four seats to a row installed on its Comets. Large picture-window views and table seating accommodations for a row of passengers afforded a feeling of comfort and luxury unusual for transportation of the period. Amenities included a galley that could serve hot and cold food and drinks, a bar, and separate men's and women's toilets. Provisions for emergency situations included several life rafts stored in the wings near the engines, and individual life vests stowed under each seat. One of the most striking aspects of Comet travel was the quiet, "vibration-free flying" touted by BOAC. For passengers used to propeller-driven airliners, smooth and quiet jet flight was a novel experience.

Avionics and systems

For ease of training and fleet conversion, de Havilland designed the Comet's flight deck layout with a degree of similarity to that of the Lockheed Constellation, an aircraft that was popular at the time with key customers such as BOAC. The cockpit included full dual controls for the captain and first officer, and a flight engineer controlled several key systems, including fuel, air conditioning and electrical systems. The navigator occupied a dedicated station, with a table across from the flight engineer. Several of the Comet's avionics systems were new to civil aviation.
One such feature was irreversible, powered flight controls, which increased the pilot's ease of control and the safety of the aircraft by preventing aerodynamic forces from changing the directed positions of the aircraft's control surfaces. Many of the control surfaces, such as the elevators, were equipped with a complex gearing system as a safeguard against accidentally over-stressing the surfaces or airframe at higher speeds. The Comet had a total of four hydraulic systems: two primaries, one secondary, and a final emergency system for basic functions such as lowering the undercarriage. The undercarriage could also be lowered by a combination of gravity and a hand pump. Power was syphoned from all four engines for the hydraulics, cabin air conditioning, and the de-icing system; these systems had operational redundancy in that they could keep working even if only a single engine was active. The majority of hydraulic components were centred in a single avionics bay. A pressurised refuelling system, developed by Flight Refuelling Ltd, allowed the Comet's fuel tanks to be refuelled at a far greater rate than by other methods. The cockpit was significantly altered for the Comet 4's introduction, on which an improved layout focusing on the onboard navigational suite was introduced. An EKCO E160 radar unit was installed in the Comet 4's nose cone, providing search functions as well as ground and cloud-mapping capabilities, and a radar interface was built into the Comet 4 cockpit along with redesigned instruments. Sud-Est's design bureau, while working on the Sud Aviation Caravelle in 1953, licensed several design features from de Havilland, building on previous collaborations on earlier licensed designs, including the DH 100 Vampire; the nose and cockpit layout of the Comet 1 was grafted onto the Caravelle. In 1969, when the Comet 4's design was modified by Hawker Siddeley to become the basis for the Nimrod, the cockpit layout was completely redesigned and bore little resemblance to its predecessors except for the control yoke.

Fuselage

The Comet's diverse geographic destinations and its cabin pressurisation alike demanded a high proportion of alloys, plastics, and other materials new to civil aviation across the aircraft in order to meet certification requirements. The Comet's high cabin pressure and high operating speeds were unprecedented in commercial aviation, making its fuselage design an experimental process. At its introduction, Comet airframes would be subjected to an intense, high-speed operating schedule, which included simultaneous extreme heat from desert airfields and cold from the kerosene-filled fuel tanks, still chilled from cruising at high altitude. The Comet's thin metal skin was composed of advanced new alloys and was both riveted and chemically bonded, which saved weight and reduced the risk of fatigue cracks spreading from the rivets. The chemical bonding process was accomplished using a new adhesive, Redux, which was liberally used in the construction of both the wings and the fuselage of the Comet; it also had the advantage of simplifying the manufacturing process. When several of the fuselage alloys were discovered to be vulnerable to weakening via metal fatigue, a detailed routine inspection process was introduced. As well as thorough visual inspections of the outer skin, mandatory structural sampling was routinely conducted by both civil and military Comet operators.
The need to inspect areas not easily viewable by the naked eye led to the introduction of widespread radiographic examination in aviation; this also had the advantage of detecting cracks and flaws too small to be seen otherwise. Operationally, the design of the cargo holds led to considerable difficulty for the ground crew, especially baggage handlers at the airports. The cargo holds had their doors located directly underneath the aircraft, so each item of baggage or cargo had to be loaded vertically upward from the top of the baggage truck, then slid along the hold floor to be stacked inside. The individual pieces of luggage and cargo also had to be retrieved in a similarly slow manner at the arriving airport.

Propulsion

The Comet was powered by two pairs of turbojet engines buried in the wings close to the fuselage. Chief designer Bishop chose the Comet's embedded-engine configuration because it avoided the drag of podded engines and allowed for a smaller fin and rudder, since the hazards of asymmetric thrust were reduced. The engines were outfitted with baffles to reduce noise emissions, and extensive soundproofing was also implemented to improve passenger conditions. Placing the engines within the wings had the advantage of reducing the risk of foreign object damage, which could seriously damage jet engines. The low-mounted engines and good placement of service panels also made aircraft maintenance easier to perform. The Comet's buried-engine configuration, however, increased its structural weight and complexity. Armour had to be placed around the engine cells to contain debris from any serious engine failure, and placing the engines inside the wing required a more complicated wing structure. The Comet 1 featured de Havilland Ghost 50 Mk1 turbojet engines. Two hydrogen peroxide-powered de Havilland Sprite booster rockets were originally intended to be installed to boost takeoff under hot-and-high conditions from airports such as Khartoum and Nairobi. These were tested on 30 flights, but the Ghosts alone were considered powerful enough, and some airlines concluded that rocket motors were impractical. Sprite fittings were nevertheless retained on production aircraft. Comet 1s subsequently received more powerful Ghost DGT3 series engines. From the Comet 2 onward, the Ghost engines were replaced by the newer and more powerful Rolls-Royce Avon AJ.65 engines. To achieve optimum efficiency with the new powerplants, the air intakes were enlarged to increase mass air flow. Upgraded Avon engines were introduced on the Comet 3, and the Avon-powered Comet 4 was highly praised for its takeoff performance from high-altitude locations such as Mexico City, where it was operated by Mexicana de Aviación, a major scheduled passenger air carrier.

Operational history

Introduction

The earliest production aircraft, registered G-ALYP ("Yoke Peter"), first flew on 9 January 1951 and was subsequently lent to BOAC for development flying by its Comet Unit. On 22 January 1952, the fifth production aircraft, registered G-ALYS, received the first Certificate of Airworthiness awarded to a Comet, six months ahead of schedule. On 2 May 1952, as part of BOAC's route-proving trials, G-ALYP took off on the world's first jetliner flight with fare-paying passengers and inaugurated scheduled service from London to Johannesburg. The final Comet from BOAC's initial order, registered G-ALYZ, began flying in September 1952 and carried cargo along South American routes while simulating passenger schedules.
Prince Philip returned from the Helsinki Olympic Games aboard G-ALYS on 4 August 1952. Queen Elizabeth, the Queen Mother, and Princess Margaret were guests on a special flight of the Comet on 30 June 1953, hosted by Sir Geoffrey and Lady de Havilland. Flights on the Comet were about twice as fast as those of advanced piston-engined aircraft such as the Douglas DC-6 ( vs , respectively), and a faster rate of climb further cut flight times. In August 1953, BOAC scheduled the nine-stop London to Tokyo flight by Comet at 36 hours, compared to 86 hours and 35 minutes on its Argonaut (a DC-4 variant) piston airliner. (Pan Am's DC-6B was scheduled for 46 hours 45 minutes.) The five-stop flight from London to Johannesburg was scheduled at 21 hours 20 minutes. In their first year, Comets carried 30,000 passengers. As the aircraft could be profitable with a load factor as low as 43 per cent, commercial success was expected. The Ghost engines allowed the Comet to fly above weather that competitors had to fly through. They ran smoothly and were less noisy than piston engines, had low maintenance costs, and were fuel-efficient above . In summer 1953, eight BOAC Comets left London each week: three to Johannesburg, two to Tokyo, two to Singapore and one to Colombo. In 1953, the Comet appeared to have achieved success for de Havilland. Popular Mechanics wrote that Britain had a lead of three to five years on the rest of the world in jetliners. As well as the sales to BOAC, two French airlines, Union Aéromaritime de Transport and Air France, each acquired three Comet 1As, an upgraded variant with greater fuel capacity, for flights to West Africa and the Middle East. A slightly longer version of the Comet 1 with more powerful engines, the Comet 2, was being developed, and orders were placed by Air India, British Commonwealth Pacific Airlines, Japan Air Lines, Linea Aeropostal Venezolana, and Panair do Brasil. The American carriers Capital Airlines, National Airlines and Pan Am placed orders for the planned Comet 3, an even larger, longer-range version for transatlantic operations. Qantas was interested in the Comet 1 but concluded that a version with more range and better takeoff performance was needed for the London to Canberra route.

Early hull losses

On 26 October 1952, the Comet suffered its first hull loss when a BOAC flight departing Rome's Ciampino airport failed to become airborne and ran into rough ground at the end of the runway. Two passengers sustained minor injuries, but the aircraft, G-ALYZ, was a write-off. On 3 March 1953, a new Canadian Pacific Airlines Comet 1A, registered CF-CUN and named Empress of Hawaii, failed to become airborne while attempting a night takeoff from Karachi, Pakistan, on a delivery flight to Australia. The aircraft plunged into a dry drainage canal and collided with an embankment, killing all five crew and six passengers on board. The accident was the first fatal jetliner crash. In response, Canadian Pacific cancelled its remaining order for a second Comet 1A and never operated the type in commercial service. Both early accidents were originally attributed to pilot error, as over-rotation had led to a loss of lift from the leading edge of the aircraft's wings. It was later determined that the Comet's wing profile experienced a loss of lift at high angles of attack, and that its engine inlets also suffered a lack of pressure recovery in the same conditions. As a result, de Havilland re-profiled the wings' leading edge with a pronounced "droop", and wing fences were added to control spanwise flow.
A fictionalised investigation into the Comet's takeoff accidents was the subject of the novel Cone of Silence (1959) by Arthur David Beaty, a former BOAC captain. Cone of Silence was made into a film in 1960, and Beaty also recounted the story of the Comet's takeoff accidents in a chapter of his non-fiction work, Strange Encounters: Mysteries of the Air (1984). The Comet's second fatal accident occurred on 2 May 1953, when BOAC Flight 783, a Comet 1 registered G-ALYV, crashed in a severe thundersquall six minutes after taking off from Calcutta-Dum Dum (now Netaji Subhash Chandra Bose International Airport), India, killing all 43 on board. Witnesses observed the wingless Comet on fire plunging into the village of Jagalgori, leading investigators to suspect structural failure.

India Court of Inquiry

After the loss of G-ALYV, the Government of India convened a court of inquiry to examine the cause of the accident. Professor Natesan Srinivasan joined the inquiry as the main technical expert. A large portion of the aircraft was recovered and reassembled at Farnborough, during which the break-up was found to have begun with a failure of the left elevator spar in the horizontal stabiliser. The inquiry concluded that the aircraft had encountered extreme negative g-forces during takeoff; severe turbulence generated by adverse weather was determined to have induced down-loading, leading to the loss of the wings. Examination of the cockpit controls suggested that the pilot may have inadvertently over-stressed the aircraft when pulling out of a steep dive by over-manipulation of the fully powered flight controls. Investigators did not consider metal fatigue as a contributory cause. The inquiry's recommendations revolved around the enforcement of stricter speed limits during turbulence, and two significant design changes also resulted: all Comets were equipped with weather radar, and the "Q feel" system was introduced, which ensured that control column forces (invariably called stick forces) would be proportional to control loads. This artificial feel was the first of its kind to be introduced in any aircraft. The Comet 1 and 1A had been criticised for a lack of "feel" in their controls, and investigators suggested that this might have contributed to the pilot's alleged over-stressing of the aircraft; Comet chief test pilot John Cunningham contended that the jetliner flew smoothly and was highly responsive, in a manner consistent with other de Havilland aircraft.

Comet disasters of 1954

Just over a year later, Rome's Ciampino airport, the site of the first Comet hull loss, was the origin of a more disastrous Comet flight. On 10 January 1954, 20 minutes after taking off from Ciampino, the first production Comet, G-ALYP, broke up in mid-air while operating BOAC Flight 781 and crashed into the Mediterranean off the Italian island of Elba, with the loss of all 35 on board. With no witnesses to the disaster and only partial radio transmissions as incomplete evidence, no obvious reason for the crash could be deduced. Engineers at de Havilland immediately recommended 60 modifications aimed at any possible design flaw, while the Abell Committee met to determine potential causes of the crash. BOAC also voluntarily grounded its Comet fleet pending investigation into the causes of the accident.

Abell Committee Court of Inquiry

Media attention centred on potential sabotage; other speculation ranged from clear-air turbulence to an explosion of vapour in an empty fuel tank.
The Abell Committee focused on six potential aerodynamic and mechanical causes: control flutter (which had led to the loss of DH 108 prototypes), structural failure due to high loads, metal fatigue of the wing structure, failure of the powered flight controls, failure of the window panels leading to explosive decompression, and fire and other engine problems. The committee concluded that fire was the most likely cause of the problem, and changes were made to the aircraft to protect the engines and wings from damage that might lead to another fire. During the investigation, the Royal Navy conducted recovery operations. The first pieces of wreckage were discovered on 12 February 1954, and the search continued until September 1954, by which time 70 per cent by weight of the main structure, 80 per cent of the power section, and 50 per cent of the aircraft's systems and equipment had been recovered. The forensic reconstruction effort had just begun when the Abell Committee reported its findings. No apparent fault in the aircraft was found, and the British government decided against opening a further public inquiry into the accident. The prestigious nature of the Comet project, particularly for the British aerospace industry, and the financial impact of the aircraft's grounding on BOAC's operations both served to pressure the inquiry to end without further investigation. Comet flights resumed on 23 March 1954. On 8 April 1954, Comet G-ALYY ("Yoke Yoke"), on charter to South African Airways, was on a leg from Rome to Cairo (part of a longer route, SA Flight 201 from London to Johannesburg) when it crashed in the Mediterranean near Naples with the loss of all 21 passengers and crew on board. The Comet fleet was immediately grounded once again, and a large investigation board was formed under the direction of the Royal Aircraft Establishment (RAE). Prime Minister Winston Churchill tasked the Royal Navy with helping to locate and retrieve the wreckage so that the cause of the accident could be determined. The Comet's Certificate of Airworthiness was revoked, and Comet 1 line production was suspended at the Hatfield factory, while the BOAC fleet was permanently grounded, cocooned and stored.

Cohen Committee Court of Inquiry

On 19 October 1954, the Cohen Committee was established to examine the causes of the Comet crashes. Chaired by Lord Cohen, the committee tasked an investigation team led by Sir Arnold Hall, Director of the RAE at Farnborough, to perform a more detailed investigation. Hall's team began considering fatigue as the most likely cause of both accidents and initiated further research into measurable strain on the aircraft's skin. With the recovery of large sections of G-ALYP from the Elba crash and BOAC's donation of an identical airframe, G-ALYU, for further examination, an extensive "water torture" test eventually provided conclusive results. This time, the entire fuselage was tested in a dedicated water tank built specifically at Farnborough to accommodate its full length. In water-tank testing, engineers subjected G-ALYU to repeated pressurisation and over-pressurisation, and on 24 June 1954, after 3,057 flight cycles (1,221 actual and 1,836 simulated), G-ALYU burst open. Hall, Geoffrey de Havilland and Bishop were immediately called to the scene, where the water tank was drained to reveal that the fuselage had ripped open at a bolt hole forward of the forward left escape hatch cut-out.
The failure then ran longitudinally along a fuselage stringer at the widest point of the fuselage and through a cut-out for an escape hatch. The skin thickness was discovered to be insufficient to distribute the load across the structure, leading to overloading of the fuselage frames adjacent to fuselage cut-outs (Cohen Inquiry accident report, Fig. 7). The fuselage frames did not have sufficient strength to prevent the crack from propagating. Although the fuselage failed after a number of cycles representing three times the life of G-ALYP at the time of its accident, this was still much earlier than expected. A further test reproduced the same results. Based on these findings, Comet 1 structural failures could be expected at anywhere from 1,000 to 9,000 cycles. Before the Elba accident, G-ALYP had made 1,290 pressurised flights, while G-ALYY had made 900 pressurised flights before crashing. Dr P. B. Walker, Head of the Structures Department at the RAE, said he was not surprised by this, noting that the difference was about three to one, and that previous experience with metal fatigue suggested a range of nine to one between experiment and outcome in the field was possible. The RAE also reconstructed about two-thirds of G-ALYP at Farnborough and found fatigue crack growth from a rivet hole at the low-drag fibreglass forward aperture around the automatic direction finder, which had caused a catastrophic break-up of the aircraft in high-altitude flight. The exact origin of the fatigue failure could not be identified, but was localised to the ADF antenna cut-out. A countersunk bolt hole and manufacturing damage, repaired at the time of construction using methods that were common but likely insufficient for the stresses involved, were both located along the failure crack. Once the crack had initiated, the skin failed from the ADF cut-out and propagated downward and rearward along a stringer, resulting in an explosive decompression. It was also found that the punch-rivet construction technique employed in the Comet's design had exacerbated its structural fatigue problems; the aircraft's windows had been engineered to be glued and riveted, but had been punch-riveted only. Unlike drill riveting, the imperfect hole created by punch riveting could cause fatigue cracks to start developing around the rivet. Principal investigator Hall accepted the RAE's conclusion of design and construction flaws as the likely explanation for G-ALYU's structural failure after 3,060 pressurisation cycles.

Earlier structural indications

The lightness of the Comet 1's construction (intended not to overtax the relatively low-thrust de Havilland Ghost engines) had been noted by de Havilland test pilot John Wilson while flying the prototype during a Farnborough flypast in 1949. On the flight, he was accompanied by Chris Beaumont, chief test pilot of the de Havilland Engine Company, who stood in the entrance to the cockpit behind the flight engineer. He stated: "Every time we pulled 2 1/2-3G to go around the corner, Chris found that the floor on which he was standing was bulging up, and there was a loud bang at that speed from the nose of the aircraft where the skin 'panted' (flexed), so when we heard this bang we knew, without checking the airspeed indicator, that we were doing 340 knots. In later years we realised that these were the indications of how flimsy the structure really was."
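As illustrative background (general fatigue theory, not a finding quoted from the inquiry): holes and cut-outs seed fatigue cracks because they concentrate stress. For the idealised case of a small circular hole in a wide plate under uniaxial tension, Kirsch's classical solution gives

$$ \sigma_{max} = K_t\,\sigma_{nominal}, \qquad K_t = 3 \text{ (circular hole, infinite plate)}, $$

and sharper features such as square corners, countersunk bolt holes and punched rivet holes raise $K_t$ further. Under the Comet's repeated pressurisation cycles, the skin at such features therefore saw peak stresses several times the nominal hoop stress, which is consistent with the cracks in both the test fuselage and G-ALYP originating at cut-outs and rivet holes.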
Square window myths

Despite the findings of the Cohen inquiry, a number of myths have evolved around the cause of the Comet 1's accidents, the most commonly quoted concerning the "square" passenger windows. While the report noted that stress around fuselage cut-outs, emergency exits and windows was much higher than expected, owing to de Havilland's assumptions and testing methods, the shape of the passenger windows has been commonly misunderstood and cited as a cause of the fuselage failure. In fact, the mention of "windows" in the Cohen report's conclusion refers specifically to the origin point of the failure, the ADF antenna cut-out "windows" located above the cockpit, not to the passenger windows. The shape of the passenger windows was not implicated in any failure mode detailed in the accident report and was not viewed as a contributing factor. A number of other pressurised airliners of the period, including the Boeing 377 Stratocruiser, the Douglas DC-7, and the DC-8, had larger and more "square" windows than the Comet 1, and experienced no such failures. The general shape of the Comet 1's windows in fact resembles that of a slightly larger Boeing 737 window mounted horizontally: they are rectangular, not square, have rounded corners, are within 5 per cent of the corner radius of the Boeing 737's windows, and are virtually identical to those of modern airliners. Paul Withey, Professor of Casting at the University of Birmingham School of Metallurgy, analysing all available data in a video presentation delivered in 2019, states: "The fact that de Havilland put oval windows into later marks is not because of any 'squareness' of the windows that caused failure. De Havilland went to oval windows on the subsequent marks because it was easier to Redux them in (use adhesive) – nothing to do with the stress concentration and it's purely to remove rivets" (from the structure). Surviving Comet 1s are on view at the RAF Museum Cosford and the de Havilland Aircraft Museum at Salisbury Hall, London Colney.

Response

In responding to the report, de Havilland stated: "Now that the danger of high level fatigue in pressure cabins has been generally appreciated, de Havillands will take adequate measures to deal with this problem. To this end we propose to use thicker gauge materials in the pressure cabin area and to strengthen and redesign windows and cut outs and so lower the general stress to a level at which local stress concentrations either at rivets and bolt holes or as such may occur by reason of cracks caused accidentally during manufacture or subsequently, will not constitute a danger." The Cohen inquiry closed on 24 November 1954, having "found that the basic design of the Comet was sound", and made no observations or recommendations regarding the shape of the windows. De Havilland nonetheless began a refit programme to strengthen the fuselage and wing structure, employing thicker-gauge skin and replacing the rectangular windows and panels with rounded versions; this was not, however, related to the erroneous "square" window claim, as can be seen from the fact that the fuselage escape hatch cut-outs (the source of the failure in test aircraft G-ALYU) retained their rectangular shape. Following the Comet inquiry, aircraft were designed to "fail-safe" or "safe-life" standards, though several subsequent catastrophic fatigue failures, such as that of Aloha Airlines Flight 243 on 28 April 1988, have occurred.
Resumption of service

With the discovery of the structural problems of the early series, all remaining Comets were withdrawn from service, while de Havilland launched a major effort to build a new version that would be both larger and stronger. All outstanding orders for the Comet 2 were cancelled by airline customers. All production Comet 2s were also modified with thicker-gauge skin to better distribute loads and alleviate the fatigue problems (most of these served with the RAF as the Comet C2); a programme to produce a Comet 2 with more powerful Avons was delayed. The prototype Comet 3 first flew in July 1954 and was tested in an unpressurised state pending completion of the Cohen inquiry. Comet commercial flights would not resume until 1958. Development flying and route-proving with the Comet 3 allowed accelerated certification of what was destined to be the most successful variant of the type, the Comet 4. All airline customers for the Comet 3 subsequently cancelled their orders and switched to the Comet 4, which was based on the Comet 3 but with improved fuel capacity. BOAC ordered 19 Comet 4s in March 1955, and the American operator Capital Airlines ordered 14 Comets in July 1956. Capital's order included 10 Comet 4As, a variant modified for short-range operations with a stretched fuselage and short wings, lacking the pinion (outboard wing) fuel tanks of the Comet 4. Financial problems and a takeover by United Airlines meant that Capital would never operate the Comet. The Comet 4 first flew on 27 April 1958 and received its Certificate of Airworthiness on 24 September 1958; the first was delivered to BOAC the next day. The base price of a new Comet 4 was roughly £1.14 million (£ million in ). The Comet 4 enabled BOAC to inaugurate the first regular jet-powered transatlantic service on 4 October 1958, between London and New York (albeit still requiring a fuel stop at Gander International Airport, Newfoundland, on westward North Atlantic crossings). While BOAC gained publicity as the first to provide transatlantic jet service, by the end of the month rival Pan American World Airways was flying the Boeing 707 on the New York to Paris route, with a fuel stop at Gander in both directions, and in 1960 began flying Douglas DC-8s on its transatlantic routes as well. The American jets were larger, faster, longer-ranged and more cost-effective than the Comet. After analysing route structures for the Comet, BOAC reluctantly cast about for a successor, and in 1956 entered into an agreement with Boeing to purchase the 707. The Comet 4 was ordered by two other airlines: Aerolíneas Argentinas took delivery of six Comet 4s from 1959 to 1960, using them between Buenos Aires and Santiago, New York and Europe, and East African Airways received three new Comet 4s from 1960 to 1962 and operated them to the United Kingdom and to Kenya, Tanzania, and Uganda. The Comet 4A ordered by Capital Airlines was instead built for BEA as the Comet 4B, with a further fuselage stretch of and seating for 99 passengers. The first Comet 4B flew on 27 June 1959, and BEA began Tel Aviv to London-Heathrow services on 1 April 1960. Olympic Airways was the only other customer to order the type. The last Comet 4 variant, the Comet 4C, first flew on 31 October 1959 and entered service with Mexicana in 1960. The Comet 4C had the Comet 4B's longer fuselage and the longer wings and extra fuel tanks of the original Comet 4, which gave it a longer range than the 4B.
Ordered by Kuwait Airways, Middle East Airlines, Misrair (later Egyptair), and Sudan Airways, it was the most popular Comet variant.

Later service

In 1959, BOAC began shifting its Comets from transatlantic routes and released the type to associate companies, making the Comet 4's ascendancy as a premier airliner brief. Besides the 707 and DC-8, the introduction of the Vickers VC10 allowed competing aircraft to assume the high-speed, long-range passenger service role pioneered by the Comet. In 1960, as part of a government-backed consolidation of the British aerospace industry, de Havilland itself was acquired by Hawker Siddeley, within which it became a wholly owned division. Orders declined in the 1960s, with a total of 76 Comet 4s delivered from 1958 to 1964. In November 1965, BOAC retired its Comet 4s from revenue service; other operators continued commercial passenger flights with the Comet until 1981. Dan-Air played a significant role in the fleet's later history and at one time owned all 49 remaining airworthy civil Comets. On 14 March 1997, Comet 4C XS235, named Canopus, which had been acquired by the British Ministry of Technology and used for radio, radar and avionics trials, made the last documented flight of a production Comet.

Legacy

The Comet is widely regarded as both an adventurous step forward and a supreme tragedy; the aircraft's legacy includes advances in aircraft design and in accident investigation. The inquiries into the accidents that plagued the Comet 1 were perhaps some of the most extensive and revolutionary ever to take place, establishing precedents in accident investigation; many of the deep-sea salvage and aircraft reconstruction techniques employed have remained in use within the aviation industry. Despite the Comet being subjected to what was then the most rigorous testing of any contemporary airliner, pressurisation and the dynamic stresses involved were not thoroughly understood at the time of the aircraft's development, nor was the concept of metal fatigue. Though these lessons could be implemented on the drawing board for future aircraft, corrections could only be applied retroactively to the Comet. According to de Havilland's chief test pilot John Cunningham, who had flown the prototype's first flight, representatives from American manufacturers such as Boeing and Douglas privately disclosed that if de Havilland had not experienced the Comet's pressurisation problems first, it would have happened to them. Cunningham likened the Comet to the later Concorde and added that he had assumed that the aircraft would change aviation, which it subsequently did. Aviation author Bill Withuhn concluded that the Comet had pushed "'the state-of-the-art' beyond its limits." Aeronautical-engineering firms were quick to respond to the Comet's commercial advantages and technical flaws alike; other aircraft manufacturers learned from, and profited by, the hard-earned lessons embodied by de Havilland's Comet. The Comet's buried engines were used on some other early jet airliners, such as the Tupolev Tu-104, but later aircraft, such as the Boeing 707 and Douglas DC-8, differed in employing podded engines held on pylons beneath the wings. Boeing stated that podded engines were selected for its passenger airliners because buried engines carried a higher risk of catastrophic wing failure in the event of an engine fire.
In response to the Comet tragedies, manufacturers also developed new methods of pressurisation testing, often going so far as to explore rapid depressurisation; subsequent fuselage skins were of a greater thickness than the skin of the Comet.

Variants

Comet 1

The Comet 1 was the first model produced, with a total of 12 aircraft in service and test. Following closely the design features of the two prototypes, the only noticeable change was the adoption of four-wheel bogie main undercarriage units, replacing the single main wheels. Four Ghost 50 Mk 1 engines were fitted (later replaced by more powerful Ghost DGT3 series engines). The span was , and the overall length ; the maximum takeoff weight was over , and over 40 passengers could be carried. An updated Comet 1A was offered with a higher allowed weight, greater fuel capacity, and water-methanol injection; 10 were produced. In the wake of the 1954 disasters, all Comet 1s and 1As were brought back to Hatfield, placed in protective cocoons and retained for testing. All were substantially damaged in stress testing or were scrapped entirely.

Comet 1X: Two RCAF Comet 1As were rebuilt with heavier-gauge skins to the Comet 2 standard for the fuselage, and renamed Comet 1X.

Comet 1XB: Four Comet 1As were upgraded to a 1XB standard with a reinforced fuselage structure and oval windows. Both 1X series were limited in the number of pressurisation cycles.

The DH 111 Comet Bomber, a nuclear bomb-carrying variant developed to Air Ministry specification B35/46, was submitted to the Air Ministry on 27 May 1948. It had originally been proposed in 1948 as the "PR Comet", a high-altitude photo-reconnaissance adaptation of the Comet 1. The Ghost DGT3-powered airframe featured a narrowed fuselage, a bulbous nose with H2S Mk IX radar, and a four-crew-member pressurised cockpit under a large bubble canopy. Fuel tanks carrying were added to attain a range of . The proposed DH 111 received a negative evaluation from the Royal Aircraft Establishment over serious concerns regarding weapons storage; this, along with the redundant capability offered by the RAF's proposed V bomber trio, led de Havilland to abandon the project on 22 October 1948.

Comet 2

The Comet 2 had a slightly larger wing, higher fuel capacity and more powerful Rolls-Royce Avon engines, all of which improved the aircraft's range and performance; its fuselage was longer than the Comet 1's. Design changes had been made to make the aircraft more suitable for transatlantic operations. Following the Comet 1 disasters, these models were rebuilt with heavier-gauge skin and rounded windows, with the Avon engines given larger air intakes and outward-curving jet tailpipes. A total of 12 of the 44-seat Comet 2s were ordered by BOAC for the South Atlantic route. The first production aircraft (G-AMXA) flew on 27 August 1953. Although these aircraft performed well on test flights over the South Atlantic, their range was still not suitable for the North Atlantic. All but four Comet 2s were allocated to the RAF, with deliveries beginning in 1955. Modifications to the interiors allowed the Comet 2s to be used in several roles. For VIP transport, the seating and accommodations were altered, and provisions for carrying medical equipment, including iron lungs, were incorporated. Specialised signals intelligence and electronic surveillance capability was later added to some airframes.

Comet 2X: A single Comet Mk 1 powered by four Rolls-Royce Avon 502 turbojet engines, used as a development aircraft for the Comet 2.
Comet 2E: Two Comet 2 airliners were fitted with Avon 504s in the inner nacelles and Avon 524s in the outer ones. These aircraft were used by BOAC for proving flights during 1957–1958. Comet T2: The first two of 10 Comet 2s for the RAF were fitted out as crew trainers, the first aircraft (XK669) flying initially on 9 December 1955. Comet C2: Eight Comet 2s originally destined for the civil market were completed for the RAF and assigned to No. 216 Squadron. Comet 2R: Three Comet 2s were modified for use in radar and electronic systems development, initially assigned to No. 90 Group (later Signals Command) of the RAF. In service with No. 192 and No. 51 Squadrons, the 2R series was equipped to monitor Warsaw Pact signal traffic and operated in this role from 1958. Comet 3 The Comet 3, which flew for the first time on 19 July 1954, was a Comet 2 lengthened by and powered by Avon M502 engines developing . The variant added wing pinion tanks and offered greater capacity and range. The Comet 3 was destined to remain a development series, since it did not incorporate the fuselage-strengthening modifications of the later series aircraft and could not be fully pressurised. Only two Comet 3 airframes were built; G-ANLO, the only airworthy Comet 3, was demonstrated at the Farnborough SBAC Show in September 1954. The other Comet 3 airframe was not completed to production standard and was used primarily for ground-based structural and technology testing during development of the similarly sized Comet 4. Another nine Comet 3 airframes were left unfinished, their construction abandoned at Hatfield. In BOAC colours, G-ANLO was flown by John Cunningham on a marathon round-the-world promotional tour in December 1955. As a flying testbed, it was later fitted with Avon RA29 engines and reduced-span wings in place of the original long-span wings, redesignated the Comet 3B, and demonstrated in British European Airways (BEA) livery at the Farnborough Airshow in September 1958. Assigned in 1961 to the Blind Landing Experimental Unit (BLEU) at RAE Bedford, G-ANLO's final testbed role was in automatic landing system experiments. When retired in 1973, the airframe was used for foam-arrester trials before the fuselage was salvaged at BAE Woodford to serve as the mock-up for the Nimrod. Comet 4 The Comet 4 was a further improvement on the stretched Comet 3, with even greater fuel capacity. The design had progressed significantly from the original Comet 1, growing by and typically seating 74 to 81 passengers compared to the Comet 1's 36 to 44 (119 passengers could be accommodated in a special charter seating package in the later 4C series). The Comet 4 was considered the definitive series, having a longer range, higher cruising speed and higher maximum takeoff weight. These improvements were possible largely because of the Avon engines, which had twice the thrust of the Comet 1's Ghosts. Deliveries to BOAC began on 30 September 1958 with two 48-seat aircraft, which were used to initiate the first scheduled transatlantic services. Comet 4B: Originally developed for Capital Airlines as the 4A, the 4B featured greater capacity through a 2 m longer fuselage and a shorter wingspan; 18 were produced. Comet 4C: This variant combined the Comet 4's wings with the 4B's longer fuselage; 28 were produced. The last two Comet 4C fuselages were used to build prototypes of the Hawker Siddeley Nimrod maritime patrol aircraft. 
A Comet 4C (SA-R-7) was ordered by Saudi Arabian Airlines and eventually passed to the Saudi Royal Flight for the exclusive use of King Saud bin Abdul Aziz. Extensively modified at the factory, the aircraft included a VIP front cabin, a bed and special toilets with gold fittings, and was distinguished by a green, gold and white colour scheme with polished wings and lower fuselage that was commissioned from aviation artist John Stroud. Following its first flight, the special-order Comet 4C was described as "the world's first executive jet". Comet 5 proposal The Comet 5 was proposed as an improvement over previous models, including a wider fuselage with five-abreast seating, a wing with greater sweep and podded Rolls-Royce Conway engines. Without support from the Ministry of Transport, the proposal languished as a hypothetical aircraft and was never realised. Hawker Siddeley Nimrod The last two Comet 4C aircraft produced were modified as prototypes (XV148 and XV147) to meet a British requirement for a maritime patrol aircraft for the Royal Air Force; initially named "Maritime Comet", the design was designated Type HS 801. This variant became the Hawker Siddeley Nimrod, and production aircraft were built at the Hawker Siddeley factory at Woodford Aerodrome. The type entered service in 1969, and five Nimrod variants were produced. The final Nimrod aircraft were retired in June 2011. Operators The original operators of the early Comet 1 and the Comet 1A were BOAC, Union Aéromaritime de Transport and Air France. All early Comets were withdrawn from service for accident inquiries, during which orders from British Commonwealth Pacific Airlines, Japan Air Lines, Linea Aeropostal Venezolana, National Airlines, Pan American World Airways and Panair do Brasil were cancelled. When the redesigned Comet 4 entered service, it was flown by customers BOAC, Aerolíneas Argentinas and East African Airways; the Comet 4B variant was operated by BEA and Olympic Airways; and the Comet 4C model was flown by Kuwait Airways, Mexicana, Middle East Airlines, Misrair Airlines and Sudan Airways. Other operators used the Comet either through leasing arrangements or through second-hand acquisitions. BOAC's Comet 4s were leased out to Air Ceylon, Air India, AREA Ecuador, Central African Airways and Qantas; after 1965 they were sold to AREA Ecuador, Dan-Air, Mexicana, Malaysian Airways and the Ministry of Defence. BEA's Comet 4Bs were chartered by Cyprus Airways, Malta Airways and Transportes Aéreos Portugueses. Channel Airways obtained five Comet 4Bs from BEA in 1970 for inclusive-tour charters. Dan-Air bought all of the surviving flyable Comet 4s from the late 1960s into the 1970s; some were for spares reclamation, but most were operated on the carrier's inclusive-tour charters; a total of 48 Comets of all marks were acquired by the airline. In military service, the United Kingdom's Royal Air Force was the largest operator, with 51 Squadron (1958–1975; Comet C2, 2R), 192 Squadron (1957–1958; Comet C2, 2R), 216 Squadron (1956–1975; Comet C2 and C4) and the Royal Aircraft Establishment using the aircraft. The Royal Canadian Air Force also operated Comet 1As (later retrofitted to 1XB standard) through its 412 Squadron from 1953 to 1963. Accidents and incidents The Comet was involved in 25 hull-loss accidents, including 13 fatal crashes that resulted in 492 fatalities. 
Pilot error was blamed for the type's first fatal accident, which occurred during takeoff at Karachi, Pakistan, on 3 March 1953 and involved a Canadian Pacific Airlines Comet 1A. Three fatal Comet 1 crashes were due to structural problems: British Overseas Airways Corporation Flight 783 on 2 May 1953, British Overseas Airways Corporation Flight 781 on 10 January 1954, and South African Airways Flight 201 on 8 April 1954. These accidents led to the grounding of the entire Comet fleet. After design modifications were implemented, Comet services resumed on 4 October 1958 with Comet 4s. Pilot error resulting in controlled flight into terrain was blamed for five fatal Comet 4 accidents: an Aerolíneas Argentinas crash near Asunción, Paraguay, on 27 August 1959; Aerolíneas Argentinas Flight 322 at Campinas near São Paulo, Brazil, on 23 November 1961; United Arab Airlines Flight 869 in Thailand's Khao Yai mountains on 19 July 1962; a Saudi Arabian Government crash in the Italian Alps on 20 March 1963; and United Arab Airlines Flight 844 at Tripoli, Libya, on 2 January 1971. The Dan-Air de Havilland Comet crash in Spain's Montseny range on 3 July 1970 was attributed to navigational errors by air traffic control and pilots. Other fatal Comet 4 accidents included a British European Airways crash in Ankara, Turkey, following instrument failure on 21 December 1961; a United Arab Airlines Flight 869 crash during inclement weather near Bombay, India, on 28 July 1963; and the terrorist bombing of Cyprus Airways Flight 284 off the Turkish coast on 12 October 1967. Nine Comets, including Comet 1s operated by BOAC and Union Aéromaritime de Transport and Comet 4s flown by Aerolíneas Argentinas, Dan-Air, Malaysian Airlines and United Arab Airlines, were irreparably damaged in takeoff or landing accidents that all on board survived. A hangar fire damaged a No. 192 Squadron RAF Comet 2R beyond repair on 13 September 1957, and three Middle East Airlines Comet 4Cs were destroyed by Israeli troops at Beirut, Lebanon, on 28 December 1968. Aircraft on display Since retirement, three early-generation Comet airframes have survived in museum collections. The only complete remaining Comet 1, a Comet 1XB registered G-APAS and the last Comet 1 built, is displayed at the RAF Museum Cosford. Though painted in BOAC colours, it never flew for the airline, having been first delivered to Air France and then to the Ministry of Supply after conversion to 1XB standard; this aircraft also served with the RAF as XM823. The sole surviving Comet fuselage with the original square windows, part of a Comet 1A registered F-BGNX, has undergone restoration and is on display at the de Havilland Aircraft Museum near St Albans in Hertfordshire, England. A Comet C2 Sagittarius, serial XK699 (later maintenance serial 7971M), was displayed at the gate of RAF Lyneham in Wiltshire, England from 1987. In 2012, with the planned closure of RAF Lyneham, the aircraft was slated to be dismantled and shipped to the RAF Museum Cosford, where it was to be re-assembled for display. The move was cancelled because of the level of corrosion, and the majority of the airframe was scrapped in 2013, with the cockpit section going to the Boscombe Down Aviation Collection at Old Sarum Airfield. Six complete Comet 4s are housed in museum collections. The Imperial War Museum Duxford has a Comet 4 (G-APDB), displayed originally in Dan-Air colours as part of its Flight Line Display and later in BOAC livery in its AirSpace building. 
A Comet 4B (G-APYD) is stored at the Science Museum's facility at Wroughton in Wiltshire, England. Comet 4Cs are exhibited at the Flugausstellung Peter Junior at Hermeskeil, Germany (G-BDIW); the Museum of Flight Restoration Center near Everett, Washington (N888WA); and the National Museum of Flight near Edinburgh, Scotland (G-BDIX). The last Comet to fly, Comet 4C Canopus (XS235), is kept in running condition at Bruntingthorpe Aerodrome, where fast taxi runs are regularly conducted. Since the 2000s, several parties have proposed restoring Canopus, which is maintained by a staff of volunteers, to fully airworthy condition. Bruntingthorpe Aerodrome also displays a related Hawker Siddeley Nimrod MR2 aircraft. Specifications In popular culture
Technology
Specific aircraft_2
null
182732
https://en.wikipedia.org/wiki/Sedimentary%20basin
Sedimentary basin
Sedimentary basins are region-scale depressions of the Earth's crust where subsidence has occurred and a thick sequence of sediments has accumulated to form a large three-dimensional body of sedimentary rock. They form when long-term subsidence creates a regional depression that provides accommodation space for the accumulation of sediments. Over millions to hundreds of millions of years, the deposition of sediment, primarily the gravity-driven transport of water-borne eroded material, acts to fill the depression. As the sediments are buried, they are subjected to increasing pressure and begin the processes of compaction and lithification that transform them into sedimentary rock. Sedimentary basins are created by deformation of Earth's lithosphere in diverse geological settings, usually as a result of plate tectonic activity. Mechanisms of crustal deformation that lead to subsidence and sedimentary basin formation include the thinning of underlying crust; depression of the crust by sedimentary, tectonic or volcanic loading; and changes in the thickness or density of underlying or adjacent lithosphere. Once the process of basin formation has begun, the weight of the sediments being deposited in the basin adds a further load on the underlying crust that accentuates subsidence and thus amplifies basin development as a result of isostasy. The long-term preserved geologic record of a sedimentary basin is a large-scale contiguous three-dimensional package of sedimentary rocks created during a particular period of geologic time, a 'stratigraphic succession', which geologists continue to refer to as a sedimentary basin even if it is no longer a bathymetric or topographic depression. The Williston Basin, Molasse basin and Magallanes Basin are examples of sedimentary basins that are no longer depressions. Basins formed in different tectonic regimes vary in their preservation potential. Intracratonic basins, which form on highly stable continental interiors, have a high probability of preservation. In contrast, sedimentary basins formed on oceanic crust are likely to be destroyed by subduction. Continental margins formed when new ocean basins such as the Atlantic are created as continents rift apart are likely to have lifespans of hundreds of millions of years, but may be only partially preserved when those ocean basins close as continents collide. Sedimentary basins are of great economic importance. Almost all the world's natural gas and petroleum and all of its coal are found in sedimentary rock. Many metal ores are found in sedimentary rocks formed in particular sedimentary environments. Sedimentary basins are also important from a purely scientific perspective because their sedimentary fill provides a record of Earth's history during the time in which the basin was actively receiving sediment. More than six hundred sedimentary basins have been identified worldwide. They range in area from tens of square kilometers to well over a million, and their sedimentary fills range from one to almost twenty kilometers in thickness. Classification A dozen or so common types of sedimentary basins are widely recognized and several classification schemes have been proposed; however, no single classification scheme is accepted as the standard. 
Most sedimentary basin classification schemes are based on one or more of these interrelated criteria: Plate tectonic setting - the proximity to a divergent, convergent or transform plate boundary and the type and origin of the tectonically induced forces that cause a basin to form, specifically those active at the time of sedimentation in the basin. Nature of underlying crust - basins formed on continental crust are quite different from those formed on oceanic crust, as the two types of lithosphere have very different mechanical characteristics (rheology) and different densities, which means they respond differently to isostasy. Geodynamics of basin formation - the mechanical and thermal forces that cause the lithosphere to subside to form a basin. Petroleum/economic potential - basin characteristics that influence the likelihood of the basin containing accumulations of petroleum, or the manner in which it formed. Widely-recognized types Although no one basin classification scheme has been widely adopted, several common types of sedimentary basins are widely accepted and well understood as distinct types. Over its complete lifespan a single sedimentary basin can go through multiple phases and evolve from one of these types to another, such as a rift process going to completion to form a passive margin. In this case the sedimentary rocks of the rift basin phase are overlain by those deposited during the passive margin phase. Hybrid basins, in which a single regional basin results from processes characteristic of several of these types, are also possible. Mechanics of formation Sedimentary basins form as a result of regional subsidence of the lithosphere, mostly through a few geodynamic processes. Lithospheric stretching If the lithosphere is caused to stretch horizontally, by mechanisms such as rifting (which is associated with divergent plate boundaries) or ridge-push or trench-pull (associated with convergent boundaries), the effect is believed to be twofold. The lower, hotter part of the lithosphere will "flow" slowly away from the main area being stretched, whilst the upper, cooler and more brittle crust will tend to fault (crack) and fracture. The combined effect of these two mechanisms is for Earth's surface in the area of extension to subside, creating a geographical depression which is then often infilled with water and/or sediments. (An analogy is a piece of rubber, which thins in the middle when stretched.) An example of a basin caused by lithospheric stretching is the North Sea – also an important location for significant hydrocarbon reserves. Another such feature is the Basin and Range Province, which covers most of Nevada and forms a series of horst and graben structures. Tectonic extension at divergent boundaries where continental rifting is occurring can create a nascent ocean basin, leading either to a new ocean or to the failure of the rift zone. Another expression of lithospheric stretching results in the formation of ocean basins with central ridges. The Red Sea is in fact an incipient ocean in a plate tectonic context. The mouth of the Red Sea is also a tectonic triple junction where the Indian Ocean Ridge, Red Sea Rift and East African Rift meet. This is the only place on the planet where such a triple junction in oceanic crust is exposed subaerially. This is due to the high thermal buoyancy of the junction, and also to a local crumpled zone of seafloor crust acting as a dam against the Red Sea. 
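The link between stretching and subsidence can be made concrete with a first-order local (Airy) isostatic balance: thinning the crust by a stretch factor β lets denser mantle rise beneath it, and the surface sinks until the columns are in mass balance, giving a water-filled subsidence of S = tc(1 − 1/β)(ρm − ρc)/(ρm − ρw). The sketch below applies this relation; it is a simplified, illustrative estimate only (the function name and parameter values are assumptions, and the thermal effects discussed in the next section are ignored).

```python
# Illustrative sketch: first-order (Airy) isostatic subsidence caused by
# uniform crustal stretching by a factor beta. Thermal effects are ignored;
# the crustal thickness and densities are typical, assumed values.

def stretching_subsidence(beta, tc=35_000.0, rho_c=2800.0,
                          rho_m=3300.0, rho_w=1000.0):
    """Water-filled subsidence (m) after thinning crust of initial
    thickness tc (m) by a stretch factor beta, assuming local isostasy."""
    if beta <= 1.0:
        return 0.0
    return tc * (1.0 - 1.0 / beta) * (rho_m - rho_c) / (rho_m - rho_w)

for beta in (1.2, 1.5, 2.0, 4.0):
    s_km = stretching_subsidence(beta) / 1000.0
    print(f"beta = {beta:3.1f} -> subsidence ~ {s_km:.1f} km")
```

With these assumed values, stretching by a factor of 2 yields roughly 3.8 km of water-filled subsidence, before any additional thermal or sediment-load contributions.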
Lithospheric flexure Lithospheric flexure is another geodynamic mechanism that can cause regional subsidence resulting in the creation of a sedimentary basin. If a load is placed on the lithosphere, it will tend to flex in the manner of an elastic plate. The magnitude of the lithospheric flexure is a function of the imposed load and the flexural rigidity of the lithosphere, and the wavelength of flexure is a function of the flexural rigidity of the lithospheric plate. Flexural rigidity is itself a function of the lithosphere's mineral composition, thermal regime and effective elastic thickness. Plate tectonic processes that can create sufficient loads on the lithosphere to induce basin-forming processes include: formation of new mountain belts through orogeny, which creates massive regional topographic highs that load the lithosphere and can result in foreland basins. growth of a volcanic arc as the result of subduction, or even the formation of a hotspot volcanic chain. growth of an accretionary wedge and thrusting of it onto the overriding tectonic plate, which can contribute to the formation of forearc basins. After any kind of sedimentary basin has begun to form, the water and sediments filling the basin impose an additional load, causing further lithospheric flexure and amplifying the original subsidence, regardless of the original cause of basin inception. Thermal subsidence Cooling of a lithospheric plate, particularly young oceanic crust or recently stretched continental crust, causes thermal subsidence. As the plate cools, it shrinks and becomes denser through thermal contraction. Analogous to a solid floating in a liquid, as the lithospheric plate gets denser it sinks, because it displaces more of the underlying mantle through an equilibrium process known as isostasy. Thermal subsidence is particularly measurable and observable in oceanic crust, as there is a well-established correlation between the age of the underlying crust and the depth of the ocean, with newly formed oceanic crust cooling over a period of tens of millions of years. This is an important contribution to subsidence in rift basins, backarc basins and passive margins where they are underlain by newly formed oceanic crust. Strike-slip deformation In strike-slip tectonic settings, deformation of the lithosphere occurs primarily in the horizontal plane, as a result of near-horizontal maximum and minimum principal stresses. Faults associated with these plate boundaries are primarily vertical. Wherever these vertical fault planes encounter bends, movement along the fault can create local areas of compression or tension. Where a bend in the fault plane causes the two sides to move apart, a region of transtension occurs, and it is sometimes large enough and long-lived enough to create a sedimentary basin, often called a pull-apart basin or strike-slip basin. These basins are often roughly rhombohedral in shape and may be called rhombochasms. A classic rhombochasm is illustrated by the Dead Sea rift, where northward movement of the Arabian Plate relative to the African Plate has created a strike-slip basin. The opposite effect is that of transpression, where converging movement along a curved fault plane causes collision of the opposing sides of the fault. An example is the San Bernardino Mountains north of Los Angeles, which result from convergence along a curve in the San Andreas Fault system. 
The Northridge earthquake was caused by vertical movement along local thrust and reverse faults "bunching up" against the bend in the otherwise strike-slip fault environment. Study of sedimentary basins The study of sedimentary basins as entities unto themselves is often referred to as sedimentary basin analysis. Study involving quantitative modelling of the dynamic geologic processes by which they evolved is called basin modelling. The sedimentary rocks comprising the fill of sedimentary basins hold the most complete historical record of the evolution of the Earth's surface over time. Regional study of these rocks can be used as the primary record for different kinds of scientific investigation aimed at understanding and reconstructing the Earth's past plate tectonics (paleotectonics), geography (paleogeography), climate (paleoclimatology), oceans (paleoceanography) and habitats (paleoecology and paleobiogeography). Sedimentary basin analysis is thus an important area of study for purely scientific and academic reasons. There are, however, also important economic incentives for understanding the processes of sedimentary basin formation and evolution, because almost all of the world's fossil fuel reserves were formed in sedimentary basins. All of these perspectives on the history of a particular region are based on the study of a large three-dimensional body of sedimentary rocks that resulted from the fill of one or more sedimentary basins over time. The scientific study of stratigraphy, and in recent decades sequence stratigraphy, is focused on understanding the three-dimensional architecture, packaging and layering of this body of sedimentary rocks as a record resulting from sedimentary processes acting over time, influenced by global sea-level change and regional plate tectonics. Surface geologic study Where the sedimentary rocks comprising a sedimentary basin's fill are exposed at the Earth's surface, traditional field geology and aerial photography techniques, as well as satellite imagery, can be used in the study of sedimentary basins. Subsurface geologic study Much of a sedimentary basin's fill often remains buried below the surface, often submerged in the ocean, and thus cannot be studied directly. Acoustic imaging using seismic reflection, acquired through seismic data acquisition and studied through the specific sub-discipline of seismic stratigraphy, is the primary means of understanding the three-dimensional architecture of the basin's fill through remote sensing. Direct sampling of the rocks themselves is accomplished by the drilling of boreholes and the retrieval of rock samples in the form of both core samples and drill cuttings. These allow geologists to study small samples of the rocks directly and, very importantly, allow paleontologists to study the microfossils they contain (micropaleontology). At the time they are being drilled, boreholes are also surveyed by pulling electronic instruments along the length of the borehole in a process known as well logging. Well logging, which is sometimes appropriately called borehole geophysics, uses electromagnetic and radioactive properties of the rocks surrounding the borehole, as well as their interaction with the fluids used in drilling the borehole, to create a continuous record of the rocks along the length of the borehole, displayed as a family of curves. 
Comparison of well log curves between multiple boreholes can be used to understand the stratigraphy of a sedimentary basin, particularly if used in conjunction with seismic stratigraphy.
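As a toy illustration of this kind of curve comparison, the sketch below estimates the relative depth offset of a distinctive bed between two boreholes by cross-correlating their log curves. The data and function name are hypothetical; real correlation workflows are far more involved, but the core idea of finding the best-matching shift is the same.

```python
# Hypothetical sketch: estimating the depth offset between two well-log
# curves (e.g., gamma-ray logs sampled at a constant depth step) by
# finding the lag with maximum cross-correlation.
import numpy as np

def depth_offset(log_a, log_b, step_m=0.5):
    """Estimate how much deeper (in m) features in log_b sit relative to
    log_a; both logs are 1-D arrays sampled every step_m metres."""
    a = (log_a - log_a.mean()) / log_a.std()
    b = (log_b - log_b.mean()) / log_b.std()
    corr = np.correlate(a, b, mode="full")   # c[s] = sum_j a[j] * b[j - s]
    s = np.argmax(corr) - (len(b) - 1)       # best-matching lag, in samples
    return -s * step_m                       # positive = log_b is deeper

# Toy example: the second "borehole" sees the same bed 10 m deeper.
depth = np.arange(0.0, 200.0, 0.5)
marker = np.exp(-((depth - 100.0) ** 2) / 20.0)          # a distinctive bed
log_a = marker + 0.05 * np.random.default_rng(0).standard_normal(depth.size)
log_b = np.roll(marker, 20)                              # 20 samples = 10 m
print(f"estimated offset: {depth_offset(log_a, log_b):.1f} m")  # ~10.0 m
```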
Physical sciences
Landforms
null
182765
https://en.wikipedia.org/wiki/Prussian%20blue
Prussian blue
Prussian blue (also known as Berlin blue, Brandenburg blue, Parisian blue and Paris blue) is a dark blue pigment produced by oxidation of ferrous ferrocyanide salts. It has the chemical formula Fe4[Fe(CN)6]3. Turnbull's blue is chemically essentially identical, except that it has different impurities and particle sizes—because it is made from different reagents—and thus it has a slightly different color. Prussian blue was created in the early 18th century and is the first modern synthetic pigment. It is prepared as a very fine colloidal dispersion, because the compound is not soluble in water. It contains variable amounts of other ions, and its appearance depends sensitively on the size of the colloidal particles. The pigment is used in paints, it became prominent in early 19th-century Japanese woodblock prints, and it is the traditional "blue" in technical blueprints. In medicine, orally administered Prussian blue is used as an antidote for certain kinds of heavy metal poisoning, e.g., by thallium(I) and radioactive isotopes of caesium. The therapy exploits Prussian blue's ion-exchange properties and high affinity for certain "soft" metal cations. It is on the World Health Organization's List of Essential Medicines, the most important medications needed in a basic health system. Prussian blue lent its name to prussic acid (hydrogen cyanide), which is derived from it. In German, hydrogen cyanide is called Blausäure ('blue acid'). Cyanide also acquired its name from this relationship. History Prussian blue pigment is significant since it was the first stable and relatively lightfast blue pigment to be widely used since the loss of knowledge regarding the synthesis of Egyptian blue. European painters had previously used a number of pigments such as indigo dye, smalt and Tyrian purple, as well as the extremely expensive ultramarine made from lapis lazuli. Japanese painters and woodblock print artists, likewise, did not have access to a long-lasting blue pigment until they began to import Prussian blue from Europe. Prussian blue (Berliner Blau in German) was probably synthesized for the first time by the paint maker Johann Jacob Diesbach in Berlin around 1706. The pigment is believed to have been created accidentally when Diesbach used potash tainted with blood to make a red cochineal dye. The original dye required potash, ferric sulfate and dried cochineal. Instead, the blood, potash and iron sulfate reacted to create a compound known as iron ferrocyanide, which, unlike the desired red pigment, has a very distinct blue hue. It was named Preussisch blau and Berlinisch Blau in 1709 by its first trader. The pigment readily replaced the expensive lapis lazuli-derived ultramarine and was an important topic in the letters exchanged between Johann Leonhard Frisch and the president of the Prussian Academy of Sciences, Gottfried Wilhelm Leibniz, between 1708 and 1716. It is first mentioned in a letter written by Frisch to Leibniz on March 31, 1708. Not later than 1708, Frisch began to promote and sell the pigment across Europe. By August 1709, the pigment had been termed Preussisch blau; by November 1709, the German name Berlinisch Blau had been used for the first time by Frisch. Frisch himself is the author of the first known publication on Prussian blue, a paper of 1710, as can be deduced from his letters. Diesbach had been working for Frisch since about 1701. To date, the Entombment of Christ, dated 1709, by Pieter van der Werff (Picture Gallery, Sanssouci, Potsdam) is the oldest known painting in which Prussian blue was used. Around 1710, painters at the Prussian court were already using the pigment. 
At around the same time, Prussian blue arrived in Paris, where Antoine Watteau and later his successors Nicolas Lancret and Jean-Baptiste Pater used it in their paintings. François Boucher used the pigment extensively for both blues and greens. In 1731, Georg Ernst Stahl published an account of the first synthesis of Prussian blue. The story involves not only Diesbach but also Johann Konrad Dippel. Diesbach was attempting to create a red lake pigment from cochineal, but obtained the blue instead as a result of the contaminated potash he was using. He had borrowed the potash from Dippel, who had used it to produce his animal oil. No other known historical source mentions Dippel in this context. It is, therefore, difficult to judge the reliability of this story today. In 1724, the recipe was finally published by John Woodward. In 1752, the French chemist Pierre J. Macquer made the important step of showing that Prussian blue could be reduced to a salt of iron and a new acid, which could be used to reconstitute the dye. The new acid, hydrogen cyanide, first isolated from Prussian blue in pure form and characterized in 1782 by the Swedish chemist Carl Wilhelm Scheele, was eventually given the name Blausäure (literally "blue acid") because of its derivation from Prussian blue, and in English became known popularly as prussic acid. Cyanide, a colorless anion that forms in the process of making Prussian blue, derives its name from the Greek word for dark blue. In the late 1800s, Rabbi Gershon Henoch Leiner, the Hasidic Rebbe of Radzin, dyed tzitziyot with Prussian blue made with sepia, believing that this was the true techeiles dye. Even though some have questioned its identity as techeiles because of its artificial production, and claimed that had Rabbi Leiner been aware of this he would have retracted his position that his dye was techeiles, others have disputed this and claimed that Rabbi Leiner would not have retracted. Military symbol From the beginning of the 18th century, Prussian blue was the predominant color of the uniform coats worn by the infantry and artillery regiments of the Prussian Army. As Dunkelblau (dark blue), this shade achieved a symbolic importance and continued to be worn by most German soldiers for ceremonial and off-duty occasions until the outbreak of World War I, when it was superseded by greenish-gray field gray (Feldgrau). Synthesis Prussian blue is produced by oxidation of ferrous ferrocyanide salts. These white solids have the formula M2Fe[Fe(CN)6], where M = K+ or Na+. The iron in this material is all ferrous, hence the absence of the deep color associated with mixed valency. Oxidation of this white solid with hydrogen peroxide or sodium chlorate produces ferricyanide and affords Prussian blue. A "soluble" form, KFe[Fe(CN)6], which is really colloidal, can be made from potassium ferrocyanide and iron(III): K+ + Fe3+ + [Fe(CN)6]4− → KFe[Fe(CN)6] The similar reaction of potassium ferricyanide and iron(II) results in the same colloidal solution, because ferricyanide is converted into ferrocyanide. The "insoluble" Prussian blue is obtained if, in the reactions above, an excess of Fe3+ is added: 4 Fe3+ + 3 [Fe(CN)6]4− → Fe4[Fe(CN)6]3 Despite the fact that it is prepared from cyanide salts, Prussian blue is not toxic, because the cyanide groups are tightly bound to iron. Both ferrocyanide ([Fe(CN)6]4−) and ferricyanide ([Fe(CN)6]3−) are particularly stable and non-toxic polymeric cyanometalates due to the strong coordination of the cyanide ions to iron. 
Although cyanide binds well to transition metals in general, such as chromium, these non-iron coordination compounds are not as stable as the iron cyanides, increasing the risk of releasing CN− ions and hence their comparative toxicity. Turnbull's blue In former times, the addition of iron(II) salts to a solution of ferricyanide was thought to afford a material different from Prussian blue. The product was traditionally named Turnbull's blue (TB). X-ray diffraction and electron diffraction methods have shown, though, that the structures of PB and TB are identical. The differences in the colors of TB and PB reflect subtle differences in the methods of precipitation, which strongly affect particle size and impurity content. Prussian white Prussian white, also known as Berlin white or Everitt's salt, is the sodium end-member of the totally reduced form of Prussian blue, in which all iron is present as Fe(II). It is a sodium hexacyanoferrate of Fe(II) with the formula Na2Fe[Fe(CN)6], corresponding to a molar mass of about 314 g/mol. A more generic formula allowing for the substitution of Na+ cations by K+ cations is (Na,K)2Fe[Fe(CN)6]. Prussian white is closely related to Prussian blue, but it differs significantly in its crystallographic structure, the pore size of its molecular framework, and its color. The cubic sodium Prussian white, Na2Fe[Fe(CN)6], and potassium Prussian white, K2Fe[Fe(CN)6], are candidates as cathode materials for Na-ion batteries. The insertion of Na+ and K+ cations in the framework of potassium Prussian white provides favorable synergistic effects, improving long-term battery stability and increasing the number of possible recharge cycles, lengthening its service life. The large framework of Prussian white easily accommodates Na+ and K+ cations, facilitating their intercalation and subsequent extraction during the charge/discharge cycles. The spacious and rigid host crystal structure contributes to its volumetric stability against the internal swelling stress and strain that develop in sodium batteries after many cycles. The material also offers the prospect of high specific capacities (Ah/kg) while providing a high recharge rate, even at low temperature. Properties Prussian blue is a microcrystalline blue powder. It is insoluble, but the crystallites tend to form a colloid. Such colloids can pass through fine filters. Despite being one of the oldest known synthetic compounds, the composition of Prussian blue remained uncertain for many years. Its precise identification was complicated by three factors: Prussian blue is extremely insoluble, but also tends to form colloids Traditional syntheses tend to afford impure compositions Even pure Prussian blue is structurally complex, defying routine crystallographic analysis Crystal structure The chemical formula of insoluble Prussian blue is Fe4[Fe(CN)6]3·xH2O, where x = 14–16. The structure was determined using IR spectroscopy, Mössbauer spectroscopy, X-ray crystallography and neutron crystallography. Since X-ray diffraction cannot easily distinguish carbon from nitrogen in the presence of heavier elements such as iron, the location of these lighter elements is deduced by spectroscopic means, as well as by observing the distances from the iron atom centers. Neutron diffraction can easily distinguish N and C atoms, and it has been used to determine the detailed structure of Prussian blue and its analogs. PB has a face-centered cubic lattice structure, with four iron(III) ions per unit cell. "Soluble" PB crystals contain interstitial K+ ions; insoluble PB has interstitial water instead. 
In ideal insoluble PB crystals, the cubic framework is built from Fe(II)–C–N–Fe(III) sequences, with Fe(II)–carbon distances of 1.92 Å and Fe(III)–nitrogen distances of 2.03 Å. One-quarter of the [Fe(CN)6] subunit sites (supposedly at random) are vacant, leaving three such groups on average per unit cell. The empty nitrogen sites are filled with water molecules instead, which are coordinated to Fe(III). The Fe(II) centers, which are low-spin, are surrounded by six carbon ligands in an octahedral configuration. The Fe(III) centers, which are high-spin, are octahedrally surrounded on average by 4.5 nitrogen atoms and 1.5 oxygen atoms (the oxygen from the six coordinated water molecules). Around eight (interstitial) water molecules are present in the unit cell, either as isolated molecules or hydrogen-bonded to the coordinated water. It is worth noting that in soluble hexacyanoferrates Fe(II or III) is always coordinated to the carbon atom of a cyanide, whereas in crystalline Prussian blue the Fe ions are coordinated to both C and N. The composition is notoriously variable due to the presence of lattice defects, allowing it to be hydrated to various degrees as water molecules are incorporated into the structure to occupy cation vacancies. The variability of Prussian blue's composition is attributable to its low solubility, which leads to its rapid precipitation without the time to achieve full equilibrium between solid and liquid. Color Prussian blue is strongly colored and tends towards black and dark blue when mixed into oil paints. The exact hue depends on the method of preparation, which dictates the particle size. The intense blue color of Prussian blue is associated with the energy of the transfer of electrons from Fe(II) to Fe(III). Many such mixed-valence compounds absorb certain wavelengths of visible light as a result of intervalence charge transfer. In this case, orange-red light around 680 nanometers in wavelength is absorbed, and the reflected light appears blue as a result. Like most high-chroma pigments, Prussian blue cannot be accurately displayed on a computer display. Prussian blue is electrochromic—changing from blue to colorless upon reduction. This change is caused by reduction of the Fe(III) to Fe(II), eliminating the intervalence charge transfer that causes Prussian blue's color. Use Pigment Because it is easily made, cheap, nontoxic and intensely colored, Prussian blue has attracted many applications. It was adopted as a pigment very soon after its invention and was almost immediately widely used in oil paints, watercolor and dyeing. The dominant uses are as pigments: about 12,000 tonnes of Prussian blue are produced annually for use in black and bluish inks. A variety of other pigments also contain the material, among them engineer's blue and the pigment formed on cyanotypes, which gives them their common name, blueprints. Certain crayons were once colored with Prussian blue (later relabeled midnight blue). Similarly, Prussian blue is the basis for laundry bluing. Nanoparticles of Prussian blue are used as pigments in some cosmetics ingredients, according to the European Union Observatory for Nanomaterials. Medicine Prussian blue's ability to incorporate monovalent metallic cations makes it useful as a sequestering agent for certain toxic heavy metals. Pharmaceutical-grade Prussian blue in particular is used for people who have ingested thallium (Tl+) or radioactive caesium (137Cs+). 
According to the International Atomic Energy Agency (IAEA), an adult male can eat at least 10 g of Prussian blue per day without serious harm. The U.S. Food and Drug Administration (FDA) has determined that "500-mg Prussian blue capsules, when manufactured under the conditions of an approved New Drug Application, can be found safe and effective therapy" in certain poisoning cases. Radiogardase (Prussian blue insoluble capsules) is a commercial product for the removal of caesium-137 from the intestine, and so indirectly from the bloodstream, by intervening in the enterohepatic circulation of caesium-137, reducing the internal residency time (and exposure) by about two-thirds. In particular, it was used to adsorb and remove caesium-137 from those poisoned in the Goiânia accident in Brazil. Stain for iron Prussian blue is a common histopathology stain used by pathologists to detect the presence of iron in biopsy specimens, such as bone marrow samples. The original stain formula, known historically (1867) as "Perls Prussian blue" after its inventor, the German pathologist Max Perls (1843–1881), used separate solutions of potassium ferrocyanide and acid to stain tissue (these are now used combined, just before staining). Iron deposits in tissue then form the Prussian blue dye in place and are visualized as blue or purple deposits. By machinists and toolmakers Engineer's blue, Prussian blue in an oily base, is the traditional material used for spotting metal surfaces such as surface plates and bearings for hand scraping. A thin layer of non-drying paste is applied to a reference surface and transfers to the high spots of the workpiece. The toolmaker then scrapes, stones, or otherwise removes the marked high spots. Prussian blue is preferable because it will not abrade the extremely precise reference surfaces as many ground pigments may. Other uses include marking gear teeth during assembly to determine their interface characteristics. In analytical chemistry Prussian blue is formed in the Prussian blue assay for total phenols. Samples and phenolic standards are treated with acidic ferric chloride and ferricyanide, which is reduced to ferrocyanide by the phenols. The ferric chloride and ferrocyanide react to form Prussian blue. Comparing the absorbance of the samples at 700 nm to that of the standards allows the determination of total phenols or polyphenols. Household use Prussian blue is present in some preparations of laundry bluing, such as Mrs. Stewart's Bluing. Research Battery materials Prussian blue (PB) has been studied for its applications in electrochemical energy storage since 1978. Prussian blue proper (the Fe–Fe solid) shows two well-defined reversible redox transitions in K+ solutions. Weakly solvated potassium ions (as well as Rb+ and Cs+) have solvated radii that fit the framework of Prussian blue. On the other hand, solvated Na+ and Li+ are too large for the PB cavity, and the intercalation of these ions is hindered and much slower. The low- and high-voltage sets of peaks in cyclic voltammetry correspond to 1 and ⅔ electron per Fe atom, respectively. The high-voltage set is due to the transition at the low-spin Fe ions coordinated to C atoms. The low-voltage set is due to the high-spin Fe ions coordinated to N atoms. It is possible to replace the Fe metal centers in PB with other metal ions such as Mn, Co, Ni and Zn to form electrochemically active Prussian blue analogues (PBAs). 
PB/PBAs and their derivatives have also been evaluated as electrode materials for reversible alkali-ion insertion and extraction in lithium-ion, sodium-ion and potassium-ion batteries.
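The appeal of these frameworks as electrodes can be illustrated with Faraday's law: the theoretical gravimetric capacity of an insertion material is Q = nF/(3.6M) in mAh/g, for n electrons exchanged per formula unit of molar mass M. The sketch below applies this to the sodium Prussian white composition given earlier, assuming (for illustration only) that both Na+ ions, and hence two electrons per formula unit, are exchanged.

```python
# Illustrative calculation: theoretical gravimetric capacity of sodium
# Prussian white, Na2Fe[Fe(CN)6], assuming both Na+ ions (two electrons
# per formula unit) are exchanged on charge/discharge.
F = 96485.0  # Faraday constant, C/mol

ATOMIC_MASS = {"Na": 22.990, "Fe": 55.845, "C": 12.011, "N": 14.007}

def molar_mass(counts):
    """Molar mass (g/mol) from a dict of element counts."""
    return sum(ATOMIC_MASS[el] * n for el, n in counts.items())

def capacity_mah_per_g(n_electrons, m_gmol):
    """Theoretical capacity: n*F coulombs per mole -> mAh/g (1 mAh = 3.6 C)."""
    return n_electrons * F / (3.6 * m_gmol)

m = molar_mass({"Na": 2, "Fe": 2, "C": 6, "N": 6})
print(f"M ~ {m:.1f} g/mol")                        # ~313.8 g/mol
print(f"Q ~ {capacity_mah_per_g(2, m):.0f} mAh/g") # ~171 mAh/g
```

The resulting figure of roughly 170 mAh/g is a theoretical ceiling; practical capacities depend on vacancies, hydration and cycling conditions.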
Physical sciences
Cyanide salts
Chemistry
182783
https://en.wikipedia.org/wiki/Stratigraphy
Stratigraphy
Stratigraphy is a branch of geology concerned with the study of rock layers (strata) and layering (stratification). It is primarily used in the study of sedimentary and layered volcanic rocks. Stratigraphy has three related subfields: lithostratigraphy (lithologic stratigraphy), biostratigraphy (biologic stratigraphy), and chronostratigraphy (stratigraphy by age). Historical development Catholic priest Nicholas Steno established the theoretical basis for stratigraphy when he introduced the law of superposition, the principle of original horizontality and the principle of lateral continuity in a 1669 work on the fossilization of organic remains in layers of sediment. The first practical large-scale application of stratigraphy was by William Smith in the 1790s and early 19th century. Known as the "Father of English geology", Smith recognized the significance of strata or rock layering and the importance of fossil markers for correlating strata; he created the first geologic map of England. Other influential applications of stratigraphy in the early 19th century were by Georges Cuvier and Alexandre Brongniart, who studied the geology of the region around Paris. Lithostratigraphy Variation in rock units, most obviously displayed as visible layering, is due to physical contrasts in rock type (lithology). This variation can occur vertically as layering (bedding), or laterally, and reflects changes in environments of deposition (known as facies change). These variations provide a lithostratigraphy or lithologic stratigraphy of the rock unit. Key concepts in stratigraphy involve understanding how certain geometric relationships between rock layers arise and what these geometries imply about their original depositional environment. The basic concept in stratigraphy, called the law of superposition, states: in an undeformed stratigraphic sequence, the oldest strata occur at the base of the sequence. Chemostratigraphy studies the changes in the relative proportions of trace elements and isotopes within and between lithologic units. Carbon and oxygen isotope ratios vary with time, and researchers can use those to map subtle changes that occurred in the paleoenvironment. This has led to the specialized field of isotopic stratigraphy. Cyclostratigraphy documents the often cyclic changes in the relative proportions of minerals (particularly carbonates), grain size, thickness of sediment layers (varves) and fossil diversity with time, related to seasonal or longer term changes in palaeoclimates. Biostratigraphy Biostratigraphy or paleontologic stratigraphy is based on fossil evidence in the rock layers. Strata from widespread locations containing the same fossil fauna and flora are said to be correlatable in time. Biologic stratigraphy was based on William Smith's principle of faunal succession, which predated, and was one of the first and most powerful lines of evidence for, biological evolution. It provides strong evidence for the formation (speciation) and extinction of species. The geologic time scale was developed during the 19th century, based on the evidence of biologic stratigraphy and faunal succession. This timescale remained a relative scale until the development of radiometric dating, which was based on an absolute time framework, leading to the development of chronostratigraphy. One important development is the Vail curve, which attempts to define a global historical sea-level curve according to inferences from worldwide stratigraphic patterns. 
Stratigraphy is also commonly used to delineate the nature and extent of hydrocarbon-bearing reservoir rocks, seals and traps in petroleum geology. Chronostratigraphy Chronostratigraphy is the branch of stratigraphy that places an absolute age, rather than a relative age, on rock strata. The branch is concerned with deriving geochronological data for rock units, both directly and inferentially, so that a sequence of time-relative events that created the rock formations can be derived. The ultimate aim of chronostratigraphy is to place dates on the sequence of deposition of all rocks within a geological region, then within every region, and by extension to provide an entire geologic record of the Earth. A gap or missing strata in the geological record of an area is called a stratigraphic hiatus. This may be the result of a halt in the deposition of sediment. Alternatively, the gap may be due to removal by erosion, in which case it may be called a stratigraphic vacuity. It is called a hiatus because deposition was on hold for a period of time. A physical gap may represent both a period of non-deposition and a period of erosion. A geologic fault may cause the appearance of a hiatus. Magnetostratigraphy Magnetostratigraphy is a chronostratigraphic technique used to date sedimentary and volcanic sequences. The method works by collecting oriented samples at measured intervals throughout a section. The samples are analyzed to determine their detrital remanent magnetism (DRM), that is, the polarity of Earth's magnetic field at the time a stratum was deposited. For sedimentary rocks this is possible because, as they fall through the water column, very fine-grained magnetic minerals (< 17 μm) behave like tiny compasses, orienting themselves with Earth's magnetic field. Upon burial, that orientation is preserved. For volcanic rocks, magnetic minerals, which form in the melt, orient themselves with the ambient magnetic field and are fixed in place upon crystallization of the lava. Oriented paleomagnetic core samples are collected in the field; mudstones, siltstones and very fine-grained sandstones are the preferred lithologies because their magnetic grains are finer and more likely to orient with the ambient field during deposition. If the ancient magnetic field was oriented similarly to today's field (North Magnetic Pole near the North Rotational Pole), the strata retain a normal polarity. If the data indicate that the North Magnetic Pole was near the South Rotational Pole, the strata exhibit reversed polarity. Results from the individual samples are analyzed by removing the natural remanent magnetization (NRM) to reveal the DRM. Following statistical analysis, the results are used to generate a local magnetostratigraphic column that can then be compared against the Geomagnetic Polarity Time Scale. This technique is used to date sequences that generally lack fossils or interbedded igneous rocks. The continuous nature of the sampling means that it is also a powerful technique for the estimation of sediment-accumulation rates.
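As an illustration of that last point, once two reversal boundaries in a section have been matched to dated polarity chrons, the mean accumulation rate between them is simply thickness divided by duration. The sketch below uses hypothetical depths; the two ages are illustrative values taken as the interpolated ages of the matched reversals.

```python
# Hypothetical sketch: mean sediment-accumulation rate between two
# magnetozone boundaries whose ages come from the polarity time scale.

def accumulation_rate(depth_top_m, depth_base_m, age_top_ma, age_base_ma):
    """Mean rate in m/Myr between two dated horizons in a section.
    Depths in metres (base deeper than top), ages in millions of years."""
    thickness = depth_base_m - depth_top_m
    duration = age_base_ma - age_top_ma
    return thickness / duration

# Illustrative numbers only: two reversal boundaries 120 m apart in the
# section, with assigned ages of 2.58 Ma and 3.60 Ma.
rate = accumulation_rate(35.0, 155.0, 2.58, 3.60)
print(f"mean accumulation rate ~ {rate:.0f} m/Myr")   # ~118 m/Myr
```

Real studies interpolate between many such tie points and must allow for compaction and hiatuses, but the underlying arithmetic is this simple.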
Physical sciences
Geology
null
182787
https://en.wikipedia.org/wiki/Fault%20%28geology%29
Fault (geology)
In geology, a fault is a planar fracture or discontinuity in a volume of rock across which there has been significant displacement as a result of rock-mass movements. Large faults within Earth's crust result from the action of plate tectonic forces, with the largest forming the boundaries between the plates, such as the megathrust faults of subduction zones or transform faults. Energy release associated with rapid movement on active faults is the cause of most earthquakes. Faults may also displace slowly, by aseismic creep. A fault plane is the plane that represents the fracture surface of a fault. A fault trace or fault line is a place where the fault can be seen or mapped on the surface. A fault trace is also the line commonly plotted on geologic maps to represent a fault. A fault zone is a cluster of parallel faults. However, the term is also used for the zone of crushed rock along a single fault. Prolonged motion along closely spaced faults can blur the distinction, as the rock between the faults is converted to fault-bound lenses of rock and then progressively crushed. Mechanisms of faulting Owing to friction and the rigidity of the constituent rocks, the two sides of a fault cannot always glide or flow past each other easily, and so occasionally all movement stops. The regions of higher friction along a fault plane, where it becomes locked, are called asperities. Stress builds up when a fault is locked, and when it reaches a level that exceeds the strength threshold, the fault ruptures and the accumulated strain energy is released in part as seismic waves, forming an earthquake. Strain occurs accumulatively or instantaneously, depending on the rheology of the rock; the ductile lower crust and mantle accumulate deformation gradually via shearing, whereas the brittle upper crust reacts by fracture – instantaneous stress release – resulting in motion along the fault. A fault in ductile rocks can also release instantaneously when the strain rate is too great. Slip, heave, throw Slip is defined as the relative movement of geological features present on either side of a fault plane. A fault's sense of slip is defined as the relative motion of the rock on each side of the fault with respect to the other side. In measuring the horizontal or vertical separation, the throw of the fault is the vertical component of the separation and the heave of the fault is the horizontal component, as in "Throw up and heave out" (see the short worked example below). The vector of slip can be qualitatively assessed by studying any drag folding of strata, which may be visible on either side of the fault. Drag folding is a zone of folding close to a fault that likely arises from frictional resistance to movement on the fault. The direction and magnitude of heave and throw can be measured only by finding common intersection points on either side of the fault (called a piercing point). In practice, it is usually only possible to find the slip direction of faults, and an approximation of the heave and throw vector. Hanging wall and footwall The two sides of a non-vertical fault are known as the hanging wall and footwall. The hanging wall occurs above the fault plane and the footwall occurs below it. This terminology comes from mining: when working a tabular ore body, the miner stood with the footwall under his feet and with the hanging wall above him. These terms are important for distinguishing different dip-slip fault types: reverse faults and normal faults. 
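As flagged above, throw and heave are simply the vertical and horizontal components of the dip-slip displacement, so both follow from the net slip and the fault's dip by elementary trigonometry. The sketch below is illustrative only; the function name and values are hypothetical.

```python
# Minimal sketch: throw (vertical) and heave (horizontal) components of
# a pure dip-slip displacement, given the net slip and the fault dip.
import math

def throw_and_heave(slip_m, dip_deg):
    """Decompose a dip-slip displacement of slip_m metres on a fault
    dipping at dip_deg degrees into throw and heave components."""
    dip = math.radians(dip_deg)
    throw = slip_m * math.sin(dip)   # vertical separation
    heave = slip_m * math.cos(dip)   # horizontal separation
    return throw, heave

# Example: 10 m of dip slip on a normal fault dipping 60 degrees.
t, h = throw_and_heave(10.0, 60.0)
print(f"throw ~ {t:.1f} m, heave ~ {h:.1f} m")  # throw ~ 8.7 m, heave ~ 5.0 m
```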
In a reverse fault, the hanging wall displaces upward, while in a normal fault the hanging wall displaces downward. Distinguishing between these two fault types is important for determining the stress regime of the fault movement. The problem of the hanging wall can lead to severe stresses and rock bursts, for example at Frood Mine. Fault types Faults are mainly classified in terms of the angle that the fault plane makes with the Earth's surface, known as the dip, and the direction of slip along the fault plane. Based on the direction of slip, faults can be categorized as: strike-slip, where the offset is predominantly horizontal, parallel to the fault trace; dip-slip, where the offset is predominantly vertical and/or perpendicular to the fault trace; or oblique-slip, combining strike-slip and dip-slip. Strike-slip faults In a strike-slip fault (also known as a wrench fault, tear fault or transcurrent fault), the fault surface (plane) is usually near vertical, and the footwall moves laterally either left or right with very little vertical motion. Strike-slip faults with left-lateral motion are also known as sinistral faults and those with right-lateral motion as dextral faults. Each is defined by the direction of movement of the ground as would be seen by an observer on the opposite side of the fault. A special class of strike-slip fault is the transform fault, where it forms a plate boundary. This class is related to an offset in a spreading center, such as a mid-ocean ridge, or, less commonly, within continental lithosphere, such as the Dead Sea Transform in the Middle East or the Alpine Fault in New Zealand. Transform faults are also referred to as "conservative" plate boundaries, since the lithosphere is neither created nor destroyed. Dip-slip faults Dip-slip faults can be either normal ("extensional") or reverse. The terminology of "normal" and "reverse" comes from coal mining in England, where normal faults are the most common. With the passage of time, a regional reversal between tensional and compressional stresses might occur, and faults may be reactivated with the relative movement of their blocks inverted in the direction opposite to the original movement (fault inversion). In such a way, a normal fault may become a reverse fault, and vice versa. Normal faults In a normal fault, the hanging wall moves downward relative to the footwall. The dip of most normal faults is at least 60 degrees, but some normal faults dip at less than 45 degrees. Basin and range topography A downthrown block between two normal faults dipping towards each other is a graben. A block stranded between two grabens, and therefore two normal faults dipping away from each other, is a horst. A sequence of grabens and horsts on the surface of the Earth produces a characteristic basin and range topography. Listric faults A listric fault is a type of normal fault that has a concave-upward shape, with the upper section near Earth's surface being steeper and the fault becoming more horizontal with increased depth. Normal faults can evolve into listric faults, with the fault plane curving into the Earth. They can also form where the hanging wall is absent (such as on a cliff), where the footwall may slump in a manner that creates multiple listric faults. Detachment faults The fault planes of listric faults can further flatten and evolve into a horizontal or near-horizontal plane, where slip progresses horizontally along a decollement. 
Extensional decollements can grow to great dimensions and form detachment faults, which are low-angle normal faults with regional tectonic significance. Owing to the curvature of the fault plane, the horizontal extensional displacement on a listric fault implies that a geometric "gap" between the hanging wall and footwall of the fault forms as the slip motion occurs. To accommodate the geometric gap, and depending on its rheology, the hanging wall might fold and slide downwards into the gap to produce rollover folding, or break into further faults and blocks which fill in the gap. If further faults form, imbrication fans or domino faulting may result. Reverse faults A reverse fault is the opposite of a normal fault—the hanging wall moves up relative to the footwall. Reverse faults indicate compressive shortening of the crust. Thrust faults A thrust fault has the same sense of motion as a reverse fault, but with the dip of the fault plane at less than 45°. Thrust faults typically form ramps, flats and fault-bend (hanging wall and footwall) folds. A section of a hanging wall or footwall where a thrust fault formed along a relatively weak bedding plane is known as a flat, and a section where the thrust fault cut upward through the stratigraphic sequence is known as a ramp. Typically, thrust faults move within formations by forming flats and climbing up sections with ramps. This results in the hanging-wall flat (or a portion thereof) lying atop the footwall ramp, as shown in the fault-bend fold diagram. Thrust faults form nappes and klippen in the large thrust belts. Subduction zones are a special class of thrusts that form the largest faults on Earth and give rise to the largest earthquakes. Oblique-slip faults A fault which has a component of dip-slip and a component of strike-slip is termed an oblique-slip fault. Nearly all faults have some component of both dip-slip and strike-slip; hence, defining a fault as oblique requires both dip and strike components to be measurable and significant. Some oblique faults occur within transtensional and transpressional regimes, and others occur where the direction of extension or shortening changes during the deformation but the earlier-formed faults remain active. The hade angle is defined as the complement of the dip angle; it is the angle between the fault plane and a vertical plane that strikes parallel to the fault. Ring fault Ring faults, also known as caldera faults, are faults that occur within collapsed volcanic calderas and at the sites of bolide strikes, such as the Chesapeake Bay impact crater. Ring faults are the result of a series of overlapping normal faults, forming a circular outline. Fractures created by ring faults may be filled by ring dikes. Synthetic and antithetic faults Synthetic and antithetic are terms used to describe minor faults associated with a major fault. Synthetic faults dip in the same direction as the major fault, while antithetic faults dip in the opposite direction. These faults may be accompanied by rollover anticlines (e.g. the Niger Delta Structural Style). Fault rock All faults have a measurable thickness, made up of deformed rock characteristic of the level in the crust where the faulting happened, of the rock types affected by the fault, and of the presence and nature of any mineralising fluids. Fault rocks are classified by their textures and the implied mechanism of deformation. A fault that passes through different levels of the lithosphere will have many different types of fault rock developed along its surface. 
Continued dip-slip displacement tends to juxtapose fault rocks characteristic of different crustal levels, with varying degrees of overprinting. This effect is particularly clear in the case of detachment faults and major thrust faults. The main types of fault rock include:
Cataclasite – a fault rock which is cohesive with a poorly developed or absent planar fabric, or which is incohesive, characterised by generally angular clasts and rock fragments in a finer-grained matrix of similar composition.
Tectonic or fault breccia – a medium- to coarse-grained cataclasite containing >30% visible fragments.
Fault gouge – an incohesive, clay-rich, fine- to ultrafine-grained cataclasite, which may possess a planar fabric and contains <30% visible fragments. Rock clasts may be present.
Clay smear – clay-rich fault gouge formed in sedimentary sequences containing clay-rich layers which are strongly deformed and sheared into the fault gouge.
Mylonite – a fault rock which is cohesive and characterized by a well-developed planar fabric resulting from tectonic reduction of grain size, and commonly containing rounded porphyroclasts and rock fragments of similar composition to minerals in the matrix.
Pseudotachylyte – ultrafine-grained, glassy-looking material, usually black and flinty in appearance, occurring as thin planar veins, injection veins or as a matrix to pseudoconglomerates or breccias, which infills dilation fractures in the host rock. Pseudotachylyte likely only forms at seismic slip rates and can therefore serve as an indicator of past slip rate on inactive faults.
Impacts on structures and people In geotechnical engineering, a fault often forms a discontinuity that may have a large influence on the mechanical behavior (strength, deformation, etc.) of soil and rock masses in, for example, tunnel, foundation, or slope construction. The level of a fault's activity can be critical for (1) locating buildings, tanks, and pipelines and (2) assessing the seismic shaking and tsunami hazard to infrastructure and people in the vicinity. In California, for example, new building construction has been prohibited directly on or near faults that have moved within the Holocene Epoch (the last 11,700 years) of the Earth's geological history. Also, faults that have shown movement during the Holocene plus Pleistocene Epochs (the last 2.6 million years) may receive consideration, especially for critical structures such as power plants, dams, hospitals, and schools. Geologists assess a fault's age by studying soil features seen in shallow excavations and geomorphology seen in aerial photographs. Subsurface clues include shears and their relationships to carbonate nodules, eroded clay, and iron oxide mineralization, in the case of older soil, and the lack of such signs in the case of younger soil. Radiocarbon dating of organic material buried next to or over a fault shear is often critical in distinguishing active from inactive faults. From such relationships, paleoseismologists can estimate the sizes of past earthquakes over the past several hundred years, and develop rough projections of future fault activity. Faults and ore deposits Many ore deposits lie on or are associated with faults. This is because the fractured rock associated with fault zones allows for magma ascent or the circulation of mineral-bearing fluids. Intersections of near-vertical faults are often locations of significant ore deposits.
An example of a fault hosting valuable porphyry copper deposits is northern Chile's Domeyko Fault, with deposits at Chuquicamata, Collahuasi, El Abra, El Salvador, La Escondida and Potrerillos. Further south in Chile, the Los Bronces and El Teniente porphyry copper deposits each lie at the intersection of two fault systems. Faults may not always act as conduits to the surface. It has been proposed that deep-seated "misoriented" faults may instead be zones where magmas forming porphyry copper deposits stagnate, allowing the right timing and type of igneous differentiation. At some point, the differentiated magmas would burst violently out of these fault traps and rise to shallower levels in the crust, where porphyry copper deposits would be formed. Groundwater As faults are zones of weakness, they facilitate the interaction of water with the surrounding rock and enhance chemical weathering. The enhanced chemical weathering increases the size of the weathered zone and hence creates more space for groundwater. Fault zones act as aquifers and also assist groundwater transport.
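Returning to the fault-rock textural criteria listed earlier, a deliberately simplified sorting of a described sample might look as follows. It uses only cohesion, planar fabric, clay content and the 30% visible-fragment threshold, ignores pseudotachylyte and clay smears entirely, and the field names are assumptions rather than an established scheme.

```python
def classify_fault_rock(cohesive, planar_fabric, visible_fragments_pct, clay_rich=False):
    """Very simplified sorting of a fault rock by the textural criteria above."""
    if cohesive and planar_fabric:
        return "mylonite"                      # cohesive, well-developed planar fabric
    if visible_fragments_pct > 30:
        return "tectonic (fault) breccia"      # medium- to coarse-grained cataclasite
    if not cohesive and clay_rich:
        return "fault gouge"                   # incohesive, clay-rich, <30% fragments
    return "cataclasite"                       # poorly developed or absent planar fabric

print(classify_fault_rock(cohesive=False, planar_fabric=False,
                          visible_fragments_pct=10, clay_rich=True))  # -> fault gouge
```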
Physical sciences
Geology
null
182890
https://en.wikipedia.org/wiki/Kronecker%20delta
Kronecker delta
In mathematics, the Kronecker delta (named after Leopold Kronecker) is a function of two variables, usually just non-negative integers. The function is 1 if the variables are equal, and 0 otherwise: or with use of Iverson brackets: For example, because , whereas because . The Kronecker delta appears naturally in many areas of mathematics, physics, engineering and computer science, as a means of compactly expressing its definition above. In linear algebra, the identity matrix has entries equal to the Kronecker delta: where and take the values , and the inner product of vectors can be written as Here the Euclidean vectors are defined as -tuples: and and the last step is obtained by using the values of the Kronecker delta to reduce the summation over . It is common for and to be restricted to a set of the form or , but the Kronecker delta can be defined on an arbitrary set. Properties The following equations are satisfied: Therefore, the matrix can be considered as an identity matrix. Another useful representation is the following form: This can be derived using the formula for the geometric series. Alternative notation Using the Iverson bracket: Often, a single-argument notation is used, which is equivalent to setting : In linear algebra, it can be thought of as a tensor, and is written . Sometimes the Kronecker delta is called the substitution tensor. Digital signal processing In the study of digital signal processing (DSP), the unit sample function represents a special case of a 2-dimensional Kronecker delta function where the Kronecker indices include the number zero, and where one of the indices is zero. In this case: Or more generally where: However, this is only a special case. In tensor calculus, it is more common to number basis vectors in a particular dimension starting with index 1, rather than index 0. In this case, the relation does not exist, and in fact, the Kronecker delta function and the unit sample function are different functions that overlap in the specific case where the indices include the number 0, the number of indices is 2, and one of the indices has the value of zero. While the discrete unit sample function and the Kronecker delta function use the same letter, they differ in the following ways. For the discrete unit sample function, it is more conventional to place a single integer index in square braces; in contrast the Kronecker delta can have any number of indexes. Further, the purpose of the discrete unit sample function is different from the Kronecker delta function. In DSP, the discrete unit sample function is typically used as an input function to a discrete system for discovering the system function of the system which will be produced as an output of the system. In contrast, the typical purpose of the Kronecker delta function is for filtering terms from an Einstein summation convention. The discrete unit sample function is more simply defined as: In addition, the Dirac delta function is often confused for both the Kronecker delta function and the unit sample function. The Dirac delta is defined as: Unlike the Kronecker delta function and the unit sample function , the Dirac delta function does not have an integer index, it has a single continuous non-integer value . To confuse matters more, the unit impulse function is sometimes used to refer to either the Dirac delta function , or the unit sample function . 
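For concreteness, the standard forms of the relations this passage describes (the two-variable definition, the Iverson-bracket form, the identity-matrix entries and the inner-product reduction) can be written as below; this is a reconstruction using the usual conventions rather than a quotation.

```latex
% Two-variable definition and Iverson-bracket form
\delta_{ij} \;=\;
\begin{cases}
  1 & \text{if } i = j,\\
  0 & \text{if } i \neq j,
\end{cases}
\qquad
\delta_{ij} \;=\; [\, i = j \,].

% Identity matrix entries and the reduction of the Euclidean inner product
(\mathbf{I}_n)_{ij} \;=\; \delta_{ij},
\qquad
\mathbf{x}\cdot\mathbf{y}
  \;=\; \sum_{i,j=1}^{n} x_i\,\delta_{ij}\,y_j
  \;=\; \sum_{i=1}^{n} x_i\,y_i .
```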
Notable properties The Kronecker delta has the so-called sifting property that for : and if the integers are viewed as a measure space, endowed with the counting measure, then this property coincides with the defining property of the Dirac delta function and in fact Dirac's delta was named after the Kronecker delta because of this analogous property. In signal processing it is usually the context (discrete or continuous time) that distinguishes the Kronecker and Dirac "functions". And by convention, generally indicates continuous time (Dirac), whereas arguments like , , , , , and are usually reserved for discrete time (Kronecker). Another common practice is to represent discrete sequences with square brackets; thus: . The Kronecker delta is not the result of directly sampling the Dirac delta function. The Kronecker delta forms the multiplicative identity element of an incidence algebra. Relationship to the Dirac delta function In probability theory and statistics, the Kronecker delta and Dirac delta function can both be used to represent a discrete distribution. If the support of a distribution consists of points , with corresponding probabilities , then the probability mass function of the distribution over can be written, using the Kronecker delta, as Equivalently, the probability density function of the distribution can be written using the Dirac delta function as Under certain conditions, the Kronecker delta can arise from sampling a Dirac delta function. For example, if a Dirac delta impulse occurs exactly at a sampling point and is ideally lowpass-filtered (with cutoff at the critical frequency) per the Nyquist–Shannon sampling theorem, the resulting discrete-time signal will be a Kronecker delta function. Generalizations If it is considered as a type tensor, the Kronecker tensor can be written with a covariant index and contravariant index : This tensor represents: The identity mapping (or identity matrix), considered as a linear mapping or The trace or tensor contraction, considered as a mapping The map , representing scalar multiplication as a sum of outer products. The or multi-index Kronecker delta of order is a type tensor that is completely antisymmetric in its upper indices, and also in its lower indices. Two definitions that differ by a factor of are in use. Below, the version is presented has nonzero components scaled to be . The second version has nonzero components that are , with consequent changes scaling factors in formulae, such as the scaling factors of in below disappearing. Definitions of the generalized Kronecker delta In terms of the indices, the generalized Kronecker delta is defined as: Let be the symmetric group of degree , then: Using anti-symmetrization: In terms of a determinant: Using the Laplace expansion (Laplace's formula) of determinant, it may be defined recursively: where the caron, , indicates an index that is omitted from the sequence. When (the dimension of the vector space), in terms of the Levi-Civita symbol: More generally, for , using the Einstein summation convention: Contractions of the generalized Kronecker delta Kronecker Delta contractions depend on the dimension of the space. For example, where is the dimension of the space. 
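In the usual notation, the sifting property and the discrete-distribution representations discussed above read as follows, with the x_i and p_i denoting the support points and probabilities mentioned in the text; again these are the standard forms rather than quotations.

```latex
% Sifting property over the integers, and its Dirac-delta analogue
\sum_{i=-\infty}^{\infty} a_i\,\delta_{ij} \;=\; a_j,
\qquad
\int_{-\infty}^{\infty} f(x)\,\delta(x - x_0)\,\mathrm{d}x \;=\; f(x_0).

% Discrete distribution on support points x_1,\dots,x_n with probabilities p_1,\dots,p_n
f(x) \;=\; \sum_{i=1}^{n} p_i\,\delta_{x,\,x_i}
\quad\text{(probability mass function)},
\qquad
f(x) \;=\; \sum_{i=1}^{n} p_i\,\delta(x - x_i)
\quad\text{(density written with Dirac deltas)}.
```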
From this relation the full contracted delta is obtained as The generalization of the preceding formulas is Properties of the generalized Kronecker delta The generalized Kronecker delta may be used for anti-symmetrization: From the above equations and the properties of anti-symmetric tensors, we can derive the properties of the generalized Kronecker delta: which are the generalized version of formulae written in . The last formula is equivalent to the Cauchy–Binet formula. Reducing the order via summation of the indices may be expressed by the identity Using both the summation rule for the case and the relation with the Levi-Civita symbol, the summation rule of the Levi-Civita symbol is derived: The 4D version of the last relation appears in Penrose's spinor approach to general relativity that he later generalized, while he was developing Aitken's diagrams, to become part of the technique of Penrose graphical notation. Also, this relation is extensively used in S-duality theories, especially when written in the language of differential forms and Hodge duals. Integral representations For any integers and , the Kronecker delta can be written as a complex contour integral using a standard residue calculation. The integral is taken over the unit circle in the complex plane, oriented counterclockwise. An equivalent representation of the integral arises by parameterizing the contour by an angle around the origin. The Kronecker comb The Kronecker comb function with period is defined (using DSP notation) as: where and are integers. The Kronecker comb thus consists of an infinite series of unit impulses that are units apart, aligned so one of the impulses occurs at zero. It may be considered to be the discrete analog of the Dirac comb.
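The angle-parameterized version of the contour-integral representation mentioned above reduces the Kronecker delta to the average of a complex exponential over one period, which is easy to check numerically. A minimal NumPy sketch (the sample count is arbitrary):

```python
import numpy as np

def kronecker_via_integral(n, m, samples=4096):
    """(1/2*pi) * integral over [0, 2*pi) of exp(i*(n-m)*theta), as a uniform average."""
    theta = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    return np.mean(np.exp(1j * (n - m) * theta)).real

for n, m in [(3, 3), (5, 2), (-1, -1), (4, -4)]:
    exact = 1 if n == m else 0
    print(f"delta({n},{m}) ~ {kronecker_via_integral(n, m):+.6f}   (exact: {exact})")
```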
Mathematics
Specific functions
null
182945
https://en.wikipedia.org/wiki/Ephedrine
Ephedrine
Ephedrine is a central nervous system (CNS) stimulant and sympathomimetic agent that is often used to prevent low blood pressure during anesthesia. It has also been used for asthma, narcolepsy, and obesity but is not the preferred treatment. It is of unclear benefit in nasal congestion. It can be taken by mouth or by injection into a muscle, vein, or just under the skin. Onset with intravenous use is fast, while injection into a muscle can take 20minutes, and by mouth can take an hour for effect. When given by injection, it lasts about an hour, and when taken by mouth, it can last up to four hours. Common side effects include trouble sleeping, anxiety, headache, hallucinations, high blood pressure, fast heart rate, loss of appetite, and urinary retention. Serious side effects include stroke and heart attack. While probably safe in pregnancy, its use in this population is poorly studied. Use during breastfeeding is not recommended. Ephedrine works by inducing the release of norepinephrine and hence indirectly activating the α- and β-adrenergic receptors. Chemically, ephedrine is a substituted amphetamine and is the (1R,2S)-enantiomer of β-hydroxy-N-methylamphetamine. Ephedrine was first isolated in 1885 and came into commercial use in 1926. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. It can normally be found in plants of the Ephedra genus. Over-the-counter dietary supplements containing ephedrine are illegal in the United States, with the exception of those used in traditional Chinese medicine, where its presence is noted by má huáng. Medical uses Ephedrine is a non-catecholamine sympathomimetic with cardiovascular effects similar to those of adrenaline/epinephrine: increased blood pressure, heart rate, and contractility. Like pseudoephedrine it is a bronchodilator, with pseudoephedrine having considerably less effect. Ephedrine may decrease motion sickness, but it has mainly been used to decrease the sedating effects of other medications used for motion sickness. Ephedrine is also found to have quick and long-lasting responsiveness in congenital myasthenic syndrome in early childhood and also even in adults with a novel COLQ mutation. Ephedrine is administered by intravenous boluses. Redosing usually requires increased doses to offset the development of tachyphylaxis, which is attributed to the depletion of catecholamine stores. Weight loss Ephedrine promotes modest short-term weight loss, specifically fat loss, but its long-term effects are unknown. In mice, ephedrine is known to stimulate thermogenesis in the brown adipose tissue, but because adult humans have only small amounts of brown fat, thermogenesis is assumed to take place mostly in the skeletal muscle. Ephedrine also decreases gastric emptying. Methylxanthines such as caffeine and theophylline have a synergistic effect with ephedrine for weight loss. This led to the creation and marketing of compound products. One of them, known as the ECA stack, contains ephedrine with caffeine and aspirin. It is a popular supplement taken by bodybuilders seeking to cut body fat before a competition. A 2021 systematic review found that ephedrine led to a weight loss greater than placebo, raised heart rate, and reduced LDL and raised HDL, with no statistically significant difference in blood pressure. 
Available forms Ephedrine is available as a prescription-only pharmaceutical drug in the form of an intravenous solution, under brand names including Akovaz, Corphedra, Emerphed, and Rezipres as well as in generic forms, in the United States. It is also available over-the-counter in the form of 12.5 and 25mg oral tablets for use as a bronchodilator and as a 0.5% concentration nasal spray for use as a decongestant. The drug is additionally available in combination with guaifenesin in the form of oral tablets and liquids. Ephedrine is provided as the hydrochloride or sulfate salt in pharmaceutical formulations. Contraindications Ephedrine should not be used in conjunction with certain antidepressants, namely norepinephrine-dopamine reuptake inhibitors (NDRIs), as this increases the risk of symptoms due to excessive serum levels of norepinephrine. Bupropion is an example of an antidepressant with an amphetamine-like structure similar to ephedrine, and it is an NDRI. Its action bears more resemblance to amphetamine than to fluoxetine in that its primary mode of therapeutic action involves norepinephrine and to a lesser degree dopamine, but it also releases some serotonin from presynaptic clefts. It should not be used with ephedrine, as it may increase the likelihood of side effects. Ephedrine should be used with caution in patients with inadequate fluid replacement, impaired adrenal function, hypoxia, hypercapnia, acidosis, hypertension, hyperthyroidism, prostatic hypertrophy, diabetes mellitus, cardiovascular disease, during delivery if maternal blood pressure is >130/80 mmHg, and during lactation. Contraindications for the use of ephedrine include: closed-angle glaucoma, phaeochromocytoma, asymmetric septal hypertrophy (idiopathic hypertrophic subaortic stenosis), concomitant or recent (previous 14 days) monoamine oxidase inhibitor (MAOI) therapy, general anaesthesia with halogenated hydrocarbons (particularly halothane), tachyarrhythmias or ventricular fibrillation, or hypersensitivity to ephedrine or other stimulants. Ephedrine should not be used at any time during pregnancy unless specifically indicated by a qualified physician and only when other options are unavailable. Side effects Ephedrine is a potentially dangerous natural compound; the US Food and Drug Administration had received over 18,000 reports of adverse effects in people using it. Adverse drug reactions (ADRs) are more common with systemic administration (e.g. injection or oral administration) compared to topical administration (e.g. nasal instillations). ADRs associated with ephedrine therapy include Cardiovascular: tachycardia, cardiac arrhythmias, angina pectoris, vasoconstriction with hypertension Dermatological: flushing, sweating, acne vulgaris Gastrointestinal: nausea Genitourinary: decreased urination due to vasoconstriction of renal arteries, difficulty urinating is not uncommon, as alpha-agonists such as ephedrine constrict the internal urethral sphincter, mimicking the effects of sympathetic nervous system stimulation Nervous system: restlessness, confusion, insomnia, mild euphoria, mania/hallucinations (rare except in previously existing psychiatric conditions), delusions, formication (may be possible, but lacks documented evidence) paranoia, hostility, panic, agitation Respiratory: dyspnea, pulmonary edema Miscellaneous: dizziness, headache, tremor, hyperglycemic reactions, dry mouth Overdose Overdose of ephedrine may result in sympathomimetic symptoms like tachycardia and hypertension. 
Interactions Ephedrine with monoamine oxidase inhibitors (MAOIs) like phenelzine and tranylcypromine can result in hypertensive crisis. Pharmacology Pharmacodynamics Ephedrine, a sympathomimetic amine, acts on part of the sympathetic nervous system (SNS). The principal mechanism of action relies on its indirect stimulation of the adrenergic receptor system by increasing activation of α- and β-adrenergic receptors via induction of norepinephrine release. The presence of direct interactions with α-adrenergic receptors is unlikely but still controversial. L-ephedrine, and particularly its stereoisomer norpseudoephedrine (which is also present in Catha edulis) has indirect sympathomimetic effects and due to its ability to cross the blood–brain barrier, it is a CNS stimulant similar to amphetamines, but less pronounced, as it releases norepinephrine and dopamine in the brain. Pharmacokinetics Absorption The oral bioavailability of ephedrine is 88%. The onset of action of ephedrine orally is 15 to 60minutes, via intramuscular injection is 10 to 20minutes, and via intravenous infusion is within seconds. Distribution Its plasma protein binding is approximately 24 to 29%, with 5 to 10% bound to albumin. Metabolism Ephedrine is largely not metabolized. Norephedrine (phenylpropanolamine) is an active metabolite of ephedrine formed via N-demethylation. About 8 to 20% of an oral dose of ephedrine is demethylated into norephedrine, about 4 to 13% is oxidatively deaminated into benzoic acid, and a small fraction is converted into 1,2-dihydroxy-1-phenylpropane. Elimination Ephedrine is eliminated mainly in urine, with 60% (range 53–79%) excreted unchanged. The elimination half-life of ephedrine is 6hours. Its duration of action orally is 2 to 4hours and via intravenous or intramuscular injection is 60minutes. The elimination of ephedrine is dependent on urinary pH. Chemistry Ephedrine, or (−)-(1R,2S)-ephedrine, also known as (1R,2S)-β-hydroxy-N-methyl-α-methyl-β-phenethylamine or as (1R,2S)-β-hydroxy-N-methylamphetamine, is a substituted phenethylamine and amphetamine derivative. It is similar in chemical structure to phenylpropanolamine, methamphetamine, and epinephrine (adrenaline). It differs from methamphetamine only by the presence of a hydroxyl group (–OH). Chemically, ephedrine is an alkaloid with a phenethylamine skeleton found in various plants in the genus Ephedra (family Ephedraceae). It is most usually marketed as the hydrochloride or sulfate salt. It has an experimental log P of 1.13, while its predicted log P values range from 0.9 to 1.32. The lipophilicity of amphetamines is closely related to their brain permeability. For comparison to ephedrine, the experimental log P of methamphetamine is 2.1, of amphetamine is 1.8, of pseudoephedrine is 0.89, of phenylpropanolamine is 0.7, of phenylephrine is -0.3, and of norepinephrine is -1.2. Methamphetamine has high brain permeability, whereas phenylephrine and norepinephrine are peripherally selective drugs. The optimal log P for brain permeation and central activity is about 2.1 (range 1.5–2.7). Ephedrine hydrochloride has a melting point of 187−188°C. The racemic form of ephedrine is racephedrine ((±)-ephedrine; dl-ephedrine; (1RS,2SR)-ephedrine). A stereoisomer of ephedrine is pseudoephedrine. Derivatives of ephedrine include methylephedrine (N-methylephedrine), etafedrine (N-ethylephedrine), cinnamedrine (N-cinnamylephedrine), and oxilofrine (4-hydroxyephedrine). 
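As a simple arithmetic illustration of the 6-hour elimination half-life quoted above, and assuming ordinary first-order (exponential) elimination (an assumption made for this sketch, not a statement from the text), the fraction of a dose remaining after t hours is 0.5 raised to the power t/6:

```python
def fraction_remaining(hours, half_life_h=6.0):
    """Fraction of drug remaining after `hours`, assuming first-order elimination."""
    return 0.5 ** (hours / half_life_h)

for t in (6, 12, 24):
    print(f"after {t:>2} h: {fraction_remaining(t):.3f} of the dose remains")
# after  6 h: 0.500 / after 12 h: 0.250 / after 24 h: 0.062
```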
Analogues of ephedrine include phenylpropanolamine (norephedrine) and metaraminol (3-hydroxynorephedrine). The presence of an N-methyl group decreases binding affinities at α-adrenergic receptors, compared with norephedrine. Ephedrine, though, binds better than N-methylephedrine, which has an additional methyl group at the nitrogen atom. Also, the steric orientation of the hydroxyl group is important for receptor binding and functional activity. Nomenclature Ephedrine exhibits optical isomerism and has two chiral centres, giving rise to four stereoisomers. By convention, the pair of enantiomers with the stereochemistry (1R,2S) and (1S,2R) is designated ephedrine, while the pair of enantiomers with the stereochemistry (1R,2R) and (1S,2S) is called pseudoephedrine. The isomer which is marketed is (−)-(1R,2S)-ephedrine. In the outdated D/L system (+)-ephedrine is also referred to as D-ephedrine and (−)-ephedrine as L-ephedrine (in which case, in the Fisher projection, the phenyl ring is drawn at the bottom). Often, the D/L system (with small caps) and the d/l system (with lower-case) are confused. The result is that the levorotary l-ephedrine is wrongly named L-ephedrine and the dextrorotary d-pseudoephedrine (the diastereomer) wrongly D-pseudoephedrine. The IUPAC names of the two enantiomers are (1R,2S)- respectively (1S,2R)-2-methylamino-1-phenylpropan-1-ol. A synonym is erythro-ephedrine. Detection in body fluids Ephedrine may be quantified in blood, plasma, or urine to monitor possible abuse by athletes, confirm a diagnosis of poisoning, or assist in a medicolegal death investigation. Many commercial immunoassay screening tests directed at the amphetamines cross-react appreciably with ephedrine, but chromatographic techniques can easily distinguish ephedrine from other phenethylamine derivatives. Blood or plasma ephedrine concentrations are typically in the 20–200μg/L range in persons taking the drug therapeutically, 300–3000μg/L in abusers or poisoned patients, and 3–20mg/L in cases of acute fatal overdosage. The current World Anti-Doping Agency (WADA) limit for ephedrine in an athlete's urine is 10μg/mL. History Asia Ephedrine in its natural form, known as máhuáng (麻黄) in traditional Chinese medicine, has been documented in China since the Han dynasty (206 BC – 220 AD) as an antiasthmatic and stimulant. In traditional Chinese medicine, máhuáng has been used as a treatment for asthma and bronchitis for centuries. In 1885, the chemical synthesis of ephedrine was first accomplished by Japanese organic chemist Nagai Nagayoshi based on his research on traditional Japanese and Chinese herbal medicines. The industrial manufacture of ephedrine in China began in the 1920s, when Merck began marketing and selling the drug as ephetonin. Ephedrine exports from China to the West grew from 4 to 216 tonnes between 1926 and 1928. Western medicine Ephedrine was first introduced for medical use in the United States in 1926. It was introduced in 1948 in Vicks Vatronol nose drops (now discontinued) which contained ephedrine sulfate as the active ingredient for rapid nasal decongestion. Society and culture Names Ephedrine is the generic name of the drug and its . Its is ephédrine while its is efedrina. In the case of the hydrochloride salt, its generic name is ephedrine hydrochloride and this is its , , and . In the case of the sulfate salt, its generic name is ephedrine sulfate or ephedrine sulphate and the former is its while the latter is its . A synonym of ephedrine sulfate is isofedrol. 
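A minimal sketch of the concentration bands quoted above (roughly 20–200 μg/L therapeutic, 300–3000 μg/L in abuse or poisoning, 3–20 mg/L in acute fatal overdose). How values falling between or beyond these bands should be reported is not specified in the text, so that handling and the function name are assumptions.

```python
def interpret_ephedrine_level(conc_ug_per_l):
    """Rough interpretation of a blood/plasma ephedrine concentration in micrograms/L."""
    if 20 <= conc_ug_per_l <= 200:
        return "typical therapeutic range"
    if 300 <= conc_ug_per_l <= 3000:
        return "range reported in abuse or poisoning"
    if 3000 < conc_ug_per_l <= 20000:        # 3-20 mg/L expressed in micrograms/L
        return "range reported in acute fatal overdose"
    return "outside the ranges quoted above"

print(interpret_ephedrine_level(150))    # -> typical therapeutic range
print(interpret_ephedrine_level(5000))   # -> range reported in acute fatal overdose
```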
These names all refer to the (1R,2S)-enantiomer of ephedrine. The racemic form of ephedrine is known as racephedrine and this is its and , while the hydrochloride salt of the racemic form is racephedrine hydrochloride and this is its . Recreational use As a phenethylamine, ephedrine has a similar chemical structure to amphetamines and is a methamphetamine analog having the methamphetamine structure with a hydroxyl group at the β position. Because of ephedrine's structural similarity to methamphetamine, it can be used to create methamphetamine using chemical reduction in which ephedrine's hydroxyl group is removed; this has made ephedrine a highly sought-after chemical precursor in the illicit manufacture of methamphetamine. The most popular method for reducing ephedrine to methamphetamine is similar to the Birch reduction, in that it uses anhydrous ammonia and lithium metal in the reaction. The second-most popular method uses red phosphorus and iodine in the reaction with ephedrine. Moreover, ephedrine can be synthesized into methcathinone via simple oxidation. As such, ephedrine is listed as a Table I precursor under the United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances. Use in exercise and sports Ephedrine has been used as a performance-enhancing drug in exercise and sports. It can increase heart rate, blood pressure, and cardiac contractility as well as act as a psychostimulant. Ephedrine is often used in combination with caffeine for performance-enhancing purposes. Other uses In chemical synthesis, ephedrine is used in bulk quantities as a chiral auxiliary group. In saquinavir synthesis, the half-acid is resolved as its salt with l-ephedrine. Legal status Canada In January 2002, Health Canada issued a voluntary recall of all ephedrine products containing more than 8 mg per dose, all combinations of ephedrine with other stimulants such as caffeine, and all ephedrine products marketed for weight-loss or bodybuilding indications, citing a serious risk to health. Ephedrine is still sold as an oral nasal decongestant in 8 mg pills as a natural health product, with a limit of 0.4 g (400 mg) per package, the limit established by the Controlled Drugs and Substances Act as it is considered a Class A precursor. United States In 1997, the FDA proposed a regulation on ephedra (the herb from which ephedrine is obtained), which limited an ephedra dose to 8 mg (of active ephedrine) with no more than 24 mg per day. This proposed rule was withdrawn, in part, in 2000 because of "concerns regarding the agency's basis for proposing a certain dietary ingredient level and a duration of use limit for these products." In 2004, the FDA created a ban on ephedrine alkaloids marketed for reasons other than asthma, colds, allergies, other disease, or traditional Asian use. On April 14, 2005, the U.S. District Court for the District of Utah ruled the FDA did not have proper evidence that low dosages of ephedrine alkaloids are actually unsafe, but on August 17, 2006, the U.S. Court of Appeals for the Tenth Circuit in Denver upheld the FDA's final rule declaring all dietary supplements containing ephedrine alkaloids adulterated, and therefore illegal for marketing in the United States. Furthermore, ephedrine is banned by the NCAA, MLB, NFL, and PGA. Ephedrine is, however, still legal in many applications outside of dietary supplements. Purchasing is currently limited and monitored, with specifics varying from state to state.
The House passed the Combat Methamphetamine Epidemic Act of 2005 as an amendment to the renewal of the USA PATRIOT Act. Signed into law by President George W. Bush on March 6, 2006, the act amended the US Code (21 USC 830) concerning the sale of products containing ephedrine and the closely related drug pseudoephedrine. Both substances are used as precursors in the illicit production of methamphetamine, and to discourage that use the federal statute included the following requirements for merchants who sell these products: A retrievable record of all purchases identifying the name and address of each party to be kept for two years Required verification of proof of identity of all purchasers Required protection and disclosure methods in the collection of personal information Reports to the Attorney General of any suspicious payments or disappearances of the regulated products Non-liquid dose form of regulated product may only be sold in unit-dose blister packs Regulated products are to be sold behind the counter or in a locked cabinet in such a way as to restrict access Daily sales of regulated products not to exceed 3.6g to a single purchaser, without regard to the number of transactions Monthly sales to a single purchaser not to exceed 9g of pseudoephedrine base in regulated products The law gives similar regulations to mail-order purchases, except the monthly sales limit is 7.5g. As a pure herb or tea, má huáng, containing ephedrine, is still sold legally in the US. The law restricts/prohibits its being sold as a dietary supplement (pill) or as an ingredient/additive to other products, like diet pills. Australia Ephedrine and all Ephedra species that contain it are considered Schedule 4 substances under the Poisons Standard. A Schedule 4 drug is considered a Prescription Only Medicine, or Prescription Animal Remedy – Substances, the use or supply of which should be by or on the order of persons permitted by State or Territory legislation to prescribe and should be available from a pharmacist on prescription under the Poisons Standard. South Africa In South Africa, ephedrine was moved to schedule 6 on 27 May 2008, which makes pure ephedrine tablets prescription only. Pills containing ephedrine up to 30 mg per tablet in combination with other medications are still available OTC, schedule 1 and 2, for sinus, head colds, and influenza. Germany Ephedrine was freely available in pharmacies in Germany until 2001. Afterward, access was restricted since it was mostly bought for unindicated uses. Similarly, ephedra can only be bought with a prescription. Since April 2006, all products, including plant parts, that contain ephedrine are only available with a prescription. Sources Agricultural Ephedrine is obtained from the plant Ephedra sinica and other members of the genus Ephedra, from which the name of the substance is derived. Raw materials for the manufacture of ephedrine and traditional Chinese medicines are produced in China on a large scale. As of 2007, companies produced for export US$13 million worth of ephedrine from 30,000 tons of ephedra annually, or about ten times the amount used in traditional Chinese medicine. Synthetic Most of the l-ephedrine produced today for official medical use is made synthetically as the extraction and isolation process from E. sinica is tedious and no longer cost-effective. Biosynthetic Ephedrine was long thought to come from modifying the amino acid L-phenylalanine. L-Phenylalanine would be decarboxylated and subsequently attacked with ω-aminoacetophenone. 
Methylation of this product would then produce ephedrine. This pathway has since been disproven. A new pathway proposed suggests that phenylalanine first forms cinnamoyl-CoA via the enzymes phenylalanine ammonia-lyase and acyl CoA ligase. The cinnamoyl-CoA is then reacted with a hydratase to attach the alcohol functional group. The product is then reacted with a retro-aldolase, forming benzaldehyde. Benzaldehyde reacts with pyruvic acid to attach a 2-carbon unit. This product then undergoes transamination and methylation to form ephedrine and its stereoisomer, pseudoephedrine.
Biology and health sciences
Specific drugs
Health
183048
https://en.wikipedia.org/wiki/Thrombosis
Thrombosis
Thrombosis () is the formation of a blood clot inside a blood vessel, obstructing the flow of blood through the circulatory system. When a blood vessel (a vein or an artery) is injured, the body uses platelets (thrombocytes) and fibrin to form a blood clot to prevent blood loss. Even when a blood vessel is not injured, blood clots may form in the body under certain conditions. A clot, or a piece of the clot, that breaks free and begins to travel around the body is known as an embolus. Thrombosis may occur in veins (venous thrombosis) or in arteries (arterial thrombosis). Venous thrombosis (sometimes called DVT, deep vein thrombosis) leads to a blood clot in the affected part of the body, while arterial thrombosis (and, rarely, severe venous thrombosis) affects the blood supply and leads to damage of the tissue supplied by that artery (ischemia and necrosis). A piece of either an arterial or a venous thrombus can break off as an embolus, which could then travel through the circulation and lodge somewhere else as an embolism. This type of embolism is known as a thromboembolism. Complications can arise when a venous thromboembolism (commonly called a VTE) lodges in the lung as a pulmonary embolism. An arterial embolus may travel further down the affected blood vessel, where it can lodge as an embolism. Signs and symptoms Thrombosis is generally defined by the type of blood vessel affected (arterial or venous thrombosis) and the precise location of the blood vessel or the organ supplied by it. Venous thrombosis Deep vein thrombosis Deep vein thrombosis (DVT) is the formation of a blood clot within a deep vein. It most commonly affects leg veins, such as the femoral vein. Three factors are important in the formation of a blood clot within a deep vein—these are: the rate of blood flow, the thickness of the blood and qualities of the vessel wall. Classical signs of DVT include swelling, pain and redness of the affected area. Paget-Schroetter disease Paget-Schroetter disease or upper extremity DVT (UEDVT) is the obstruction of an arm vein (such as the axillary vein or subclavian vein) by a thrombus. The condition usually comes to light after vigorous exercise and usually presents in younger, otherwise healthy people. Men are affected more than women. Budd-Chiari syndrome Budd-Chiari syndrome is the blockage of a hepatic vein or of the hepatic part of the inferior vena cava. This form of thrombosis presents with abdominal pain, ascites and enlarged liver. Treatment varies between therapy and surgical intervention by the use of shunts. Portal vein thrombosis Portal vein thrombosis affects the hepatic portal vein, which can lead to portal hypertension and reduction of the blood supply to the liver. It usually happens in the setting of another disease such as pancreatitis, cirrhosis, diverticulitis or cholangiocarcinoma. Renal vein thrombosis Renal vein thrombosis is the obstruction of the renal vein by a thrombus. This tends to lead to reduced drainage from the kidney. Cerebral venous sinus thrombosis Cerebral venous sinus thrombosis (CVST) is a rare form of stroke which results from the blockage of the dural venous sinuses by a thrombus. Symptoms may include headache, abnormal vision, any of the symptoms of stroke such as weakness of the face and limbs on one side of the body and seizures. The diagnosis is usually made with a CT or MRI scan. The majority of persons affected make a full recovery. The mortality rate is 4.3%. 
Jugular vein thrombosis Jugular vein thrombosis is a condition that may occur due to infection, intravenous drug use or malignancy. Jugular vein thrombosis can have a varying list of complications, including: systemic sepsis, pulmonary embolism, and papilledema. Though characterized by a sharp pain at the site of the vein, it can prove difficult to diagnose, because it can occur at random. Cavernous sinus thrombosis Cavernous sinus thrombosis is a specialised form of cerebral venous sinus thrombosis, where there is thrombosis of the cavernous sinus of the basal skull dura, due to the retrograde spread of infection and endothelial damage from the danger triangle of the face. The facial veins in this area anastomose with the superior and inferior ophthalmic veins of the orbit, which drain directly posteriorly into the cavernous sinus through the superior orbital fissure. Staphyloccoal or Streptococcal infections of the face, for example nasal or upper lip pustules may thus spread directly into the cavernous sinus, causing stroke-like symptoms of double vision, squint, as well as spread of infection to cause meningitis. Arterial thrombosis Arterial thrombosis is the formation of a thrombus within an artery. In most cases, arterial thrombosis follows rupture of atheroma (a fat-rich deposit in the blood vessel wall), and is therefore referred to as atherothrombosis. Arterial embolism occurs when clots then migrate downstream and can affect any organ. Alternatively, arterial occlusion occurs as a consequence of embolism of blood clots originating from the heart ("cardiogenic" emboli). The most common cause is atrial fibrillation, which causes a blood stasis within the atria with easy thrombus formation, but blood clots can develop inside the heart for other reasons too as infective endocarditis. Stroke A stroke is the rapid decline of brain function due to a disturbance in the supply of blood to the brain. This can be due to ischemia, thrombus, embolus (a lodged particle) or hemorrhage (a bleed). In thrombotic stroke, a thrombus (blood clot) usually forms around atherosclerotic plaques. Since blockage of the artery is gradual, the onset of symptomatic thrombotic strokes is slower. Thrombotic stroke can be divided into two categories — large vessel disease or small vessel disease. The former affects vessels such as the internal carotids, vertebral and the circle of Willis. The latter can affect smaller vessels, such as the branches of the circle of Willis. Myocardial infarction Myocardial infarction (MI), or heart attack, is caused by ischemia (restriction in the blood supply), which is often due to the obstruction of a coronary artery by a thrombus. This restriction gives an insufficient supply of oxygen to the heart muscle which then results in tissue death (infarction). A lesion is then formed which is the infarct. MI can quickly become fatal if emergency medical treatment is not received promptly. If diagnosed within 12 hours of the initial episode (attack) then thrombolytic therapy is initiated. Limb ischemia An arterial thrombus or embolus can also form in the limbs, which can lead to acute limb ischemia. Other sites Hepatic artery thrombosis usually occurs as a devastating complication after liver transplantation. Causes Thrombosis prevention is initiated with assessing the risk for its development. Some people have a higher risk of developing thrombosis and its possible development into thromboembolism. Some of these risk factors are related to inflammation. 
"Virchow's triad" has been suggested to describe the three factors necessary for the formation of thrombosis: hemodynamic changes (blood stasis or turbulence), vessel wall (endothelial) injury/dysfunction, and altered blood coagulation (hypercoagulability). Some risk factors predispose for venous thrombosis while others increase the risk of arterial thrombosis. Newborn babies in the neonatal period are also at risk of a thromboembolism. Mechanism Pathogenesis The main causes of thrombosis are given in Virchow's triad which lists thrombophilia, endothelial cell injury, and disturbed blood flow. Generally speaking the risk for thrombosis increases over the life course of individuals, depending on life style factors like smoking, diet, and physical activity, the presence of other diseases like cancer or autoimmune disease, while also platelet properties change in aging individuals which is an important consideration as well. Hypercoagulability Hypercoagulability or thrombophilia, is caused by, for example, genetic deficiencies or autoimmune disorders. Recent studies indicate that white blood cells play a pivotal role in deep vein thrombosis, mediating numerous pro-thrombotic actions. Endothelial cell injury Any inflammatory process, such as trauma, surgery or infection, can cause damage to the endothelial lining of the vessel's wall. The main mechanism is exposure of tissue factor to the blood coagulation system. Inflammatory and other stimuli (such as hypercholesterolemia) can lead to changes in gene expression in endothelium producing to a pro-thrombotic state. When this occurs, endothelial cells downregulate substances such as thrombomodulin, which is a key modulator of thrombin activity. The result is a sustained activation of thrombin and reduced production of protein C and tissue factor inhibitor, which furthers the pro-thrombotic state. Endothelial injury is almost invariably involved in the formation of thrombi in arteries, as high rates of blood flow normally hinder clot formation. In addition, arterial and cardiac clots are normally rich in platelets–which are required for clot formation in areas under high stress due to blood flow. Disturbed blood flow Causes of disturbed blood flow include stagnation of blood flow past the point of injury, or venous stasis which may occur in heart failure, or after long periods of sedentary behaviour, such as sitting on a long airplane flight. Also, atrial fibrillation, causes stagnant blood in the left atrium (LA), or left atrial appendage (LAA), and can lead to a thromboembolism. Cancers or malignancies such as leukemia may cause increased risk of thrombosis by possible activation of the coagulation system by cancer cells or secretion of procoagulant substances (paraneoplastic syndrome), by external compression on a blood vessel when a solid tumor is present, or (more rarely) extension into the vasculature (for example, renal cell cancers extending into the renal veins). Also, treatments for cancer (radiation, chemotherapy) often cause additional hypercoagulability. There are scores that correlate different aspects of patient data (comorbidities, vital signs, and others) to risk of thrombosis, such as the POMPE-C, which stratifies risk of mortality due to pulmonary embolism in patients with cancer, who typically have higher rates of thrombosis. Also, there are several predictive scores for thromboembolic events, such as Padua, Khorana, and ThroLy score. 
Pathophysiology Natural history Fibrinolysis is the physiological breakdown of blood clots by enzymes such as plasmin. Organisation: following the thrombotic event, residual vascular thrombus will be re-organised histologically with several possible outcomes. For an occlusive thrombus (defined as thrombosis within a small vessel that leads to complete occlusion), wound healing will reorganise the occlusive thrombus into collagenous scar tissue, where the scar tissue will either permanently obstruct the vessel, or contract down with myofibroblastic activity to unblock the lumen. For a mural thrombus (defined as a thrombus in a large vessel that restricts the blood flow but does not occlude completely), histological reorganisation of the thrombus does not occur via the classic wound healing mechanism. Instead, the platelet-derived growth factor degranulated by the clotted platelets will attract a layer of smooth muscle cells to cover the clot, and this layer of mural smooth muscle will be vascularised by the blood inside the vessel lumen rather than by the vasa vasorum. Ischemia/infarction: if an arterial thrombus cannot be lysed by the body and it does not embolise, and if the thrombus is large enough to impair or occlude blood flow in the involved artery, then local ischemia or infarction will result. A venous thrombus may or may not be ischemic, since veins distribute deoxygenated blood that is less vital for cellular metabolism. Nevertheless, non-ischemic venous thrombosis may still be problematic, due to the swelling caused by blockage to venous drainage. In deep vein thrombosis this manifests as pain, redness, and swelling; in retinal vein occlusion this may result in macular oedema and visual acuity impairment, which if severe enough can lead to blindness. Embolization A thrombus may become detached and enter circulation as an embolus, finally lodging in and completely obstructing a blood vessel, which unless treated very quickly will lead to tissue necrosis (an infarction) in the area past the occlusion. Venous thrombosis can lead to pulmonary embolism when the migrated embolus becomes lodged in the lung. In people with a "shunt" (a connection between the pulmonary and systemic circulation), either in the heart or in the lung, a venous clot can also end up in the arteries and cause arterial embolism. Arterial embolism can lead to obstruction of blood flow through the blood vessel that is obstructed by it, and a lack of oxygen and nutrients (ischemia) of the downstream tissue. The tissue can become irreversibly damaged, a process known as necrosis. This can affect any organ; for instance, arterial embolism of the brain is one of the causes of stroke. Prevention The use of heparin following surgery is common if there are no issues with bleeding. Generally, a risk-benefit analysis is required, as all anticoagulants lead to an increased risk of bleeding. In people admitted to hospital, thrombosis is a major cause for complications and occasionally death. In the UK, for instance, the Parliamentary Health Select Committee heard in 2005 that the annual rate of death due to thrombosis was 25,000, with at least 50% of these being hospital-acquired. Hence thromboprophylaxis (prevention of thrombosis) is increasingly emphasized. 
In patients admitted for surgery, graded compression stockings are widely used, and in severe illness, prolonged immobility and in all orthopedic surgery, professional guidelines recommend low molecular weight heparin (LMWH) administration, mechanical calf compression or (if all else is contraindicated and the patient has recently developed deep vein thrombosis) the insertion of a vena cava filter. In patients with medical rather than surgical illness, LMWH too is known to prevent thrombosis, and in the United Kingdom the Chief Medical Officer has issued guidance to the effect that preventative measures should be used in medical patients, in anticipation of formal guidelines. Treatment The treatment for thrombosis depends on whether it is in a vein or an artery, the impact on the person, and the risk of complications from treatment. Anticoagulation Warfarin and vitamin K antagonists are anticoagulants that can be taken orally to reduce thromboembolic occurrence. Where a more effective response is required, heparin can be given (by injection) concomitantly. As a side effect of any anticoagulant, the risk of bleeding is increased, so the international normalized ratio of blood is monitored. Self-monitoring and self-management are safe options for competent patients, though their practice varies. In Germany, about 20% of patients were self-managed while only 1% of U.S. patients did home self-testing (according to one 2012 study). Other medications such as direct thrombin inhibitors and direct Xa inhibitors are increasingly being used instead of warfarin. Thrombolysis Thrombolysis is the pharmacological destruction of blood clots by administering thrombolytic drugs including recombinant tissue plasminogen activator, which enhances the normal destruction of blood clots by the body's enzymes. This carries an increased risk of bleeding so is generally only used for specific situations (such as severe stroke or a massive pulmonary embolism). Surgery Arterial thrombosis may require surgery if it causes acute limb ischemia. Endovascular treatment Mechanical clot retrieval and catheter-guided thrombolysis are used in certain situations. Antiplatelet agents Arterial thrombosis is platelet-rich, and inhibition of platelet aggregation with antiplatelet drugs such as aspirin may reduce the risk of recurrence or progression. Targeting ischemia/reperfusion injury With reperfusion comes ischemia/reperfusion (IR) injury (IRI), which paradoxically causes cell death in reperfused tissue and contributes significantly to post-reperfusion mortality and morbidity. For example, in a feline model of intestinal ischemia, four hours of ischemia resulted in less injury than three hours of ischemia followed by one hour of reperfusion. In ST-elevation myocardial infarction (STEMI), IRI contributes up to 50% of final infarct size despite timely primary percutaneous coronary intervention. This is a key reason for the continued high mortality and morbidity in these conditions, despite endovascular reperfusion treatments and continuous efforts to improve timeliness and access to these treatments. Hence, protective therapies are required to attenuate IRI alongside reperfusion in acute ischemic conditions to improve clinical outcomes. Therapeutic strategies that have potential to improve clinical outcomes in reperfused STEMI patients include remote ischemic conditioning (RIC), exenatide, and metoprolol. These have emerged amongst a multitude of cardioprotective interventions investigated with largely neutral clinical data. 
Of these, RIC has the most robust clinical evidence, especially in the context of STEMI, but also emerging for other indications such as acute ischemic stroke and aneurysmal subarachnoid hemorrhage. Neonatal thrombosis Treatment options for full-term and preterm babies who develop thromboembolism include expectant management (with careful observation), nitroglycerin ointment, pharmacological therapy (thrombolytics and/or anticoagulants), and surgery. The evidence supporting these treatment approaches is weak. For anticoagulant treatment, it is not clear if unfractionated and/or low molecular weight heparin treatment is effective at decreasing mortality and serious adverse events in this population. There is also insufficient evidence to understand the risk of adverse effects associated with these treatment approaches in term or preterm infants.
Biology and health sciences
Cardiovascular disease
Health
183083
https://en.wikipedia.org/wiki/Galaxy%20rotation%20curve
Galaxy rotation curve
The rotation curve of a disc galaxy (also called a velocity curve) is a plot of the orbital speeds of visible stars or gas in that galaxy versus their radial distance from that galaxy's centre. It is typically rendered graphically as a plot, and the data observed from each side of a spiral galaxy are generally asymmetric, so that data from each side are averaged to create the curve. A significant discrepancy exists between the experimental curves observed, and a curve derived by applying gravity theory to the matter observed in a galaxy. Theories involving dark matter are the main postulated solutions to account for the variance. The rotational/orbital speeds of galaxies/stars do not follow the rules found in other orbital systems such as stars/planets and planets/moons that have most of their mass at the centre. Stars revolve around their galaxy's centre at equal or increasing speed over a large range of distances. In contrast, the orbital velocities of planets in planetary systems and moons orbiting planets decline with distance according to Kepler’s third law. This reflects the mass distributions within those systems. The mass estimations for galaxies based on the light they emit are far too low to explain the velocity observations. The galaxy rotation problem is the discrepancy between observed galaxy rotation curves and the theoretical prediction, assuming a centrally dominated mass associated with the observed luminous material. When mass profiles of galaxies are calculated from the distribution of stars in spirals and mass-to-light ratios in the stellar disks, they do not match with the masses derived from the observed rotation curves and the law of gravity. A solution to this conundrum is to hypothesize the existence of dark matter and to assume its distribution from the galaxy's center out to its halo. Thus the discrepancy between the two curves can be accounted for by adding a dark matter halo surrounding the galaxy. Though dark matter is by far the most accepted explanation of the rotation problem, other proposals have been offered with varying degrees of success. Of the possible alternatives, one of the most notable is modified Newtonian dynamics (MOND), which involves modifying the laws of gravity. History In 1932, Jan Hendrik Oort became the first to report that measurements of the stars in the solar neighborhood indicated that they moved faster than expected when a mass distribution based upon visible matter was assumed, but these measurements were later determined to be essentially erroneous. In 1939, Horace Babcock reported in his PhD thesis measurements of the rotation curve for Andromeda which suggested that the mass-to-luminosity ratio increases radially. He attributed that to either the absorption of light within the galaxy or to modified dynamics in the outer portions of the spiral and not to any form of missing matter. Babcock's measurements turned out to disagree substantially with those found later, and the first measurement of an extended rotation curve in good agreement with modern data was published in 1957 by Henk van de Hulst and collaborators, who studied M31 with the Dwingeloo Radio Observatory's newly commissioned 25-meter radio telescope. A companion paper by Maarten Schmidt showed that this rotation curve could be fit by a flattened mass distribution more extensive than the light. In 1959, Louise Volders used the same telescope to demonstrate that the spiral galaxy M33 also does not spin as expected according to Keplerian dynamics. 
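The contrast described above can be made concrete with a short numerical sketch: around a central point mass the circular speed falls off as the inverse square root of radius, whereas a flat rotation curve of speed v implies an enclosed mass v^2 r / G that keeps growing with radius. The galaxy mass and the 200 km/s speed used below are purely illustrative.

```python
import numpy as np

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # solar mass, kg
KPC = 3.086e19           # kiloparsec, m

def v_keplerian(r_kpc, m_central_msun):
    """Circular speed around a central point mass (km/s)."""
    r = r_kpc * KPC
    return np.sqrt(G * m_central_msun * M_SUN / r) / 1e3

def enclosed_mass_flat(r_kpc, v_flat_kms):
    """Mass implied inside radius r by a flat rotation curve (solar masses)."""
    r, v = r_kpc * KPC, v_flat_kms * 1e3
    return v**2 * r / G / M_SUN

for r in (5, 10, 20, 40):
    print(f"r = {r:>2} kpc: Keplerian v = {v_keplerian(r, 1e11):6.1f} km/s, "
          f"flat 200 km/s curve implies M(<r) = {enclosed_mass_flat(r, 200):.2e} Msun")
```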
Reporting on NGC 3115, Jan Oort wrote that "the distribution of mass in the system appears to bear almost no relation to that of light... one finds the ratio of mass to light in the outer parts of NGC 3115 to be about 250". On page 302–303 of his journal article, he wrote that "The strongly condensed luminous system appears imbedded in a large and more or less homogeneous mass of great density" and although he went on to speculate that this mass may be either extremely faint dwarf stars or interstellar gas and dust, he had clearly detected the dark matter halo of this galaxy. The Carnegie telescope (Carnegie Double Astrograph) was intended to study this problem of Galactic rotation. In the late 1960s and early 1970s, Vera Rubin, an astronomer at the Department of Terrestrial Magnetism at the Carnegie Institution of Washington, worked with a new sensitive spectrograph that could measure the velocity curve of edge-on spiral galaxies to a greater degree of accuracy than had ever before been achieved. Together with fellow staff-member Kent Ford, Rubin announced at a 1975 meeting of the American Astronomical Society the discovery that most stars in spiral galaxies orbit at roughly the same speed, and that this implied that galaxy masses grow approximately linearly with radius well beyond the location of most of the stars (the galactic bulge). Rubin presented her results in an influential paper in 1980. These results suggested either that Newtonian gravity does not apply universally or that, conservatively, upwards of 50% of the mass of galaxies was contained in the relatively dark galactic halo. Although initially met with skepticism, Rubin's results have been confirmed over the subsequent decades. If Newtonian mechanics is assumed to be correct, it would follow that most of the mass of the galaxy had to be in the galactic bulge near the center and that the stars and gas in the disk portion should orbit the center at decreasing velocities with radial distance from the galactic center (the dashed line in Fig. 1). Observations of the rotation curve of spirals, however, do not bear this out. Rather, the curves do not decrease in the expected inverse square root relationship but are "flat", i.e. outside of the central bulge the speed is nearly a constant (the solid line in Fig. 1). It is also observed that galaxies with a uniform distribution of luminous matter have a rotation curve that rises from the center to the edge, and most low-surface-brightness galaxies (LSB galaxies) have the same anomalous rotation curve. The rotation curves might be explained by hypothesizing the existence of a substantial amount of matter permeating the galaxy outside of the central bulge that is not emitting light in the mass-to-light ratio of the central bulge. The material responsible for the extra mass was dubbed dark matter, the existence of which was first posited in the 1930s by Jan Oort in his measurements of the Oort constants and Fritz Zwicky in his studies of the masses of galaxy clusters. Dark matter While the observed galaxy rotation curves were one of the first indications that some mass in the universe may not be visible, many different lines of evidence now support the concept of cold dark matter as the dominant form of matter in the universe. 
Among the lines of evidence are mass-to-light ratios which are much too low without a dark matter component, the amount of hot gas detected in galactic clusters by X-ray astronomy, and measurements of cluster mass with the Sunyaev–Zeldovich effect and with gravitational lensing. Models of the formation of galaxies are based on their dark matter halos. The existence of non-baryonic cold dark matter (CDM) is today a major feature of the Lambda-CDM model that describes the cosmology of the universe and matches high-precision astrophysical observations. Further investigations The rotational dynamics of galaxies are well characterized by their position on the Tully–Fisher relation, which shows that for spiral galaxies the rotational velocity is uniquely related to their total luminosity. A consistent way to predict the rotational velocity of a spiral galaxy is to measure its bolometric luminosity and then read its rotation rate from its location on the Tully–Fisher diagram. Conversely, knowing the rotational velocity of a spiral galaxy gives its luminosity. Thus the magnitude of the galaxy rotation is related to the galaxy's visible mass. While precise fitting of the bulge, disk, and halo density profiles is a rather complicated process, it is straightforward to model the observables of rotating galaxies through this relationship. So, while state-of-the-art cosmological and galaxy formation simulations of dark matter with normal baryonic matter included can be matched to galaxy observations, there is not yet any straightforward explanation as to why the observed scaling relationship exists. Additionally, detailed investigations of the rotation curves of low-surface-brightness galaxies (LSB galaxies) in the 1990s and of their position on the Tully–Fisher relation showed that LSB galaxies had to have dark matter haloes that are more extended and less dense than those of galaxies with high surface brightness, and thus surface brightness is related to the halo properties. Such dark-matter-dominated dwarf galaxies may hold the key to solving the dwarf galaxy problem of structure formation. Very importantly, the analysis of the inner parts of low and high surface brightness galaxies showed that the shape of the rotation curves in the centre of dark-matter dominated systems indicates a profile different from the NFW spatial mass distribution profile. This so-called cuspy halo problem is a persistent problem for the standard cold dark matter theory. Simulations involving the feedback of stellar energy into the interstellar medium in order to alter the predicted dark matter distribution in the innermost regions of galaxies are frequently invoked in this context. Halo density profiles In order to accommodate a flat rotation curve, a density profile for a galaxy and its environs must be different from one that is centrally concentrated. Newton's version of Kepler's Third Law implies that the spherically symmetric, radial density profile is ρ(r) = v(r)² / (4πGr²), where v(r) is the radial orbital velocity profile and G is the gravitational constant. This profile closely matches the expectations of a singular isothermal sphere profile: if v(r) is approximately constant, then the density falls off as ρ ∝ r⁻² down to some inner "core radius", inside which the density is assumed constant. Observations do not comport with such a simple profile, as reported by Navarro, Frenk, and White in a seminal 1996 paper.
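As a quick numerical illustration of the isothermal-sphere behaviour just described (not from the article; the 220 km/s flat speed is an assumed, Milky-Way-like value), the density implied by a constant rotation speed can be evaluated directly from ρ(r) = v²/(4πGr²):

import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
KPC = 3.086e19     # metres per kiloparsec
PC = 3.086e16      # metres per parsec
M_SUN = 1.989e30   # solar mass, kg

v = 220e3          # assumed flat rotation speed, m/s

for r_kpc in (5, 10, 20, 40):
    r = r_kpc * KPC
    rho = v**2 / (4 * math.pi * G * r**2)       # implied density, kg/m^3
    rho_msun_pc3 = rho * PC**3 / M_SUN          # same density in M_sun per pc^3
    print(f"r = {r_kpc:2d} kpc   rho = {rho:.2e} kg/m^3  "
          f"(~{rho_msun_pc3:.3f} M_sun/pc^3)")

Doubling the radius reduces the implied density by a factor of four, the r⁻² scaling of the singular isothermal sphere.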
Navarro, Frenk, and White then remarked that a "gently changing logarithmic slope" for a density profile function could also accommodate approximately flat rotation curves over large scales. They found the famous Navarro–Frenk–White profile, ρ(r) = ρ0 / [(r/Rs)(1 + r/Rs)²], which is consistent both with N-body simulations and with observations; here the central density ρ0 and the scale radius Rs are parameters that vary from halo to halo. Because the slope of the density profile diverges at the center, other alternative profiles have been proposed, for example the Einasto profile, which has exhibited better agreement with certain dark matter halo simulations. Observations of orbit velocities in spiral galaxies suggest a mass structure according to v(r) = (r dΦ/dr)^(1/2), with Φ the galaxy's gravitational potential. Since observations of galaxy rotation do not match the distribution expected from application of Kepler's laws, they do not match the distribution of luminous matter. This implies that spiral galaxies contain large amounts of dark matter or, alternatively, the existence of exotic physics in action on galactic scales. The additional invisible component becomes progressively more conspicuous in each galaxy at outer radii and among galaxies in the less luminous ones. A popular interpretation of these observations is that about 26% of the mass of the Universe is composed of dark matter, a hypothetical type of matter which does not emit or interact with electromagnetic radiation. Dark matter is believed to dominate the gravitational potential of galaxies and clusters of galaxies. Under this theory, galaxies are baryonic condensations of stars and gas (namely hydrogen and helium) that lie at the centers of much larger haloes of dark matter, affected by a gravitational instability caused by primordial density fluctuations. Many cosmologists strive to understand the nature and the history of these ubiquitous dark haloes by investigating the properties of the galaxies they contain (i.e. their luminosities, kinematics, sizes, and morphologies). The measurement of the kinematics (their positions, velocities and accelerations) of the observable stars and gas has become a tool to investigate the nature of dark matter, as to its content and distribution relative to that of the various baryonic components of those galaxies. Alternatives to dark matter There have been a number of attempts to solve the problem of galaxy rotation by modifying gravity without invoking dark matter. One of the most discussed is modified Newtonian dynamics (MOND), originally proposed by Mordehai Milgrom in 1983, which modifies the Newtonian force law at low accelerations to enhance the effective gravitational attraction. MOND has had a considerable amount of success in predicting the rotation curves of low-surface-brightness galaxies, matching the baryonic Tully–Fisher relation, and the velocity dispersions of the small satellite galaxies of the Local Group. Using data from the Spitzer Photometry and Accurate Rotation Curves (SPARC) database, a group has found that the radial acceleration traced by rotation curves could be predicted just from the observed baryon distribution (that is, including stars and gas but not dark matter). This so-called radial acceleration relation (RAR) might be fundamental for understanding the dynamics of galaxies. The same relation provided a good fit for 2693 samples in 153 rotating galaxies, with diverse shapes, masses, sizes, and gas fractions.
Brightness in the near infrared, where the more stable light from red giants dominates, was used to estimate the density contribution due to stars more consistently. The results are consistent with MOND, and place limits on alternative explanations involving dark matter alone. However, cosmological simulations within a Lambda-CDM framework that include baryonic feedback effects reproduce the same relation, without the need to invoke new dynamics (such as MOND). Thus, the contribution of dark matter itself can be fully predicted from that of the baryons, once the feedback effects due to the dissipative collapse of baryons are taken into account. MOND is not a relativistic theory, although relativistic theories which reduce to MOND have been proposed, such as tensor–vector–scalar gravity (TeVeS), scalar–tensor–vector gravity (STVG), and the f(R) theory of Capozziello and De Laurentis. Attempts to model galaxy rotation using a general relativity metric, which claimed that the rotation curves for the Milky Way, NGC 3031, NGC 3198 and NGC 7331 are consistent with the mass density distributions of the visible matter alone, have been disputed, as has other similar work. According to a recent analysis of the data produced by the Gaia spacecraft, it would seem possible to explain at least the Milky Way's rotation curve without requiring any dark matter if instead of a Newtonian approximation the entire set of equations of general relativity is adopted.
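MOND's connection to the baryonic Tully–Fisher relation, mentioned above, can be illustrated with a short sketch. In the low-acceleration (deep-MOND) limit, the predicted asymptotic flat speed depends only on the baryonic mass, v_flat⁴ = G·M_b·a0. The baryonic masses below are illustrative values, not taken from the article.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
A0 = 1.2e-10         # Milgrom's acceleration constant, m/s^2 (approximate)
M_SUN = 1.989e30     # solar mass, kg

examples = [
    ("dwarf galaxy", 1e9),              # assumed baryonic mass in solar masses
    ("Milky-Way-like spiral", 6e10),
    ("massive spiral", 2e11),
]

for name, m_msun in examples:
    v_flat = (G * m_msun * M_SUN * A0) ** 0.25    # deep-MOND flat speed, m/s
    print(f"{name:>22}: M_b = {m_msun:.0e} M_sun -> v_flat ~ {v_flat/1e3:5.0f} km/s")

Because v_flat scales as the fourth root of the baryonic mass, a hundredfold increase in mass only roughly triples the predicted flat speed, which is the slope of the baryonic Tully–Fisher relation.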
Physical sciences
Basics_2
Astronomy
183089
https://en.wikipedia.org/wiki/List%20of%20unsolved%20problems%20in%20physics
List of unsolved problems in physics
The following is a list of notable unsolved problems grouped into broad areas of physics. Some of the major unsolved problems in physics are theoretical, meaning that existing theories seem incapable of explaining a certain observed phenomenon or experimental result. The others are experimental, meaning that there is a difficulty in creating an experiment to test a proposed theory or investigate a phenomenon in greater detail. There are still some questions beyond the Standard Model of physics, such as the strong CP problem, neutrino mass, matter–antimatter asymmetry, and the nature of dark matter and dark energy. Another problem lies within the mathematical framework of the Standard Model itself—the Standard Model is inconsistent with that of general relativity, to the point that one or both theories break down under certain conditions (for example, within known spacetime singularities like the Big Bang and the centres of black holes beyond the event horizon). General physics Theory of everything: Is there a singular, all-encompassing, coherent theoretical framework of physics that fully explains and links together all physical aspects of the universe? Dimensionless physical constants: At the present time, the values of various dimensionless physical constants cannot be calculated; they can be determined only by physical measurement. What is the minimum number of dimensionless physical constants from which all other dimensionless physical constants can be derived? Are dimensional physical constants necessary at all? Quantum gravity Quantum gravity: Can quantum mechanics and general relativity be realized as a fully consistent theory (perhaps as a quantum field theory)? Is spacetime fundamentally continuous or discrete? Would a consistent theory involve a force mediated by a hypothetical graviton, or be a product of a discrete structure of spacetime itself (as in loop quantum gravity)? Are there deviations from the predictions of general relativity at very small or very large scales or in other extreme circumstances that flow from a quantum gravity mechanism? Black holes, black hole information paradox, and black hole radiation: Do black holes produce thermal radiation, as expected on theoretical grounds? Does this radiation contain information about their inner structure, as suggested by gauge–gravity duality, or not, as implied by Hawking's original calculation? If not, and black holes can evaporate away, what happens to the information stored in them (since quantum mechanics does not provide for the destruction of information)? Or does the radiation stop at some point, leaving a black hole remnant? Is there another way to probe their internal structure somehow, if such a structure even exists? The cosmic censorship hypothesis and the chronology protection conjecture: Can singularities not hidden behind an event horizon, known as "naked singularities", arise from realistic initial conditions, or is it possible to prove some version of the "cosmic censorship hypothesis" of Roger Penrose, which proposes that this is impossible? Similarly, will the closed timelike curves that arise in some solutions to the equations of general relativity (and that imply the possibility of backwards time travel) be ruled out by a theory of quantum gravity that unites general relativity with quantum mechanics, as suggested by the "chronology protection conjecture" of Stephen Hawking? Holographic principle: Is it true that quantum gravity admits a lower-dimensional description that does not contain gravity? 
A well-understood example of holography is the AdS/CFT correspondence in string theory. Similarly, can quantum gravity in a de Sitter space be understood using dS/CFT correspondence? Can the AdS/CFT correspondence be vastly generalized to the gauge–gravity duality for arbitrary asymptotic spacetime backgrounds? Are there theories of quantum gravity other than string theory that admit a holographic description? Quantum spacetime or the emergence of spacetime: Is the nature of spacetime at the Planck scale very different from the continuous classical dynamical spacetime that exists in general relativity? In loop quantum gravity, the spacetime is postulated to be discrete from the beginning. In string theory, although originally spacetime was considered just like in general relativity (with the only difference being supersymmetry), recent research building upon the Ryu–Takayanagi conjecture has shown that spacetime in string theory emerges through quantum information theoretic concepts such as entanglement entropy in the AdS/CFT correspondence. However, how exactly the familiar classical spacetime emerges within string theory or the AdS/CFT correspondence is still not well understood. Problem of time: In quantum mechanics, time is a classical background parameter, and the flow of time is universal and absolute. In general relativity, time is one component of four-dimensional spacetime, and the flow of time changes depending on the curvature of spacetime and the spacetime trajectory of the observer. How can these two concepts of time be reconciled? Quantum physics Yang–Mills theory: Given an arbitrary compact gauge group, does a non-trivial quantum Yang–Mills theory with a finite mass gap exist? (This problem is also listed as one of the Millennium Prize Problems in mathematics.) Quantum field theory (this is a generalization of the previous problem): Is it possible to construct, in a mathematically rigorous way, a quantum field theory in 4-dimensional spacetime that includes interactions and does not resort to perturbative methods? Cosmology and general relativity Axis of evil: Some large features of the microwave sky at distances of over 13 billion light years appear to be aligned with both the motion and orientation of the solar system. Is this due to systematic errors in processing, contamination of results by local effects, an unexplained violation of the Copernican principle and thus the concordance model, or are these features simply statistically insignificant? Fine-tuned universe: The values of the fundamental physical constants are in a narrow range that is necessary to support carbon-based life. Is this because there are an infinite number of other universes with different constants, or are our universe's constants the result of chance, intelligent design (by a personal being such as the theist's "God"), or some other factor or process?
Physical sciences
Physics basics: General
Physics
183143
https://en.wikipedia.org/wiki/Frigatebird
Frigatebird
Frigatebirds are a family of seabirds called Fregatidae which are found across all tropical and subtropical oceans. The five extant species are classified in a single genus, Fregata. All have predominantly black plumage, long, deeply forked tails and long hooked bills. Females have white underbellies and males have a distinctive red gular pouch, which they inflate during the breeding season to attract females. Their wings are long and pointed and can span up to , the largest wing area to body weight ratio of any bird. Able to soar for weeks on wind currents, frigatebirds spend most of the day in flight hunting for food, and roost on trees or cliffs at night. Their main prey are fish and squid, caught when chased to the water surface by large predators such as tuna. Frigatebirds are referred to as kleptoparasites as they occasionally rob other seabirds for food, and are known to snatch seabird chicks from the nest. Seasonally monogamous, frigatebirds nest colonially. A rough nest is constructed in low trees or on the ground on remote islands. A single egg is laid each breeding season. The duration of parental care is among the longest of any bird species; frigatebirds are only able to breed every other year. The Fregatidae are a sister group to Suloidea which consists of cormorants, darters, gannets, and boobies. Three of the five extant species of frigatebirds are widespread (the magnificent, great and lesser frigatebirds), while two are endangered (the Christmas Island and Ascension Island frigatebirds) and restrict their breeding habitat to one small island each. The oldest fossils date to the early Eocene, around 50 million years ago. Classified in the genus Limnofregata, the three species had shorter, less-hooked bills and longer legs, and lived in a freshwater environment. Taxonomy Etymology The term Frigate Bird itself was used in 1738 by the English naturalist and illustrator Eleazar Albin in his A Natural History of the Birds. The book included an illustration of the male bird showing the red gular pouch. Like the genus name, the English term is derived from the French mariners' name for the bird la frégate—a frigate or fast warship. The etymology was mentioned by French naturalist Jean-Baptiste Du Tertre when describing the bird in 1667. Alternative names and spellings include "frigate bird", "frigate-bird", "frigate", "frigate-petrel". Christopher Columbus encountered frigatebirds when passing the Cape Verde Islands on his first voyage across the Atlantic in 1492. In his journal entry for 29 September he used the word rabiforçado, modern Spanish rabihorcado or forktail. In the Caribbean frigatebirds were called Man-of-War birds by English mariners. This name was used by the English explorer William Dampier in his book An Account of a New Voyage Around the World published in 1697: The Man-of-War (as it is called by the English) is about the bigness of a Kite, and in shape like it, but black; and the neck is red. It lives on Fish yet never lights on the water, but soars aloft like a Kite, and when it sees its prey, it flys down head foremost to the Waters edge, very swiftly takes its prey out of the Sea with his Bill, and immediately mounts again as swiftly; never touching the Water with his Bill. His Wings are very long; his feet are like other Land-fowl, and he builds on Trees, where he finds any; but where they are wanting on the ground. 
Classification Frigatebirds were grouped with cormorants, and sulids (gannets and boobies) as well as pelicans in the genus Pelecanus by Linnaeus in 1758 in the tenth edition of his Systema Naturae. He described the distinguishing characteristics as a straight bill hooked at the tip, linear nostrils, a bare face, and fully webbed feet. The genus Fregata was introduced by French naturalist Bernard Germain de Lacépède in 1799. The type species was designated as the Ascension frigatebird by French zoologist François Marie Daudin in 1802. Louis Pierre Vieillot described the genus name Tachypetes in 1816 for the great frigatebird. The genus name Atagen had been coined by German naturalist Paul Möhring in 1752, though this has no validity as it predates the official beginning of Linnaean taxonomy. In 1874, English zoologist Alfred Henry Garrod published a study where he had examined various groups of birds and recorded which muscles of a selected group of five they possessed or lacked. Noting that the muscle patterns were different among the steganopodes (classical Pelecaniformes), he resolved that there were divergent lineages in the group that should be in separate families, including frigatebirds in their own family Fregatidae. Urless N. Lanham observed in 1947 that frigatebirds bore some skeletal characteristics more in common with Procellariiformes than Pelecaniformes, though concluded they still belonged in the latter group (as suborder Fregatae), albeit as an early offshoot. Martyn Kennedy and colleagues derived a cladogram based on behavioural characteristics of the traditional Pelecaniformes, calculating the frigatebirds to be more divergent than pelicans from a core group of gannets, darters and cormorants, and tropicbirds the most distant lineage. The classification of this group as the traditional Pelecaniformes, united by feet that are totipalmate (with all four toes linked by webbing) and the presence of a gular pouch, persisted until the early 1990s. The DNA–DNA hybridization studies of Charles Sibley and Jon Edward Ahlquist placed the frigatebirds in a lineage with penguins, loons, petrels and albatrosses. Subsequent genetic studies place the frigatebirds as a sister group to the group Suloidea, which comprises the gannets and boobies, cormorants and darters. Microscopic analysis of eggshell structure by Konstantin Mikhailov in 1995 found that the eggshells of frigatebirds resembled those of other Pelecaniformes in having a covering of thick microglobular material over the crystalline shells. Molecular studies have consistently shown that pelicans, the namesake family of the Pelecaniformes, are actually more closely related to herons, ibises and spoonbills, the hamerkop and the shoebill than to the remaining species. In recognition of this, the order comprising the frigatebirds and Suloidea was renamed Suliformes in 2010. In 1994, the family name Fregatidae, cited as described in 1867 by French naturalists Côme-Damien Degland and Zéphirin Gerbe, was conserved under Article 40(b) of the International Code of Zoological Nomenclature in preference to the 1840 description Tachypetidae by Johann Friedrich von Brandt. This was because the genus names Atagen and Tachypetes had been synonymised with Fregata before 1961, resulting in the aligning of family and genus names. Fossil record The Eocene frigatebird genus Limnofregata comprises birds whose fossil remains were recovered from prehistoric freshwater environments, unlike the marine preferences of their modern-day relatives. 
They had shorter less-hooked bills and longer legs, and longer slit-like nasal openings. Three species have been described from fossil deposits in the western United States, two—L. azygosternon and L. hasegawai—from the Green River Formation (48–52 million years old) and one—L. hutchisoni—from the Wasatch Formation (between 53 and 55 million years of age). Fossil material indistinguishable from living species dating to the Pleistocene and Holocene has been recovered from Ascension Island (for F. aquila), Saint Helena Island, both in the southern Atlantic Ocean, and also from various islands in the Pacific Ocean (for F. minor and F. ariel). A tarsometatarsus and pedal phalanx from the Lower Eocene London Clay of the Walton-on-the-Naze resembles Limnofregata, but being notably larger and distinct in other ways, was tentatively referred to Marinavis longirostris due to similar stratigraphy, geography, size, and presumed frigatebird affinities. A cladistic study of the skeletal and bone morphology of the classical Pelecaniformes and relatives found that the frigatebirds formed a clade with Limnofregata. Birds of the two genera have 15 cervical vertebrae, unlike almost all other Ciconiiformes, Suliformes and Pelecaniformes, which have 17. The age of Limnofregata indicates that these lineages had separated by the Eocene. Living species and infrageneric classification The type species of the genus is the Ascension frigatebird (Fregata aquila). For many years, the consensus was to recognise only two species of frigatebird, with larger birds as F. aquila and smaller as F. ariel. In 1914 the Australian ornithologist Gregory Mathews delineated five species, which remain valid. Analysis of ribosomal and mitochondrial DNA indicated that the five species had diverged from a common ancestor only recently—as little as 1.5 million years ago. There are two species pairs, the great and Christmas Island frigatebirds, and the magnificent and Ascension frigatebirds, while the fifth species, the lesser frigatebird, is an early offshoot of the common ancestor of the other four species. Two subspecies of the magnificent, three subspecies of the lesser and five subspecies of the great frigatebird are recognised. Description Frigatebirds are large slender mostly black-plumaged seabirds, with the five species similar in appearance to each other. The largest species is the magnificent frigatebird, which reaches in length, with three of the remaining four almost as large. The lesser frigatebird is substantially smaller, at around long. Frigatebirds exhibit marked sexual dimorphism; females are larger and up to 25 percent heavier than males, and generally have white markings on their underparts. Frigatebirds have short necks and long, slender hooked bills. Their long narrow wings (male wingspan can reach ) taper to points. Their wings have eleven primary flight feathers, with the tenth the longest and eleventh a vestigial feather only, and 23 secondaries. Their tails are deeply forked, though this is not apparent unless the tail is fanned. The tail and wings give them a distinctive 'W' silhouette in flight. The legs and face are fully feathered. The totipalmate feet are short and weak, the webbing is reduced and part of each toe is free. The bones of frigatebirds are markedly pneumatic, making them very light and contributing only 5% to total body weight. The pectoral girdle is strong as its bones are fused. 
The pectoral muscles are well-developed, and weigh as much as the frigatebird's feathers—around half the body weight is made up equally of these muscles and feathers. The males have inflatable red-coloured throat pouches called gular pouches, which they inflate to attract females during the mating season. The gular sac is, perhaps, the most striking frigatebird feature. These can only deflate slowly, so males that are disturbed will fly off with pouches distended for some time. Frigatebirds remain in the air and do not settle on the ocean. They produce very little oil from their uropygial glands so their feathers would become sodden if they settled on the surface. In addition, with their long wings relative to body size, they would have great difficulty taking off again. Distribution and habitat Frigatebirds are found over tropical oceans, and ride warm updrafts under cumulus clouds. Their range coincides with availability of food such as flying fish, and with the trade winds, which provide the windy conditions that facilitate their flying. They are rare vagrants to temperate regions and not found in polar latitudes. Adults are generally sedentary, remaining near the islands where they breed. However, male frigatebirds have been recorded dispersing great distances after departing a breeding colony—one male great frigatebird relocated from Europa Island in the Mozambique Channel to the Maldives away, and a male magnificent frigatebird flew from French Guiana to Trinidad. In 2015, a magnificent frigatebird was spotted as far north as Michigan. Great frigatebirds marked with wing tags on Tern Island in the French Frigate Shoals were found to regularly travel the to Johnston Atoll, although one was reported in Quezon City in the Philippines. Genetic testing seems to indicate that the species has fidelity to their site of hatching despite their high mobility. Young birds may disperse far and wide, with distances of up to recorded. Behaviour and ecology Having the largest wing-area-to-body-weight ratio of any bird, frigatebirds are essentially aerial. This allows them to soar continuously and only rarely flap their wings. One great frigatebird, being tracked by satellite in the Indian Ocean, stayed aloft for two months. They can fly higher than 4,000 meters in freezing conditions. Like swifts they are able to spend the night on the wing, but they will also return to an island to roost on trees or cliffs. Field observations in the Mozambique Channel found that great frigatebirds could remain on the wing for up to 12 days while foraging. Highly adept, they use their forked tails for steering during flight and make strong deep wing-beats, though not suited to flying by sustained flapping. Frigatebirds bathe and clean themselves in flight by flying low and splashing at the water surface before preening and scratching afterwards. Conversely, frigatebirds do not swim and with their short legs cannot walk well or take off from the sea easily. According to a study in the journal Nature Communications, scientists attached an accelerometer and an electroencephalogram testing device on nine great frigatebirds to measure if they slept during flight. The study found the birds do sleep, but usually only using one hemisphere of the brain at a time and usually sleep while ascending at higher altitudes. The amount of time mid-air sleeping was less than an hour and always at night. 
The average life span is unknown but in common with seabirds such as the wandering albatross and Leach's storm petrel, frigatebirds are long-lived. In 2002, 35 ringed great frigatebirds were recovered on Tern Island in the Hawaiian Islands. Of these ten were older than 37 years and one was at least 44 years of age. Despite having dark plumage in a tropical climate, frigatebirds have found ways not to overheat—particularly as they are exposed to full sunlight when on the nest. They ruffle feathers to lift them away from the skin and improve air circulation, and can extend and upturn their wings to expose the hot undersurface to the air and lose heat by evaporation and convection. Frigatebirds also place their heads in the shade of their wings, and males frequently flutter their gular pouches. Unlike most seabirds, frigatebirds are thermal soarers, using thermals to glide. This is in contrast to birds like albatrosses, which are dynamic soarers, using winds produced by the waves to stay aloft. Breeding behaviour Frigatebirds typically breed on remote oceanic islands, generally in colonies of up to 5000 birds. Within these colonies, they most often nest in groups of 10 to 30 (or rarely 100) individuals. Breeding can occur at any time of year, often prompted by commencement of the dry season or plentiful food. Frigatebirds have the most elaborate mating displays of all seabirds. The male birds take up residence in the colony in groups of up to thirty individuals. They display to females flying overhead by pointing their bills upwards, inflating their red throat pouches and vibrating their outstretched wings, showing the lighter wing undersurfaces in the process. They produce a drumming sound by vibrating their bills together and sometimes give a whistling call. The female descends to join a male she has chosen and allows him to take her bill in his. The pair also engages in mutual "head-snaking". After copulation it is generally the male who gathers sticks and the female that constructs the loosely woven nest. The nest is subsequently covered with (and cemented by) guano. Frigatebirds prefer to nest in trees or bushes, though when these are not available they will nest on the ground. A single white egg that weighs up to 6–7% of mother's body mass is laid, and is incubated in turns by both birds for 41 to 55 days. The altricial chicks are naked on hatching and develop a white down. They are continuously guarded by the parents for the first 4–6 weeks and are fed on the nest for 5–6 months. Both parents take turns feeding for the first three months, after which the male's attendance trails off leaving the mother to feed the young for another six to nine months on average. The chicks feed by reaching their heads in their parents' throat and eating the part-regurgitated food. It takes so long to rear a chick that frigatebirds generally breed every other year. The duration of parental care in frigatebirds is among the longest for birds, rivalled only by the southern ground hornbill and some large accipitrids. Frigatebirds take many years to reach sexual maturity. A study of great frigatebirds in the Galapagos Islands found that they only bred once they have acquired the full adult plumage. This was attained by female birds when they were eight to nine years of age and by male birds when they were ten to eleven years of age. Feeding Frigatebirds' feeding habits are pelagic, and they may forage up to 500 km (310 mi) from land. 
They do not land on the water but snatch prey from the ocean surface using their long, hooked bills. They mainly catch small fish such as flying fish, particularly the genera Exocoetus and Cypselurus, that are driven to the surface by predators such as tuna and dolphinfish, but they will also eat cephalopods, particularly squid. Menhaden of the genus Brevoortia can be an important prey item where common, and jellyfish and larger plankton are also eaten. Frigatebirds have learned to follow fishing vessels and take fish from holding areas. Conversely tuna fishermen fish in areas where they catch sight of frigatebirds due to their association with large marine predators. Frigatebirds also at times prey directly on eggs and young of other seabirds, including boobies, petrels, shearwaters and terns, in particular the sooty tern. Frigatebirds will rob other seabirds such as boobies, particularly the red-footed booby, tropicbirds, shearwaters, petrels, terns, gulls and even ospreys of their catch, using their speed and manoeuvrability to outrun and harass their victims until they regurgitate their stomach contents. They may either assail their targets after they have caught their food or circle high over seabird colonies waiting for parent birds to return laden with food. Although frigatebirds are renowned for their kleptoparasitic feeding behaviour, kleptoparasitism is not thought to play a significant part of the diet of any species, and is instead a supplement to food obtained by hunting. A study of great frigatebirds stealing from masked boobies estimated that the frigatebirds could at most obtain 40% of the food they needed, and on average obtained only 5%. Unlike most other seabirds, frigatebirds drink freshwater when they come across it, by swooping down and gulping with their bills. Parasites Frigatebirds are unusual among seabirds in that they often carry blood parasites. Blood-borne protozoa of the genus Haemoproteus have been recovered from four of the five species. Bird lice of the ischnoceran genus Pectinopygus and amblyceran genus Colpocephalum and species Fregatiella aurifasciata have been recovered from magnificent and great frigatebirds of the Galapagos Islands. Frigatebirds tended to have more parasitic lice than did boobies analysed in the same study. A heavy chick mortality at a large and important colony of the magnificent frigatebird, located on Île du Grand Connétable off French Guiana, was recorded in summer 2005. Chicks showed nodular skin lesions, feather loss and corneal changes, with around half the year's progeny perishing across the colony. An alphaherpesvirus was isolated and provisionally named Fregata magnificens herpesvirus, though it was unclear whether it caused the outbreak or affected birds already suffering malnutrition. Status and conservation Populations and threats Two of the five species are considered at risk. In 2003, a survey of the four colonies of the critically endangered Christmas Island frigatebirds counted 1200 breeding pairs. As frigatebirds normally breed every other year, the total adult population was estimated to lie between 1800 and 3600 pairs. Larger numbers formerly bred on the island, but the clearance of breeding habitat during World War II and dust pollution from phosphate mining have contributed to the decrease. The population of the vulnerable Ascension frigatebird has been estimated at around 12,500 individuals. The birds formerly bred on Ascension Island itself, but the colonies were exterminated by feral cats introduced in 1815. 
The birds continued to breed on a rocky outcrop just off the shore of the island. A program conducted between 2002 and 2004 eradicated the feral cats and a few birds have returned to nest on the island. The other three species are classified by the International Union for Conservation of Nature as being of Least Concern. The populations of all three are large, with that of the magnificent frigatebird thought to be increasing, while the great and lesser frigatebird decreasing. Monitoring populations of all species is difficult due to their movements across the open ocean and low reproductivity. The status of the Atlantic populations of the great and lesser frigatebirds are unknown and possibly extinct. As frigatebirds rely on large marine predators such as tuna for their prey, overfishing threatens to significantly impact on food availability and jeopardise whole populations. As frigatebirds nest in large dense colonies in small areas, they are vulnerable to local disasters that could wipe out the rare species or significantly impact the widespread ones. Hunting In Nauru, catching frigatebirds was an important tradition still practised to some degree. Donald W. Buden writes: "Birds typically are captured by slinging the weighted end of a coil of line in front of an approaching bird attracted to previously captured birds used as decoys. In a successful toss, the line becomes entangled about the bird's wing and bringing [sic] it to ground." Marine birds including frigatebirds were once harvested for food on Christmas Island but this practice ceased in the late 1970s. Eggs and young of magnificent frigatebirds were taken and eaten in the Caribbean. Great frigatebirds were eaten in the Hawaiian Islands and their feathers used for decoration. Cultural significance The frigate bird appears on the national Flag of Kiribati. The design is based on its former colonial Gilbert and Ellice Islands coat of arms. The bird also appears on the flag of Barbuda, and is the national bird of Antigua and Barbuda. There are anecdotal reports of tame frigatebirds being kept across Polynesia and Micronesia in the Pacific. A bird that had come from one island and had been taken elsewhere could be reliably trusted to return to its original home, hence would be used as a speedy way to relay a message there. There is evidence of this practice taking place in the Gilbert Islands and Tuvalu. The great frigatebird was venerated by the Rapa Nui people on Easter Island; carvings of the birdman Tangata manu depict him with the characteristic hooked beak and throat pouch. Its incorporation into local ceremonies suggests that the now-vanished species was extant there between the 1800s and 1860s. Maritime folklore around the time of European contact with the Americas held that frigatebirds were birds of good omen as their presence meant land was near.
Biology and health sciences
Pelecanimorphae
null
183193
https://en.wikipedia.org/wiki/Front-side%20bus
Front-side bus
The front-side bus (FSB) is a computer communication interface (bus) that was often used in Intel-chip-based computers during the 1990s and 2000s. The EV6 bus served the same function for competing AMD CPUs. Both typically carry data between the central processing unit (CPU) and a memory controller hub, known as the northbridge. Depending on the implementation, some computers may also have a back-side bus that connects the CPU to the cache. This bus and the cache connected to it are faster than accessing the system memory (or RAM) via the front-side bus. The speed of the front-side bus is often used as an important measure of the performance of a computer. The original front-side bus architecture was replaced by HyperTransport, Intel QuickPath Interconnect, and Direct Media Interface, followed by Intel Ultra Path Interconnect and AMD's Infinity Fabric. History The term came into use by Intel Corporation about the time the Pentium Pro and Pentium II products were announced, in the 1990s. "Front side" refers to the external interface from the processor to the rest of the computer system, as opposed to the back side, where the back-side bus connects the cache (and potentially other CPUs). Front-side buses are mostly used on PC-related motherboards (including personal computers and servers); they are seldom used in embedded systems or similar small computers. The FSB design was a performance improvement over the single system bus designs of the previous decades, but these front-side buses are sometimes referred to as the "system bus". Front-side buses usually connect the CPU and the rest of the hardware via a chipset, which Intel implemented as a northbridge and a southbridge. Other buses like the Peripheral Component Interconnect (PCI), Accelerated Graphics Port (AGP), and memory buses all connect to the chipset in order for data to flow between the connected devices. These secondary system buses usually run at speeds derived from the front-side bus clock, but are not necessarily synchronized to it. In response to AMD's Torrenza initiative, Intel opened its FSB CPU socket to third-party devices. Prior to this announcement, made in Spring 2007 at the Intel Developer Forum in Beijing, Intel had very closely guarded who had access to the FSB, only allowing Intel processors in the CPU socket. The first example was field-programmable gate array (FPGA) co-processors, a result of collaboration between Intel-Xilinx-Nallatech and Intel-Altera-XtremeData (which shipped in 2008). Related component speeds CPU In many systems, the frequency at which a processor (CPU) operates is determined by applying a clock multiplier to the front-side bus (FSB) speed. For example, a processor running at 3200 MHz might be using a 400 MHz FSB. This means there is an internal clock multiplier setting (also called bus/core ratio) of 8. That is, the CPU is set to run at 8 times the frequency of the front-side bus: 400 MHz × 8 = 3200 MHz. Different CPU speeds are achieved by varying either the FSB frequency or the CPU multiplier; changing either is referred to as overclocking or underclocking. Memory Setting an FSB speed is related directly to the speed grade of memory a system must use. The memory bus connects the northbridge and RAM, just as the front-side bus connects the CPU and northbridge. Often, these two buses must operate at the same frequency. Increasing the front-side bus to 450 MHz in most cases also means running the memory at 450 MHz. In newer systems, it is possible to see memory ratios of "4:5" and the like.
The memory will run 5/4 times as fast as the FSB in this situation, meaning a 400 MHz bus can run with the memory at 500 MHz. This is often referred to as an 'asynchronous' system. Due to differences in CPU and system architecture, overall system performance can vary in unexpected ways with different FSB-to-memory ratios. In image, audio, video, gaming, FPGA synthesis and scientific applications that perform a small amount of work on each element of a large data set, FSB speed becomes a major performance issue. A slow FSB will cause the CPU to spend significant amounts of time waiting for data to arrive from system memory. However, if the computations involving each element are more complex, the processor will spend longer performing these; therefore, the FSB will be able to keep pace because the rate at which the memory is accessed is reduced. Peripheral buses Similar to the memory bus, the PCI and AGP buses can also be run asynchronously from the front-side bus. In older systems, these buses operated at a set fraction of the front-side bus frequency. This fraction was set by the BIOS. In newer systems, the PCI, AGP, and PCI Express peripheral buses often receive their own clock signals, which eliminates their dependence on the front-side bus for timing. Overclocking Overclocking is the practice of making computer components operate beyond their stock performance levels by manipulating the frequencies at which the component is set to run, and, when necessary, modifying the voltage sent to the component to allow it to operate at these higher frequencies with more stability. Many motherboards allow the user to manually set the clock multiplier and FSB settings by changing jumpers or BIOS settings. Almost all CPU manufacturers now "lock" a preset multiplier setting into the chip. It is possible to unlock some locked CPUs; for instance, some AMD Athlon processors can be unlocked by connecting electrical contacts across points on the CPU's surface. Some other processors from AMD and Intel are unlocked from the factory and are labeled "enthusiast-grade" processors by end users and retailers because of this feature. For all processors, increasing the FSB speed boosts processing speed by reducing latency between the CPU and the northbridge. This practice pushes components beyond their specifications and may cause erratic behavior, overheating or premature failure. Even if the computer appears to run normally, problems may appear under a heavy load. Most PCs purchased from retailers or manufacturers, such as Hewlett-Packard or Dell, do not allow the user to change the multiplier or FSB settings due to the probability of erratic behavior or failure. Motherboards purchased separately to build a custom machine are more likely to allow the user to edit the multiplier and FSB settings in the PC's BIOS. Evolution The front-side bus had the advantage of high flexibility and low cost when it was first designed. Simple symmetric multiprocessors placed a number of CPUs on a shared FSB, though performance could not scale linearly due to bandwidth bottlenecks. The front-side bus was used in all Intel Atom, Celeron, Pentium, Core 2, and Xeon processor models through about 2008 and was eliminated in 2009. Originally, this bus was a central connecting point for all system devices and the CPU. The potential of a faster CPU is wasted if it cannot fetch instructions and data as quickly as it can execute them.
The CPU may spend significant time idle while waiting to read or write data in main memory, and high-performance processors therefore require high bandwidth and low latency access to memory. The front-side bus was criticized by AMD as being an old and slow technology that limits system performance. More modern designs use point-to-point and serial connections like AMD's HyperTransport and Intel's DMI 2.0 or QuickPath Interconnect (QPI). These implementations remove the traditional northbridge in favor of a direct link from the CPU to the system memory, high-speed peripherals, and the Platform Controller Hub, southbridge or I/O controller. In a traditional architecture, the front-side bus served as the immediate data link between the CPU and all other devices in the system, including main memory. In HyperTransport- and QPI-based systems, system memory is accessed independently by means of a memory controller integrated into the CPU, leaving the bandwidth on the HyperTransport or QPI link for other uses. This increases the complexity of the CPU design but offers greater throughput as well as superior scaling in multiprocessor systems. Transfer rates The bandwidth or maximum theoretical throughput of the front-side bus is determined by the product of the width of its data path, its clock frequency (cycles per second) and the number of data transfers it performs per clock cycle. For example, a 64-bit (8-byte) wide FSB operating at a frequency of 100 MHz that performs 4 transfers per cycle has a bandwidth of 3200 megabytes per second (MB/s): 8 bytes/transfer × 100 MHz × 4 transfers/cycle = 3200 MB/s The number of transfers per clock cycle depends on the technology used. For example, GTL+ performs 1 transfer/cycle, EV6 2 transfers/cycle, and AGTL+ 4 transfers/cycle. Intel calls the technique of four transfers per cycle Quad Pumping. Many manufacturers publish the frequency of the front-side bus in MHz, but marketing materials often list the theoretical effective signaling rate (which is commonly called megatransfers per second or MT/s). For example, if a motherboard (or processor) has its bus set at 200 MHz and performs 4 transfers per clock cycle, the FSB is rated at 800 MT/s. The specifications of several generations of popular processors are indicated below. Intel processors AMD processors
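The relationships described above (CPU clock from the FSB and multiplier, an asynchronous memory ratio, and peak FSB bandwidth) amount to simple arithmetic. The following Python snippet is a minimal illustration using the example figures from the article; the helper function names are ours, not part of any real API.

def cpu_clock_mhz(fsb_mhz, multiplier):
    # CPU clock = FSB frequency x bus/core ratio
    return fsb_mhz * multiplier

def memory_clock_mhz(fsb_mhz, fsb_part, mem_part):
    # An "asynchronous" FSB-to-memory ratio such as 4:5
    return fsb_mhz * mem_part / fsb_part

def fsb_bandwidth_mb_s(width_bytes, fsb_mhz, transfers_per_cycle):
    # Peak throughput = bus width x clock x transfers per cycle
    return width_bytes * fsb_mhz * transfers_per_cycle

print(cpu_clock_mhz(400, 8))             # 3200 MHz, as in the CPU example above
print(memory_clock_mhz(400, 4, 5))       # 500.0 MHz memory clock for a 4:5 ratio
print(fsb_bandwidth_mb_s(8, 100, 4))     # 3200 MB/s for a quad-pumped 64-bit bus
print(fsb_bandwidth_mb_s(8, 200, 4))     # 6400 MB/s; marketed as 800 MT/s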
Technology
Computer hardware
null
183243
https://en.wikipedia.org/wiki/Alluvium
Alluvium
Alluvium (, ) is loose clay, silt, sand, or gravel that has been deposited by running water in a stream bed, on a floodplain, in an alluvial fan or beach, or in similar settings. Alluvium is also sometimes called alluvial deposit. Alluvium is typically geologically young and is not consolidated into solid rock. Sediments deposited underwater, in seas, estuaries, lakes, or ponds, are not described as alluvium. Floodplain alluvium can be highly fertile, and supported some of the earliest human civilizations. Definitions The present consensus is that "alluvium" refers to loose sediments of all types deposited by running water in floodplains or in alluvial fans or related landforms. However, the meaning of the term has varied considerably since it was first defined in the French dictionary of Antoine Furetière, posthumously published in 1690. Drawing upon concepts from Roman law, Furetière defined alluvion (the French term for alluvium) as new land formed by deposition of sediments along rivers and seas. By the 19th century, the term had come to mean recent sediments deposited by rivers on top of older diluvium, which was similar in character but interpreted as sediments deposited by Noah's flood. With the rejection by geologists of the concept of a primordial universal flood, the term "diluvium" fell into disfavor and was replaced with "older alluvium". At the same time, the term "alluvium" came to mean all sediment deposits due to running water on plains. The definition gradually expanded to include deposits in estuaries, coasts, and young rock of marine and fluvial origin. Alluvium and diluvium were grouped as colluvium in the late 19th century. "Colluvium" is now generally understood as sediments produced by gravity-driven transport on steep slopes. At the same time, the definition of "alluvium" has switched back to an emphasis on sediments deposited by river action. There continues to be disagreement over what other sediment deposits should be included under the term "alluvium". Age Most alluvium is Quaternary in age and is often referred to as "cover" because these sediments obscure the underlying bedrock. Most sedimentary material that fills a basin ("basin fill") that is not lithified is typically lumped together as "alluvial". Alluvium of Pliocene age occurs, for example, in parts of Idaho. Alluvium of late Miocene age occurs, for example, in the valley of the San Joaquin River, California.
Physical sciences
Sedimentology
Earth science
183256
https://en.wikipedia.org/wiki/Nuclear%20isomer
Nuclear isomer
A nuclear isomer is a metastable state of an atomic nucleus, in which one or more nucleons (protons or neutrons) occupy excited state levels (higher energy levels). "Metastable" describes nuclei whose excited states have half-lives 100 to 1000 times longer than the half-lives of the excited nuclear states that decay with a "prompt" half-life (ordinarily on the order of 10^-12 seconds). The term "metastable" is usually restricted to isomers with half-lives of 10^-9 seconds or longer. Some references recommend 5 × 10^-9 seconds to distinguish the metastable half-life from the normal "prompt" gamma-emission half-life. Occasionally the half-lives are far longer than this and can last minutes, hours, or years. For example, the nuclear isomer tantalum-180m (180mTa) survives so long (at least 10^15 years) that it has never been observed to decay spontaneously. The half-life of a nuclear isomer can even exceed that of the ground state of the same nuclide, as shown by 180mTa and several other nuclides, including multiple holmium isomers. Sometimes, the gamma decay from a metastable state is referred to as isomeric transition, but this process typically resembles shorter-lived gamma decays in all external aspects with the exception of the long-lived nature of the metastable parent nuclear isomer. The longer lives of nuclear isomers' metastable states are often due to the larger degree of nuclear spin change which must be involved in their gamma emission to reach the ground state. This high spin change causes these decays to be forbidden transitions and delayed. Delays in emission are caused by low or high available decay energy. The first nuclear isomer and decay-daughter system (uranium X2/uranium Z, now known as 234mPa/234Pa) was discovered by Otto Hahn in 1921. Nuclei of nuclear isomers The nucleus of a nuclear isomer occupies a higher energy state than the non-excited nucleus existing in the ground state. In an excited state, one or more of the protons or neutrons in a nucleus occupy a nuclear orbital of higher energy than the lowest available nuclear orbital. These states are analogous to excited states of electrons in atoms. When excited atomic states decay, energy is released by fluorescence. In electronic transitions, this process usually involves emission of light near the visible range. The amount of energy released is related to bond-dissociation energy or ionization energy and is usually in the range of a few to a few tens of eV per bond. However, a much stronger type of binding energy, the nuclear binding energy, is involved in nuclear processes. Due to this, most nuclear excited states decay by gamma ray emission. For example, a well-known nuclear isomer used in various medical procedures is technetium-99m (99mTc), which decays with a half-life of about 6 hours by emitting a gamma ray of 140 keV of energy; this is close to the energy of medical diagnostic X-rays. Nuclear isomers have long half-lives because their gamma decay is "forbidden" owing to the large change in nuclear spin needed to emit a gamma ray. For example, 180mTa has a spin of 9 and must gamma-decay to 180Ta with a spin of 1. Similarly, 99mTc has a spin of 1/2 and must gamma-decay to 99Tc with a spin of 9/2. While most metastable isomers decay through gamma-ray emission, they can also decay through internal conversion. During internal conversion, the energy of nuclear de-excitation is not emitted as a gamma ray, but is instead used to accelerate one of the inner electrons of the atom. These excited electrons then leave at a high speed.
This occurs because inner atomic electrons penetrate the nucleus where they are subject to the intense electric fields created when the protons of the nucleus re-arrange in a different way. In nuclei that are far from stability in energy, even more decay modes are known. After fission, several of the fission fragments that may be produced have a metastable isomeric state. These fragments are usually produced in a highly excited state, in terms of energy and angular momentum, and go through a prompt de-excitation. At the end of this process, the nuclei can populate both the ground and the isomeric states. If the half-life of the isomers is long enough, it is possible to measure their production rate and compare it to that of the ground state, calculating the so-called isomeric yield ratio. Metastable isomers Metastable isomers can be produced through nuclear fusion or other nuclear reactions. A nucleus produced this way generally starts its existence in an excited state that relaxes through the emission of one or more gamma rays or conversion electrons. Sometimes the de-excitation does not proceed rapidly all the way to the nuclear ground state. This usually occurs as a spin isomer when the formation of an intermediate excited state has a spin far different from that of the ground state. Gamma-ray emission is hindered if the spin of the post-emission state differs greatly from that of the emitting state, especially if the excitation energy is low. The excited state in this situation is a good candidate to be metastable if there are no other states of intermediate spin with excitation energies less than that of the metastable state. Metastable isomers of a particular isotope are usually designated with an "m". This designation is placed after the mass number of the atom; for example, cobalt-58m1 is abbreviated 58m1Co, where Co is the symbol for cobalt (atomic number 27). For isotopes with more than one metastable isomer, "indices" are placed after the designation, and the labeling becomes m1, m2, m3, and so on. Increasing indices, m1, m2, etc., correlate with increasing levels of excitation energy stored in each of the isomeric states (e.g., hafnium-178m2, written 178m2Hf). A different kind of metastable nuclear state (isomer) is the fission isomer or shape isomer. Most actinide nuclei in their ground states are not spherical, but rather prolate spheroidal, with an axis of symmetry longer than the other axes, similar to an American football or rugby ball. This geometry can result in quantum-mechanical states where the distribution of protons and neutrons is so much further from spherical geometry that de-excitation to the nuclear ground state is strongly hindered. In general, these states either de-excite to the ground state far more slowly than a "usual" excited state, or they undergo spontaneous fission with half-lives of the order of nanoseconds or microseconds—a very short time, but many orders of magnitude longer than the half-life of a more usual nuclear excited state. Fission isomers may be denoted with a postscript or superscript "f" rather than "m", so that a fission isomer, e.g. of plutonium-240, can be denoted as plutonium-240f or 240fPu. Nearly stable isomers Most nuclear excited states are very unstable and "immediately" radiate away the extra energy after existing on the order of 10^-12 seconds. As a result, the characterization "nuclear isomer" is usually applied only to configurations with half-lives of 10^-9 seconds or longer.
Quantum mechanics predicts that certain atomic species should possess isomers with unusually long lifetimes even by this stricter standard and have interesting properties. Some nuclear isomers are so long-lived that they are relatively stable and can be produced and observed in quantity. The most stable nuclear isomer occurring in nature is tantalum-180m (180mTa), which is present in all tantalum samples at about 1 part in 8,300. Its half-life is at least 10^15 years, markedly longer than the age of the universe. The low excitation energy of the isomeric state causes both gamma de-excitation to the ground state (which itself is radioactive by beta decay, with a half-life of only 8 hours) and direct electron capture to hafnium or beta decay to tungsten to be suppressed due to spin mismatches. The origin of this isomer is mysterious, though it is believed to have been formed in supernovae (as are most other heavy elements). Were it to relax to its ground state, it would release a photon with an energy of 75 keV. It was first reported in 1988 by C. B. Collins that 180mTa can theoretically be forced to release its energy by weaker X-rays, although at that time this de-excitation mechanism had never been observed. However, the de-excitation of 180mTa by resonant photo-excitation of intermediate high levels of this nucleus (E ≈ 1 MeV) was observed in 1999 by Belic and co-workers in the Stuttgart nuclear physics group. Hafnium-178m2 (178m2Hf) is another reasonably stable nuclear isomer. It possesses a half-life of 31 years and the highest excitation energy of any comparably long-lived isomer. One gram of pure 178m2Hf contains approximately 1.33 gigajoules of energy, the equivalent of exploding roughly 300 kilograms of TNT. In the natural decay of 178m2Hf, the energy is released as gamma rays with a total energy of 2.45 MeV. As with 180mTa, there are disputed reports that 178m2Hf can be stimulated into releasing its energy. Due to this, the substance is being studied as a possible source for gamma-ray lasers. These reports indicate that the energy is released very quickly, so that 178m2Hf can produce extremely high powers (on the order of exawatts). Other isomers have also been investigated as possible media for gamma-ray stimulated emission. Holmium's nuclear isomer 166m1Ho has a half-life of 1,200 years, which is nearly the longest half-life of any holmium radionuclide. Only holmium-163, with a half-life of 4,570 years, is more stable. Thorium-229 (229Th) has a remarkably low-lying metastable isomer only about 8 eV above the ground state. This low energy produces "gamma rays" at a wavelength of about 148 nm, in the far ultraviolet, which allows for direct nuclear laser spectroscopy. Such ultra-precise spectroscopy, however, could not begin without a sufficiently precise initial estimate of the wavelength, something that was only achieved in 2024 after two decades of effort. The energy is so low that the ionization state of the atom affects its half-life. Neutral 229mTh decays by internal conversion with a half-life of only a few microseconds, but because the isomeric energy is less than thorium's second ionization energy, this channel is closed in thorium cations, which instead decay by gamma emission with a far longer half-life. This conveniently moderate lifetime allows the development of a nuclear clock of unprecedented accuracy. High-spin suppression of decay The most common mechanism for suppression of gamma decay of excited nuclei, and thus the existence of a metastable isomer, is the lack of a decay route for the excited state that will change nuclear angular momentum along any given direction by the most common amount of 1 quantum unit ħ of spin angular momentum. 
This change is necessary to emit a gamma photon, which has a spin of 1 unit in this system. Integral changes of 2 or more units of angular momentum are possible, but the emitted photons must then carry off the additional angular momentum. Changes of more than 1 unit are known as forbidden transitions. Each additional unit of spin change beyond 1 that the emitted gamma ray must carry inhibits the decay rate by about 5 orders of magnitude. The highest known spin change of 8 units occurs in the decay of 180mTa, which suppresses its decay by a factor of 10^35 relative to that associated with 1 unit. Instead of a natural gamma-decay half-life of 10^-12 seconds, it has a half-life of more than 10^23 seconds, or at least 3 × 10^15 years, and thus has yet to be observed to decay. Gamma emission is impossible when the nucleus begins in a zero-spin state, as such an emission would not conserve angular momentum. Applications Hafnium isomers (mainly 178m2Hf) have been considered as weapons that could be used to circumvent the Nuclear Non-Proliferation Treaty, since it is claimed that they can be induced to emit very strong gamma radiation. This claim is generally discounted. DARPA had a program to investigate this use of nuclear isomers. The potential to trigger an abrupt release of energy from nuclear isotopes, a prerequisite to their use in such weapons, is disputed. Nonetheless, a 12-member Hafnium Isomer Production Panel (HIPP) was created in 2003 to assess means of mass-producing the isotope. Technetium isomers 99mTc (with a half-life of 6.01 hours) and 95mTc (with a half-life of 61 days) are used in medical and industrial applications. Nuclear batteries Nuclear batteries use small amounts (milligrams and microcuries) of radioisotopes with high energy densities. In one betavoltaic device design, radioactive material sits atop a device with adjacent layers of P-type and N-type silicon. Ionizing radiation directly penetrates the junction and creates electron–hole pairs. Nuclear isomers could replace other isotopes, and with further development, it may be possible to turn them on and off by triggering decay as needed. Current candidates for such use include 108Ag, 166Ho, 177Lu, and 242Am. As of 2004, the only successfully triggered isomer was 180mTa, which required more photon energy to trigger than was released. An isotope such as 177Lu releases gamma rays by decay through a series of internal energy levels within the nucleus, and it is thought that by learning the triggering cross sections with sufficient accuracy, it may be possible to create energy stores that are 10^6 times more concentrated than high explosive or other traditional chemical energy storage. Decay processes An isomeric transition or internal transition (IT) is the decay of a nuclear isomer to a lower-energy nuclear state. The actual process has two modes: γ (gamma-ray) emission, in which a high-energy photon is emitted, and internal conversion, in which the energy is used to eject one of the atom's electrons. Isomers may decay into other elements, though the rate of decay may differ between isomers. For example, 177mLu can beta-decay to 177Hf with a half-life of 160.4 d, or it can undergo isomeric transition to 177Lu with a half-life of 160.4 d, which then beta-decays to 177Hf with a half-life of 6.68 d. The emission of a gamma ray from an excited nuclear state allows the nucleus to lose energy and reach a lower-energy state, sometimes its ground state. 
In certain cases, the excited nuclear state following a nuclear reaction or other type of radioactive decay can become a metastable nuclear excited state. Some nuclei are able to stay in this metastable excited state for minutes, hours, days, or occasionally far longer. The process of isomeric transition is similar to gamma emission from any excited nuclear state, but differs by involving excited metastable states of nuclei with longer half-lives. As with other excited states, the nucleus can be left in an isomeric state following the emission of an alpha particle, beta particle, or some other type of particle. The gamma ray may transfer its energy directly to one of the most tightly bound electrons, causing that electron to be ejected from the atom, a process termed the photoelectric effect. This should not be confused with the internal conversion process, in which no gamma-ray photon is produced as an intermediate particle.
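As a rough illustration of the figures quoted above (a "prompt" gamma half-life of about 10^-12 seconds and roughly five orders of magnitude of suppression per unit of spin change beyond the first), the following minimal Python sketch reproduces the order-of-magnitude estimate for 180mTa. It is a back-of-the-envelope rule of thumb, not a nuclear-structure calculation.

```python
# Back-of-the-envelope estimate of how spin change stretches a gamma half-life.
# Assumes the ~5 orders-of-magnitude suppression per extra unit of spin change
# quoted in the text; this is a rule of thumb, not a nuclear-structure model.

PROMPT_HALF_LIFE_S = 1e-12          # typical "prompt" gamma half-life (seconds)
SUPPRESSION_PER_EXTRA_UNIT = 1e5    # ~5 orders of magnitude per unit beyond 1

def estimated_half_life(spin_change: int) -> float:
    """Estimated gamma half-life for a transition with the given spin change."""
    extra_units = max(spin_change - 1, 0)
    return PROMPT_HALF_LIFE_S * SUPPRESSION_PER_EXTRA_UNIT ** extra_units

SECONDS_PER_YEAR = 3.156e7

for delta_j in (1, 2, 5, 8):        # 8 corresponds to the 180mTa case
    t_half = estimated_half_life(delta_j)
    print(f"spin change {delta_j}: ~{t_half:.1e} s (~{t_half / SECONDS_PER_YEAR:.1e} yr)")
```

For a spin change of 8 the sketch returns roughly 10^23 seconds, consistent with the figure of at least 3 × 10^15 years quoted for 180mTa.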
Physical sciences
Nuclear physics
Physics
183290
https://en.wikipedia.org/wiki/Life%20extension
Life extension
Life extension is the concept of extending the human lifespan, either modestly through improvements in medicine or dramatically by increasing the maximum lifespan beyond its generally-settled biological limit of around 125 years. Several researchers in the area, along with "life extensionists", "immortalists", or "longevists" (those who wish to achieve longer lives themselves), postulate that future breakthroughs in tissue rejuvenation, stem cells, regenerative medicine, molecular repair, gene therapy, pharmaceuticals, and organ replacement (such as with artificial organs or xenotransplantations) will eventually enable humans to have indefinite lifespans through complete rejuvenation to a healthy youthful condition (agerasia). The ethical ramifications, if life extension becomes a possibility, are debated by bioethicists. The sale of purported anti-aging products such as supplements and hormone replacement is a lucrative global industry. For example, the industry that promotes the use of hormones as a treatment for consumers to slow or reverse the aging process in the US market generated about $50 billion of revenue a year in 2009. The use of such hormone products has not been proven to be effective or safe. Average life expectancy and lifespan During the process of aging, an organism accumulates damage to its macromolecules, cells, tissues, and organs. Specifically, aging is characterized as and thought to be caused by "genomic instability, telomere attrition, epigenetic alterations, loss of proteostasis, deregulated nutrient sensing, mitochondrial dysfunction, cellular senescence, stem cell exhaustion, and altered intercellular communication." Oxidation damage to cellular contents caused by free radicals is believed to contribute to aging as well. The longest documented human lifespan is 122 years 164 days, the case of Jeanne Calment, who according to records was born in 1875 and died in 1997, whereas the maximum lifespan of a wildtype mouse, commonly used as a model in research on aging, is about three years. Genetic differences between humans and mice that may account for these different aging rates include differences in efficiency of DNA repair, antioxidant defenses, energy metabolism, proteostasis maintenance, and recycling mechanisms such as autophagy. The average life expectancy in a population is lowered by infant and child mortality, which are frequently linked to infectious diseases or nutrition problems. Later in life, vulnerability to accidents and age-related chronic disease such as cancer or cardiovascular disease play an increasing role in mortality. Extension of life expectancy and lifespan can often be achieved by access to improved medical care, vaccinations, good diet, exercise, and avoidance of hazards such as smoking. Maximum lifespan is determined by the rate of aging for a species inherent in its genes and by environmental factors. Widely recognized methods of extending maximum lifespan in model organisms such as nematodes, fruit flies, and mice include caloric restriction, gene manipulation, and administration of pharmaceuticals. Another technique uses evolutionary pressures such as breeding from only older members or altering levels of extrinsic mortality. Some animals such as hydra, planarian flatworms, and certain sponges, corals, and jellyfish do not die of old age and exhibit potential immortality. 
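The distinction drawn above between average life expectancy (pulled down by early-life mortality) and maximum lifespan (a species-level ceiling) can be illustrated with a toy simulation. The sketch below is purely illustrative and uses invented numbers (a synthetic adult age-at-death distribution and a hard 122-year ceiling taken from the Calment record); it is not a demographic model from the literature.

```python
# Toy illustration (not from the article): removing early-life mortality raises
# the *average* lifespan of a population while leaving the *maximum* untouched.
import random

random.seed(1)

def simulate(infant_mortality: float, n: int = 100_000) -> tuple[float, float]:
    """Return (average, maximum) age at death for a crude synthetic population."""
    ages = []
    for _ in range(n):
        if random.random() < infant_mortality:
            ages.append(random.uniform(0, 5))        # early-life death
        else:
            ages.append(random.gauss(78, 10))        # adult death, values capped below
    ages = [min(max(a, 0.0), 122.0) for a in ages]   # hard biological ceiling
    return sum(ages) / len(ages), max(ages)

for im in (0.30, 0.05, 0.0):
    avg, mx = simulate(im)
    print(f"infant mortality {im:.0%}: average ≈ {avg:.1f} yr, maximum ≈ {mx:.1f} yr")
```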
History The extension of life has been a desire of humanity and a mainstay motif in the history of scientific pursuits and ideas throughout history, from the Sumerian Epic of Gilgamesh and the Egyptian Smith medical papyrus, all the way through the Taoists, Ayurveda practitioners, alchemists, hygienists such as Luigi Cornaro, Johann Cohausen and Christoph Wilhelm Hufeland, and philosophers such as Francis Bacon, René Descartes, Benjamin Franklin and Nicolas Condorcet. However, the beginning of the modern period in this endeavor can be traced to the end of the 19th – beginning of the 20th century, to the so-called "fin-de-siècle" (end of the century) period, denoted as an "end of an epoch" and characterized by the rise of scientific optimism and therapeutic activism, entailing the pursuit of life extension (or life-extensionism). Among the foremost researchers of life extension at this period were the Nobel Prize winning biologist Elie Metchnikoff (1845-1916) -- the author of the cell theory of immunity and vice director of Institut Pasteur in Paris, and Charles-Édouard Brown-Séquard (1817-1894) -- the president of the French Biological Society and one of the founders of modern endocrinology. Sociologist James Hughes claims that science has been tied to a cultural narrative of conquering death since the Age of Enlightenment. He cites Francis Bacon (1561–1626) as an advocate of using science and reason to extend human life, noting Bacon's novel New Atlantis, wherein scientists worked toward delaying aging and prolonging life. Robert Boyle (1627–1691), founding member of the Royal Society, also hoped that science would make substantial progress with life extension, according to Hughes, and proposed such experiments as "to replace the blood of the old with the blood of the young". Biologist Alexis Carrel (1873–1944) was inspired by a belief in indefinite human lifespan that he developed after experimenting with cells, says Hughes. Contemporary Regulatory and legal struggles between the Food and Drug Administration (FDA) and the Life Extension organization included seizure of merchandise and court action. In 1991, Saul Kent and Bill Faloon, the principals of the organization, were jailed for four hours and were released on $850,000 bond each. After 11 years of legal battles, Kent and Faloon convinced the US Attorney's Office to dismiss all criminal indictments brought against them by the FDA. In 2003, Doubleday published "The Immortal Cell: One Scientist's Quest to Solve the Mystery of Human Aging," by Michael D. West. West emphasised the potential role of embryonic stem cells in life extension. Other modern life extensionists include writer Gennady Stolyarov, who insists that death is "the enemy of us all, to be fought with medicine, science, and technology"; transhumanist philosopher Zoltan Istvan, who proposes that the "transhumanist must safeguard one's own existence above all else"; futurist George Dvorsky, who considers aging to be a problem that desperately needs to be solved; and recording artist Steve Aoki, who has been called "one of the most prolific campaigners for life extension". Scientific research In 1991, the American Academy of Anti-Aging Medicine (A4M) was formed. The American Board of Medical Specialties recognizes neither anti-aging medicine nor the A4M's professional standing. In 2003, Aubrey de Grey and David Gobel formed the Methuselah Foundation, which gives financial grants to anti-aging research projects. 
In 2009, de Grey and several others founded the SENS Research Foundation, a California-based scientific research organization which conducts research into aging and funds other anti-aging research projects at various universities. In 2013, Google announced Calico, a new company based in San Francisco that will harness new technologies to increase scientific understanding of the biology of aging. It is led by Arthur D. Levinson, and its research team includes scientists such as Hal V. Barron, David Botstein, and Cynthia Kenyon. In 2014, biologist Craig Venter founded Human Longevity Inc., a company dedicated to scientific research to end aging through genomics and cell therapy. They received funding with the goal of compiling a comprehensive human genotype, microbiome, and phenotype database. Aside from private initiatives, aging research is being conducted in university laboratories, and includes universities such as Harvard and UCLA. University researchers have made a number of breakthroughs in extending the lives of mice and insects by reversing certain aspects of aging. Research Theoretically, extension of maximum lifespan in humans could be achieved by reducing the rate of aging damage by periodic replacement of damaged tissues, molecular repair or rejuvenation of deteriorated cells and tissues, reversal of harmful epigenetic changes, or the enhancement of enzyme telomerase activity. Research geared towards life extension strategies in various organisms is currently under way at a number of academic and private institutions. Since 2009, investigators have found ways to increase the lifespan of nematode worms and yeast by 10-fold; the record in nematodes was achieved through genetic engineering and the extension in yeast by a combination of genetic engineering and caloric restriction. A 2009 review of longevity research noted: "Extrapolation from worms to mammals is risky at best, and it cannot be assumed that interventions will result in comparable life extension factors. Longevity gains from dietary restriction, or from mutations studied previously, yield smaller benefits to Drosophila than to nematodes, and smaller still to mammals. This is not unexpected, since mammals have evolved to live many times the worm's lifespan, and humans live nearly twice as long as the next longest-lived primate. From an evolutionary perspective, mammals and their ancestors have already undergone several hundred million years of natural selection favoring traits that could directly or indirectly favor increased longevity, and may thus have already settled on gene sequences that promote lifespan. Moreover, the very notion of a "life-extension factor" that could apply across taxa presumes a linear response rarely seen in biology." Anti-aging drugs There are numerous chemicals intended to slow the aging process under study in animal models. One type of research is related to the observed effects of a calorie restriction (CR) diet, which has been shown to extend lifespan in some animals. Based on that research, there have been attempts to develop drugs that will have the same effect on the aging process as a CR diet, which are known as caloric restriction mimetic drugs, such as rapamycin and metformin. Sirtuin activating polyphenols, such as resveratrol and pterostilbene, and flavonoids, such as quercetin and fisetin, as well as oleic acid are dietary supplements that have also been studied in this context. 
Other common supplements with less clear biological pathways to target aging include lipoic acid, senolytics, and coenzyme Q10. While agents such as these have some limited laboratory evidence of efficacy in animals, there are no studies to date in humans for drugs that may promote life extension, mainly because research investment remains at a low level, and regulatory standards are high. Aging is not recognized as a preventable condition by governments, indicating there is no clear pathway to approval of anti-aging medications. Further, anti-aging drug candidates are under constant review by regulatory authorities like the US Food and Drug Administration, which stated in 2023 that "no medication has been proven to slow or reverse the aging process." Nanotechnology Future advances in nanomedicine could give rise to life extension through the repair of many processes thought to be responsible for aging. K. Eric Drexler, one of the founders of nanotechnology, postulated cell repair machines, including ones operating within cells and utilizing as yet hypothetical molecular computers, in his 1986 book Engines of Creation. Raymond Kurzweil, a futurist and transhumanist, stated in his book The Singularity Is Near that he believes that advanced medical nanorobotics could completely remedy the effects of aging by 2030. According to Richard Feynman, it was his former graduate student and collaborator Albert Hibbs who originally suggested to him (circa 1959) the idea of a medical use for Feynman's theoretical nanomachines (see biological machine). Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) "swallow the doctor". The idea was incorporated into Feynman's 1959 essay There's Plenty of Room at the Bottom. Cyborgs Replacement of biological (susceptible to diseases) organs with mechanical ones could extend life. This is the goal of the 2045 Initiative. Cryonics Cryonics is the low-temperature freezing (usually at ) of a human corpse, with the hope that resuscitation may be possible in the future. It is regarded with skepticism within the mainstream scientific community and has been characterized as quackery. Strategies for engineered negligible senescence Another proposed life extension technology aims to combine existing and predicted future biochemical and genetic techniques. SENS proposes that rejuvenation may be obtained by removing aging damage via the use of stem cells and tissue engineering, telomere-lengthening machinery, allotopic expression of mitochondrial proteins, targeted ablation of cells, immunotherapeutic clearance, and novel lysosomal hydrolases. While some biogerontologists find these ideas "worthy of discussion", others contend that the alleged benefits are too speculative given the current state of technology, referring to it as "fantasy rather than science". Genetic editing Genome editing, in which nucleic acid polymers are delivered as a drug and are either expressed as proteins, interfere with the expression of proteins, or correct genetic mutations, has been proposed as a future strategy to prevent aging. CRISPR/Cas9 CRISPR/Cas9 edits genes by precisely cutting DNA and then harnessing natural DNA repair processes to modify the gene in the desired manner. The system has two components: the Cas9 enzyme and a guide RNA. A large array of genetic modifications have been found to increase lifespan in model organisms such as yeast, nematode worms, fruit flies, and mice. 
As of 2013, the longest extension of life caused by a single gene manipulation was roughly 50% in mice and 10-fold in nematode worms. In July 2020, scientists, using public biological data on 1.75 million people with known lifespans, identified 10 genomic loci which appear to intrinsically influence healthspan, lifespan, and longevity – half of which had not previously been reported at genome-wide significance, and most of which are associated with cardiovascular disease – and identified haem metabolism as a promising candidate for further research within the field. Their study suggests that high levels of iron in the blood likely reduce, and genes involved in metabolising iron likely increase, healthy years of life in humans. The same month, other scientists reported that yeast cells of the same genetic material and within the same environment age in two distinct ways, described a biomolecular mechanism that can determine which process dominates during aging, and genetically engineered a novel aging route with a substantially extended lifespan. Fooling genes In The Selfish Gene, Richard Dawkins describes an approach to life extension that involves "fooling genes" into thinking the body is young. Dawkins attributes inspiration for this idea to Peter Medawar. The basic idea is that our bodies are composed of genes that activate throughout our lifetimes, some when we are young and others when we are older. Presumably, these genes are activated by environmental factors, and the changes caused by these genes activating can be lethal. It is a statistical certainty that we possess more lethal genes that activate in later life than in early life. Therefore, to extend life, we should be able to prevent these genes from switching on, and we should be able to do so by "identifying changes in the internal chemical environment of a body that take place during aging... and by simulating the superficial chemical properties of a young body". Cloning and body part replacement Some life extensionists suggest that therapeutic cloning and stem cell research could one day provide a way to generate cells, body parts, or even entire bodies (generally referred to as reproductive cloning) that would be genetically identical to a prospective patient. In 2008, the US Department of Defense announced a program to research the possibility of growing human body parts on mice. Complex biological structures, such as mammalian joints and limbs, have not yet been replicated. Dog and primate brain transplantation experiments were conducted in the mid-20th century but failed due to rejection and the inability to restore nerve connections. As of 2006, the implantation of bio-engineered bladders grown from patients' own cells has proven to be a viable treatment for bladder disease. Proponents of body part replacement and cloning contend that the required biotechnologies are likely to appear earlier than other life-extension technologies. The use of human stem cells, particularly embryonic stem cells, is controversial. Opponents' objections generally are based on interpretations of religious teachings or ethical considerations. Proponents of stem cell research point out that cells are routinely formed and destroyed in a variety of contexts. Use of stem cells taken from the umbilical cord or parts of the adult body may not provoke controversy. The controversies over cloning are similar, except general public opinion in most countries stands in opposition to reproductive cloning. 
Some proponents of therapeutic cloning predict the production of whole bodies, lacking consciousness, for eventual brain transplantation. Ethics and politics Scientific controversy Some critics dispute the portrayal of aging as a disease. For example, Leonard Hayflick, who determined that fibroblasts are limited to around 50 cell divisions, reasons that aging is an unavoidable consequence of entropy. Hayflick and fellow biogerontologists Jay Olshansky and Bruce Carnes have strongly criticized the anti-aging industry in response to what they see as unscrupulous profiteering from the sale of unproven anti-aging supplements. Consumer motivations Research by Sobh and Martin (2011) suggests that people buy anti-aging products to obtain a hoped-for self (e.g., keeping a youthful skin) or to avoid a feared-self (e.g., looking old). The research shows that when consumers pursue a hoped-for self, it is expectations of success that most strongly drive their motivation to use the product. The research also shows why doing badly when trying to avoid a feared self is more motivating than doing well. When product use is seen to fail it is more motivating than success when consumers seek to avoid a feared-self. Political parties Though many scientists state that life extension and radical life extension are possible, there are still no international or national programs focused on radical life extension. There are political forces working both for and against life extension. By 2012, in Russia, the United States, Israel, and the Netherlands, the Longevity political parties started. They aimed to provide political support to radical life extension research and technologies, and ensure the fastest possible and at the same time soft transition of society to the next step – life without aging and with radical life extension, and to provide access to such technologies to most currently living people. Silicon Valley Some tech innovators and Silicon Valley entrepreneurs have invested heavily into anti-aging research. This includes Jeff Bezos (founder of Amazon), Larry Ellison (founder of Oracle), Peter Thiel (former PayPal CEO), Larry Page (co-founder of Google), Peter Diamandis, Sam Altman (CEO of OpenAI, invested in Retro Biosciences), and Brian Armstrong (founder of Coinbase and NewLimit), Bryan Johnson (Founder of Kernel). Commentators Leon Kass (chairman of the US President's Council on Bioethics from 2001 to 2005) has questioned whether potential exacerbation of overpopulation problems would make life extension unethical. He states his opposition to life extension with the words: John Harris, former editor-in-chief of the Journal of Medical Ethics, argues that as long as life is worth living, according to the person himself, we have a powerful moral imperative to save the life and thus to develop and offer life extension therapies to those who want them. Transhumanist philosopher Nick Bostrom has argued that any technological advances in life extension must be equitably distributed and not restricted to a privileged few. In an extended metaphor entitled "The Fable of the Dragon-Tyrant", Bostrom envisions death as a monstrous dragon who demands human sacrifices. In the fable, after a lengthy debate between those who believe the dragon is a fact of life and those who believe the dragon can and should be destroyed, the dragon is finally killed. Bostrom argues that political inaction allowed many preventable human deaths to occur. 
Overpopulation concerns Controversy about life extension is due to fear of overpopulation and possible effects on society. Biogerontologist Aubrey De Grey counters the overpopulation critique by pointing out that the therapy could postpone or eliminate menopause, allowing women to space out their pregnancies over more years and thus decreasing the yearly population growth rate. Moreover, the philosopher and futurist Max More argues that, given that the worldwide population growth rate is slowing down and is projected to eventually stabilize and begin falling, superlongevity would be unlikely to contribute to overpopulation. Opinion polls A Spring 2013 Pew Research poll in the United States found that 38% of Americans would want life extension treatments, and 56% would reject it. However, it also found that 68% believed most people would want it and that only 4% consider an "ideal lifespan" to be more than 120 years. The median "ideal lifespan" was 91 years of age and the majority of the public (63%) viewed medical advances aimed at prolonging life as generally good. 41% of Americans believed that radical life extension (RLE) would be good for society, while 51% said they believed it would be bad for society. One possibility for why 56% of Americans claim they would reject life extension treatments may be due to the cultural perception that living longer would result in a longer period of decrepitude, and that the elderly in our current society are unhealthy. Religious people are no more likely to oppose life extension than the unaffiliated, though some variation exists between religious denominations. Aging as a disease Most mainstream medical organizations and practitioners do not consider aging to be a disease. Biologist David Sinclair says: "I don't see aging as a disease, but as a collection of quite predictable diseases caused by the deterioration of the body." The two main arguments used are that aging is both inevitable and universal while diseases are not. However, not everyone agrees. Harry R. Moody, director of academic affairs for AARP, notes that what is normal and what is disease strongly depend on a historical context. David Gems, assistant director of the Institute of Healthy Ageing, argues that aging should be viewed as a disease. In response to the universality of aging, David Gems notes that it is as misleading as arguing that Basenji are not dogs because they do not bark. Because of the universality of aging he calls it a "special sort of disease". Robert M. Perlman, coined the terms "aging syndrome" and "disease complex" in 1954 to describe aging. The discussion whether aging should be viewed as a disease or not has important implications. One view is, this would stimulate pharmaceutical companies to develop life extension therapies and in the United States of America, it would also increase the regulation of the anti-aging market by the Food and Drug Administration (FDA). Anti-aging now falls under the regulations for cosmetic medicine which are less tight than those for drugs. Beliefs and methods Senolytics and prolongevity drugs Senolytics eliminate senescent cells whereas senomorphics – with candidates such as Apigenin, Everolimus and Rapamycin – modulate properties of senescent cells without eliminating them, suppressing phenotypes of senescence, including the SASP. Senomorphic effects may be one major effect mechanism of a range of prolongevity drug candidates. Such candidates are however typically not studied for just one mechanism, but multiple. 
There are biological databases of prolongevity drug candidates under research, as well as of potential gene/protein targets. These are enhanced by longitudinal cohort studies, electronic health records, computational (drug) screening methods, computational biomarker-discovery methods and computational biodata-interpretation/personalized medicine methods. Besides rapamycin and senolytics, the drug-repurposing candidates studied most extensively include metformin, acarbose, spermidine and NAD+ enhancers. Many prolongevity drugs are synthetic alternatives or potential complements to existing nutraceuticals, such as various sirtuin-activating compounds under investigation, like SRT2104. In some cases, pharmaceutical administration is combined with that of nutraceuticals, as in the case of glycine combined with NAC. Studies are often organized around specific prolongevity targets, listing both nutraceuticals and pharmaceuticals (together or separately), such as FOXO3 activators. Researchers are also exploring ways to mitigate side effects from such substances (most notably rapamycin and its derivatives), for example via protocols of intermittent administration, and have called for research that helps determine optimal treatment schedules (including timing) in general. Diets and supplements Vitamins and antioxidants The free-radical theory of aging suggests that antioxidant supplements might extend human life. Reviews, however, have found that use of vitamin A (as β-carotene) and vitamin E supplements may increase mortality. Other reviews have found no relationship between vitamin E or other vitamins and mortality. Vitamin D supplementation at various dosages is being investigated in trials, and there is also research into GlyNAC. Complications Complications of antioxidant supplementation (especially continuous high dosages far above the RDA) include that reactive oxygen species (ROS), which are mitigated by antioxidants, "have been found to be physiologically vital for signal transduction, gene regulation, and redox regulation, among others, implying that their complete elimination would be harmful". In particular, one of several ways they can be detrimental is by inhibiting adaptation to exercise, such as muscle hypertrophy (e.g. during dedicated periods of caloric surplus). There is also research into stimulating, activating or fueling endogenous antioxidant generation, for example with the nutraceutical glycine and the pharmaceutical NAC. Antioxidants can change the oxidation status of different tissues, targets or sites, each with potentially different implications, especially at different concentrations. A review suggests mitochondria have a hormetic response to ROS, whereby low oxidative damage can be beneficial. Dietary restriction As of 2021, there is no clinical evidence that any dietary restriction practice contributes to human longevity. Healthy diet Research suggests that increasing adherence to Mediterranean diet patterns is associated with a reduction in total and cause-specific mortality, extending health- and lifespan. Research is identifying the key beneficial components of the Mediterranean diet. Studies suggest dietary changes are a factor in national relative rises in lifespan. Optimal diet Approaches to develop optimal diets for health- and lifespan (or "longevity diets") include: modifying the Mediterranean diet as the baseline via nutrition science. 
For instance, via a further increase in plant-based foods alongside additional restriction of meat intake (meat reduction is, or can be, typically healthy); keeping alcohol consumption of any type to a minimum, since conventional Mediterranean diets include alcohol (i.e. wine), which is under research due to data suggesting negative long-term brain impacts even at low or moderate consumption levels; and fully replacing refined grains with whole grains, since some Mediterranean diet guidelines include whole grains but do not clarify the principle of consuming them instead of refined grains. Other approaches Further advanced biosciences-based approaches include: Genetic and epigenetic alterations: human genetic enhancement for pro-longevity and protective genes – see genetics of aging. Cellular reprogramming: in vivo reprogramming to complement or augment human regenerative capacity and rejuvenate or replace cells. Epigenetic reprogramming: early-stage research about rejuvenating/repairing the epigenetic machinery. Stem-cell interventions: "Increasing the number and quality of stem cells and activate regenerative signals". Nanomedicine: early-stage research on in vivo pro-longevity nanotechnology. Tissue engineering: engineering of tissues and organs (see also: xenotransplantation and artificial organ). Endogenous circulating biomolecules: blood proteins from young animals have shown some pro-longevity potential in animal studies (e.g. via transfer of blood or plasma, and of plasma proteins). Moreover, exerkines – signalling biomolecules released during or after exercise – have also shown promising results. Exerkines include myokines. Extracellular vesicles were shown to be secreted concomitantly with exerkines and are also under investigation. 
Biology and health sciences
Fields of medicine
Health
183324
https://en.wikipedia.org/wiki/Thermodynamic%20activity
Thermodynamic activity
In thermodynamics, activity (symbol a) is a measure of the "effective concentration" of a species in a mixture, in the sense that the species' chemical potential depends on the activity of a real solution in the same way that it would depend on concentration for an ideal solution. The term "activity" in this sense was coined by the American chemist Gilbert N. Lewis in 1907. By convention, activity is treated as a dimensionless quantity, although its value depends on customary choices of standard state for the species. The activity of pure substances in condensed phases (solids and liquids) is taken as a = 1. Activity depends on temperature, pressure and composition of the mixture, among other things. For gases, the activity is the effective partial pressure, and is usually referred to as fugacity. The difference between activity and other measures of concentration arises because the interactions between different types of molecules in non-ideal gases or solutions are different from interactions between the same types of molecules. The activity of an ion is particularly influenced by its surroundings. Equilibrium constants should be defined by activities but, in practice, are often defined by concentrations instead. The same is often true of equations for reaction rates. However, there are circumstances where the activity and the concentration are significantly different and, as such, it is not valid to approximate with concentrations where activities are required. Two examples serve to illustrate this point: In a solution of potassium hydrogen iodate KH(IO3)2 at 0.02 M, the hydrogen ion activity is 40% lower than the calculated hydrogen ion concentration, resulting in a much higher pH than expected. When a 0.1 M hydrochloric acid solution containing methyl green indicator is added to a 5 M solution of magnesium chloride, the color of the indicator changes from green to yellow, indicating increasing acidity, when in fact the acid has been diluted. Although at low ionic strength (< 0.1 M) the activity coefficient approaches unity, this coefficient can actually increase with ionic strength in a high ionic strength regime. For hydrochloric acid solutions, the minimum is around 0.4 M. Definition The relative activity of a species i, denoted a_i, is defined as a_i = exp((μ_i − μ°_i)/(RT)), where μ_i is the (molar) chemical potential of the species under the conditions of interest, μ°_i is the (molar) chemical potential of that species under some defined set of standard conditions, R is the gas constant, T is the thermodynamic temperature and exp is the exponential function. Alternatively, this equation can be written as μ_i = μ°_i + RT ln a_i. In general, the activity depends on any factor that alters the chemical potential. Such factors may include: concentration, temperature, pressure, interactions between chemical species, electric fields, etc. Depending on the circumstances, some of these factors, in particular concentration and interactions, may be more important than others. The activity depends on the choice of standard state such that changing the standard state will also change the activity. This means that activity is a relative term that describes how "active" a compound is compared to when it is under the standard state conditions. In principle, the choice of standard state is arbitrary; however, it is often chosen out of mathematical or experimental convenience. 
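As a minimal sketch of the defining relation just given (and of its logarithmic form), the following Python fragment converts between chemical potential and relative activity. The numerical values are illustrative assumptions, not data from the article.

```python
# Minimal sketch of the defining relation between activity and chemical potential:
#   a_i  = exp((mu_i - mu_i_standard) / (R*T))   and equivalently
#   mu_i = mu_i_standard + R*T*ln(a_i)
# The numerical values below are illustrative only.
import math

R = 8.314462618  # gas constant, J/(mol*K)

def activity(mu: float, mu_standard: float, T: float) -> float:
    """Relative activity from the chemical potential difference (J/mol)."""
    return math.exp((mu - mu_standard) / (R * T))

def chemical_potential(a: float, mu_standard: float, T: float) -> float:
    """Chemical potential (J/mol) from the relative activity."""
    return mu_standard + R * T * math.log(a)

T = 298.15                       # K
mu_standard = -237_140.0         # J/mol, an assumed standard-state value
mu = mu_standard - 1_000.0       # species sitting 1 kJ/mol below its standard state

a = activity(mu, mu_standard, T)
print(f"a = {a:.4f}")                                    # < 1, as expected
print(f"round-trip mu = {chemical_potential(a, mu_standard, T):.1f} J/mol")
```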
Alternatively, it is also possible to define an "absolute activity" (i.e., the fugacity in statistical mechanics), λ_i, which is written as λ_i = exp(μ_i/(RT)). Note that this definition corresponds to setting as standard state the solution of μ°_i = 0, if the latter exists. Activity coefficient The activity coefficient γ, which is also a dimensionless quantity, relates the activity to a measured mole fraction x_i (or y_i in the gas phase), molality b_i, mass fraction w_i, molar concentration (molarity) c_i or mass concentration ρ_i; for example, a_i = γ_x,i x_i on the mole-fraction scale and a_i = γ_b,i b_i/b° on the molality scale. The division by the standard molality b° (usually 1 mol/kg) or the standard molar concentration c° (usually 1 mol/L) is necessary to ensure that both the activity and the activity coefficient are dimensionless, as is conventional. The activity depends on the chosen standard state and composition scale; for instance, in the dilute limit it approaches the mole fraction, mass fraction, or numerical value of molarity, all of which are different. However, the activity coefficients are similar. When the activity coefficient is close to 1, the substance shows almost ideal behaviour according to Henry's law (but not necessarily in the sense of an ideal solution). In these cases, the activity can be substituted with the appropriate dimensionless measure of composition x_i, b_i/b° or c_i/c°. It is also possible to define an activity coefficient in terms of Raoult's law: the International Union of Pure and Applied Chemistry (IUPAC) recommends the symbol f for this activity coefficient, although this should not be confused with fugacity. Standard states Gases In most laboratory situations, the difference in behaviour between a real gas and an ideal gas is dependent only on the pressure and the temperature, not on the presence of any other gases. At a given temperature, the "effective" pressure of a gas is given by its fugacity f: this may be higher or lower than its mechanical pressure. By historical convention, fugacities have the dimension of pressure, so the dimensionless activity is given by a_i = f_i/p° = φ_i y_i p/p°, where φ_i is the dimensionless fugacity coefficient of the species, y_i is its mole fraction in the gaseous mixture (y = 1 for a pure gas) and p is the total pressure. The value p° is the standard pressure: it may be equal to 1 atm (101.325 kPa) or 1 bar (100 kPa) depending on the source of data, and should always be quoted. Mixtures in general The most convenient way of expressing the composition of a generic mixture is by using the mole fractions x_i (written y_i in the gas phase) of the different components (or chemical species: atoms or molecules) present in the system, where x_i = n_i/n, with n_i the number of moles of the component i, and n the total number of moles of all the different components present in the mixture. The standard state of each component in the mixture is taken to be the pure substance, i.e. the pure substance has an activity of one. When activity coefficients are used, they are usually defined in terms of Raoult's law, a_i = f_i x_i, where f_i is the Raoult's law activity coefficient: an activity coefficient of one indicates ideal behaviour according to Raoult's law. Dilute solutions (non-ionic) A solute in dilute solution usually follows Henry's law rather than Raoult's law, and it is more usual to express the composition of the solution in terms of the molar concentration c (in mol/L) or the molality b (in mol/kg) of the solute rather than in mole fractions. The standard state of a dilute solution is a hypothetical solution of concentration c° = 1 mol/L (or molality b° = 1 mol/kg) which shows ideal behaviour (also referred to as "infinite-dilution" behaviour). 
The standard state, and hence the activity, depends on which measure of composition is used. Molalities are often preferred as the volumes of non-ideal mixtures are not strictly additive and are also temperature-dependent: molalities do not depend on volume, whereas molar concentrations do. The activity of the solute is then given by a_b = γ_b b/b°. Ionic solutions When the solute undergoes ionic dissociation in solution (for example a salt), the system becomes decidedly non-ideal and we need to take the dissociation process into consideration. One can define activities for the cations and anions separately (a+ and a−). In a liquid solution the activity coefficient of a given ion (e.g. Ca2+) isn't measurable because it is experimentally impossible to independently measure the electrochemical potential of an ion in solution. (One cannot add cations without putting in anions at the same time.) Therefore, one introduces the notions of mean ionic activity a±, with a±^ν = (a+)^ν+ · (a−)^ν−; mean ionic molality b±, with b±^ν = (b+)^ν+ · (b−)^ν−; and mean ionic activity coefficient γ±, with γ±^ν = (γ+)^ν+ · (γ−)^ν−, where ν = ν+ + ν− and ν+, ν− represent the stoichiometric coefficients involved in the ionic dissociation process. Even though a+ and a− cannot be determined separately, γ± is a measurable quantity that can also be predicted for sufficiently dilute systems using Debye–Hückel theory. For electrolyte solutions at higher concentrations, Debye–Hückel theory needs to be extended and replaced, e.g., by a Pitzer electrolyte solution model (see external links below for examples). For the activity of a strong ionic solute (complete dissociation) we can write a = a±^ν. Measurement The most direct way of measuring the activity of a volatile species is to measure its equilibrium partial vapor pressure. For water as solvent, the water activity aw is the equilibrated relative humidity. For non-volatile components, such as sucrose or sodium chloride, this approach will not work since they do not have measurable vapor pressures at most temperatures. However, in such cases it is possible to measure the vapor pressure of the solvent instead. Using the Gibbs–Duhem relation it is possible to translate the change in solvent vapor pressures with concentration into activities for the solute. The simplest way of determining how the activity of a component depends on pressure is by measurement of densities of solution, knowing that real solutions have deviations from the additivity of (molar) volumes of pure components compared to the (molar) volume of the solution. This involves the use of partial molar volumes, which measure the change in chemical potential with respect to pressure. Another way to determine the activity of a species is through the manipulation of colligative properties, specifically freezing point depression. Using freezing point depression techniques, it is possible to calculate the activity of a weak acid from a relation between the total equilibrium molality of solute determined by the colligative property measurement (in this case, the freezing point depression), the nominal molality obtained from titration, and the activity of the species. There are also electrochemical methods that allow the determination of activity and its coefficient. The value of the mean ionic activity coefficient of ions in solution can also be estimated with the Debye–Hückel equation, the Davies equation or the Pitzer equations. Single ion activity measurability revisited The prevailing view that single ion activities are unmeasurable, or perhaps even physically meaningless, has its roots in the work of Edward A. Guggenheim in the late 1920s. 
However, chemists have not given up the idea of single ion activities. For example, pH is defined as the negative logarithm of the hydrogen ion activity. By implication, if the prevailing view on the physical meaning and measurability of single ion activities is correct, it relegates pH to the category of thermodynamically unmeasurable quantities. For this reason the International Union of Pure and Applied Chemistry (IUPAC) states that the activity-based definition of pH is a notional definition only and further states that the establishment of primary pH standards requires the application of the concept of a 'primary method of measurement' tied to the Harned cell. Nevertheless, the concept of single ion activities continues to be discussed in the literature, and at least one author purports to define single ion activities in terms of purely thermodynamic quantities. The same author also proposes a method of measuring single ion activity coefficients based on purely thermodynamic processes. A different approach has a similar objective. Use Chemical activities should be used to define chemical potentials, where the chemical potential depends on the temperature T, pressure p and the activity a_i according to the formula μ_i = μ°_i + RT ln a_i, where R is the gas constant and μ°_i is the value of μ_i under standard conditions. Note that the choice of concentration scale affects both the activity and the standard state chemical potential, which is especially important when the reference state is the infinite dilution of a solute in a solvent. Chemical potential has units of joules per mole (J/mol), or energy per amount of matter. Chemical potential can be used to characterize the specific Gibbs free energy changes occurring in chemical reactions or other transformations. Formulae involving activities can be simplified by considering that: For a chemical solution, the solvent has an activity of unity (only a valid approximation for rather dilute solutions). At a low concentration, the activity of a solute can be approximated by the ratio of its concentration to the standard concentration, a_i ≈ c_i/c°; therefore, it is approximately equal to the numerical value of its concentration. For a mixture of gases at low pressure, the activity is equal to the ratio of the partial pressure of the gas to the standard pressure, a_i = p_i/p°; therefore, it is equal to the partial pressure in atmospheres (or bars), compared to a standard pressure of 1 atmosphere (or 1 bar). For a solid body, a uniform, single-species solid has an activity of unity at standard conditions. The same thing holds for a pure liquid. The latter follows from any definition based on Raoult's law, because if we let the solute concentration go to zero, the vapor pressure of the solvent will approach that of the pure solvent. Thus its activity will go to unity. This means that if during a reaction in dilute solution more solvent is generated (the reaction produces water, for example) we can typically set its activity to unity. Solid and liquid activities do not depend very strongly on pressure because their molar volumes are typically small. Graphite at 100 bars has an activity of only 1.01 if we choose the pure solid at 1 bar as the standard state. Only at very high pressures do we need to worry about such changes. Activity expressed in terms of pressure is called fugacity. Example values Example values of activity coefficients of sodium chloride in aqueous solution are given in the table. In an ideal solution, these values would all be unity. The deviations tend to become larger with increasing molality and temperature, but with some exceptions.
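The Debye–Hückel equation mentioned above can be illustrated with a short sketch of its limiting-law form for the mean ionic activity coefficient. This is only valid for very dilute aqueous solutions; the constant A ≈ 0.509 (kg/mol)^1/2 for water at 25 °C is a standard textbook value, and the example molality is an assumption for illustration.

```python
# Sketch of the Debye-Hückel *limiting law* for the mean ionic activity coefficient:
#   log10(gamma_pm) = -A * |z+ * z-| * sqrt(I)
# valid only for very dilute aqueous solutions; A ≈ 0.509 (kg/mol)^0.5 at 25 °C.
import math

A = 0.509  # Debye-Hückel constant for water at 25 °C, (kg/mol)^0.5

def ionic_strength(molalities_and_charges: list[tuple[float, int]]) -> float:
    """I = 1/2 * sum(m_i * z_i^2) over all ions, in mol/kg."""
    return 0.5 * sum(m * z * z for m, z in molalities_and_charges)

def mean_activity_coefficient(z_plus: int, z_minus: int, I: float) -> float:
    return 10 ** (-A * abs(z_plus * z_minus) * math.sqrt(I))

# Example: 0.001 mol/kg NaCl -> I = 0.001 mol/kg, gamma_pm ≈ 0.96
m = 0.001
I = ionic_strength([(m, +1), (m, -1)])
print(f"I = {I:.4f} mol/kg, gamma_pm = {mean_activity_coefficient(1, -1, I):.3f}")
```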
Physical sciences
Thermodynamics
Chemistry
183340
https://en.wikipedia.org/wiki/Bicycle%20brake
Bicycle brake
A bicycle brake reduces the speed of a bicycle or prevents the wheels from moving. The two main types are: rim brakes and disc brakes. Drum brakes are less common on bicycles. Most bicycle brake systems consist of three main components: a mechanism for the rider to apply the brakes, such as brake levers or pedals; a mechanism for transmitting that signal, such as Bowden cables, hydraulic hoses, rods, or the bicycle chain; and the brake mechanism itself, a caliper or drum, to press two or more surfaces together in order to convert, via friction, kinetic energy of the bike and rider into thermal energy to be dissipated. History Karl Drais included a pivoting brake shoe that could be pressed against the rear iron tyre of his 1817 . This was continued on the earliest bicycles with pedals, such as the boneshaker, which were fitted with a spoon brake to press onto the rear wheel. The brake was operated by a lever or by a cord connecting to the handlebars. The rider could also slow down by resisting the pedals of the fixed-wheel drive. The next development of bicycles, the penny-farthings, were similarly braked with a spoon brake or by back pedalling. During its development from 1870 to 1878, there were various designs for brakes, most of them operating on the rear wheel. However, as the rear wheel became smaller and smaller, with more of the rider's weight over the front wheel, braking on the rear wheel became less effective. The front brake, introduced by John Kean in 1873, had been generally adopted by 1880 because of its greater stopping power. Some penny-farthing riders used only back pedalling and got off and walked down steep hills, but most also used a brake. Having a brake meant that riders could coast down hill by taking their feet off the pedals and placing the legs over the handlebars, although most riders preferred to dismount and walk down steep hills. Putting the legs under the handlebars with the feet off the pedals placed on foot-rests on the forks had resulted in serious accidents caused by the feet getting caught in the spokes. An alternative to the spoon brake for penny-farthings was the caliper brake patented by Browett and Harrison in 1887. This early version of caliper braking used a rubber block to contact the outside of the penny-farthing's small rear tyre. The 1870s and 1880s saw the development of the safety bicycle which roughly resembles bicycles today, with two wheels of equal size, initially with solid rubber tyres. These were typically equipped with a front spoon brake and no rear brake mechanism, but like penny-farthings they used fixed gears, allowing rear wheel braking by resisting the motion of the pedals. The relative fragility of the wooden rims used on most bicycles still precluded the use of rim brakes. In the late 1890s came the introduction of rim brakes and the freewheel. With the introduction of mass-produced pneumatic tyres by the Dunlop Tyre Company, the use of spoon brakes began to decline, as they tended to quickly wear through the thin casing of the new tyres. This problem led to demands for alternative braking systems. On November 23, 1897, Abram W. Duck of Duck's Cyclery in Oakland, California, was granted a patent for his Duck Roller Brake (U.S. Patent 594,234). The Duck brake used a rod operated by a lever on the handlebar to pull twin rubber rollers against the front tyre, braking the front wheel. In 1898, after the advent of freewheel coasting mechanisms, the first internal coaster brakes were introduced for the rear wheel. 
The coaster brake was contained in the rear wheel hub, and was engaged and controlled by backpedaling, thus eliminating the issue of tyre wear. In the United States, the coaster brake was the most commonly fitted brake throughout the first half of the 20th century, often comprising the only braking system on the bicycle. Brake types Spoon brakes The spoon brake or plunger brake was probably the first type of bicycle brake and precedes the pneumatic tyre. Spoon brakes were used on penny farthings with solid rubber tyres in the 1800s and continued to be used after the introduction of the pneumatic-tyred safety bicycle. The spoon brake consists of a pad (often leather) or metal shoe (possibly rubber faced), which is pressed onto the top of the front tyre. These were almost always rod-operated by a right-hand lever. In developing countries, a foot-operated form of the spoon brake sometimes is retrofitted to old rod brake roadsters. It consists of a spring-loaded flap attached to the back of the fork crown. This is depressed against the front tyre by the rider's foot. Perhaps more so than any other form of bicycle brake, the spoon brake is sensitive to road conditions and increases tyre wear dramatically. Though made obsolete by the introduction of the Duck brake, coaster brake, and rod brake, spoon brakes continued to be used in the West supplementally on adult bicycles until the 1930s, and on children's bicycles until the 1950s. In the developing world, they were manufactured until much more recently. Duck brake Invented in 1897, the Duck brake or Duck roller brake used a rod operated by a lever on the handlebar to pull twin friction rollers (usually made of wood or rubber) against the front tyre. Mounted on axles secured by friction washers and set at an angle to conform to the shape of the tyre, the rollers were forced against their friction washers upon contacting the tyre, thus braking the front wheel. A tension spring held the rollers away from the tyre except when braking. Braking power was enhanced by an extra-long brake lever mounted in parallel with and behind the handlebar, which provided additional leverage when braking (two hands could be used to pull the lever if necessary). Used in combination with a rear coaster brake, a cyclist of the day could stop much more quickly and with better modulation of braking effort than was possible using only a spoon brake or rear coaster brake. Known colloquially as the duck brake, the design was used by many notable riders of the day, and was widely exported to England, Australia, and other countries. In 1902, Louis H. Bill was granted a patent for an improved version of the Duck Roller Brake (Patent 708,114) for use on motorized bicycles (motorcycles). Rim brakes Rim brakes are so called because braking force is applied by friction pads to the rim of the rotating wheel, thus slowing it and the bicycle. Brake pads can be made of leather, rubber or cork and are often mounted in metal "shoes". Rim brakes are typically actuated by a lever mounted on the handlebar. Advantages and disadvantages Rim brakes are inexpensive, light, mechanically simple, easy to maintain, and powerful. However, they perform relatively poorly when the rims are wet, and will brake unevenly if the rims are even slightly warped. Because rims can carry debris from the ground to the brake pads, rim brakes are more prone to clogging with mud or snow than disc brakes (where braking surfaces are higher from the ground), particularly when riding on unpaved surfaces. 
The low price and ease of maintenance of rim brakes make them popular on low- to mid-price commuter bikes, where the disadvantages are alleviated by the unchallenging conditions. The light weight of rim brakes also makes them desirable on road racing bicycles. Rim brakes require regular maintenance. Brake pads wear down and have to be replaced, and their position may need to be adjusted as the material wears away. Because the motion of most brakes is not perfectly horizontal, the pads may lose their centering as they wear, causing them to wear unevenly. Over long use, the rims themselves become worn; they should be checked periodically, as a rim can fail catastrophically if its braking surface becomes too worn. Wear is accelerated by wet and muddy conditions. Rim brakes require that the rims be straight (not out-of-round or warped). If a rim has a pronounced wobble, the braking force may be intermittent or uneven, and the pads may rub the rims even when the brake is not applied. During braking, the friction surfaces (brake pads and rims) experience thermal heating. In normal use this is not a problem, as the brakes are applied with limited force and for a short time, so the heat quickly dissipates to the surrounding air. However, on a heavily laden bike on a long descent, heat energy may be added more quickly than it can dissipate, causing heat build-up, which may damage components and cause brake failure. A ceramic coating for the rims is available which may reduce wear and can also improve both wet and dry braking. Because the coating is a thermal insulator, it may also slightly reduce heat transfer to the inside of the rims. Brake pads Brake pads are available in numerous shapes and materials. Many consist of a replaceable rubber pad held on a mounting, or brake shoe, with a post or bolt on the back to attach to the brake. Some are made as one piece, with the attachment directly molded into the pad for lower production costs; brake pads of the cartridge type are held in place by a metal split pin or threaded grub screw and can be replaced without moving the brake shoe from its alignment to the rim. The rubber can be softer for more braking force with less lever effort, or harder for longer life. Many pad designs have a rectangular shape; others are longer and curved to match the radius of the rim. Larger pads do not necessarily provide more braking force, but will wear more slowly (in relation to thickness), so can usually be thinner. In general, a brake can be fitted with a variety of pads, as long as the mounting is compatible. Carbon fiber rims may be more sensitive to damage by incorrectly matched brake pads, and generally must use non-abrasive cork pads. Ceramic-coated rims should be used with special pads because of heat build-up at the pad-rim interface; standard pads can leave a "glaze" on the ceramic braking surface, reducing its inherent roughness and leading to a severe drop in wet-weather braking performance. Ceramic pads usually contain chromium compounds to resist heat. For wet-weather use, brake pads containing iron(III) oxide are sometimes used, as these have higher friction on a wet aluminum rim than the usual rubber. These salmon-colored pads were first made by Scott-Mathauser and are now produced by Kool-Stop. To minimise excessive rim wear, a brake pad should be hard enough that it does not embed road grit or chips of rim metal in its face, since these act as grinding and gouging agents and markedly reduce rim life. 
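The scale of this heat build-up on a long descent is easy to underestimate. The following back-of-the-envelope Python sketch, in which every figure (rider mass, grade, speed, rim mass) is an illustrative assumption rather than a measured value, estimates the energy a loaded touring bike must dissipate and the worst-case rim temperature rise if none of it escaped to the air:

```python
# Rough worst-case estimate of rim heating on a long descent.
# All numbers are illustrative assumptions, not measured values.

g = 9.81                 # gravitational acceleration, m/s^2
mass = 110.0             # bike + rider + luggage, kg (assumed)
grade = 0.08             # 8% descent (assumed)
speed = 8.0              # held constant by braking, m/s (assumed)
descent_length = 3000.0  # length of descent, m (assumed)

rim_mass = 0.5           # aluminium rim, kg each (assumed)
c_aluminium = 900.0      # specific heat of aluminium, J/(kg*K)

# Braking must dissipate the potential energy released on the descent
# (ignoring rolling resistance and aerodynamic drag, which would in
# reality shed some of this energy for free).
height_drop = descent_length * grade
energy = mass * g * height_drop      # joules into the brakes
power = mass * g * grade * speed     # watts while descending

# Worst case: no convective cooling, heat split between the two rims.
temp_rise = energy / (2 * rim_mass * c_aluminium)

print(f"energy to dissipate: {energy / 1000:.0f} kJ")
print(f"braking power:       {power:.0f} W")
print(f"adiabatic rim temperature rise: {temp_rise:.0f} K")
```

In practice, airflow carries away a large share of this energy; the sketch merely shows why the thermal margin can disappear on a heavily laden bike, and why dedicated drag brakes (discussed later) exist.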
Types of rim brakes The following are among the many sub-types of rim brakes: Rod-actuated brakes The rod-actuated brake, or simply rod brake (roller lever brake in Raleigh terminology), uses a series of rods and pivots, rather than Bowden cables, to transmit the force applied to a hand lever, pulling friction pads upwards against the inner surface of the wheel rim (the surface facing the hub). They were often called "stirrup brakes" due to their shape. Rod brakes are used with a rim profile known as the Westwood rim, which has a slightly concave braking surface and lacks the flat outer surface required by brakes that apply the pads on opposite sides of the rim. The rear linkage mechanism is complicated by the need to allow rotation where the fork and handlebars attach to the frame. A common setup was to combine a front rod brake with a rear coaster brake. Although heavy and complex, the linkages are reliable and durable and can be repaired or adjusted with simple hand tools. The design is still in use, typically on African and Asian roadsters such as the Sohrab and Flying Pigeon. Caliper brakes The caliper brake is a class of cable-actuated brake in which the brake mounts to a single point above the wheel, theoretically allowing the arms to auto-centre on the rim. Arms extend around the tyre and end in brake shoes that press against the rim. While some designs incorporate dual pivot points — the arms pivot on a sub-frame — the entire assembly still mounts to a single point. Caliper brakes tend to become less effective as tyres get wider, and so deeper, reducing the brakes' mechanical advantage. Thus caliper brakes are rarely found on modern mountain bikes, but they are almost ubiquitous on road bikes, particularly the dual-pivot side-pull caliper brake. Side-pull caliper brakes Single-pivot side-pull caliper brakes consist of two curved arms that cross at a pivot above the wheel and hold the brake pads on opposite sides of the rim. These arms have extensions on one side, one attached to the cable, the other to the cable housing. When the brake lever is squeezed, the arms move together and the brake pads squeeze the rim. These brakes are simple and effective for relatively narrow tyres, but they flex significantly, and so perform poorly, if the arms are made long enough to fit wide tyres. If not adjusted properly, low-quality varieties tend to rotate to one side during actuation and to stay there, making it difficult to space the brake shoes evenly from the rim. These brakes are now used on inexpensive bikes; before the introduction of dual-pivot caliper brakes they were used on all types of road bikes. Dual-pivot side-pull caliper brakes are used on most modern racing bicycles. One arm pivots at the centre, like a side-pull, and the other pivots at the side, like a centre-pull. The cable housing attaches like that of a side-pull brake. These brakes offer a higher mechanical advantage and result in better braking. Dual-pivot brakes are slightly heavier than conventional side-pull calipers and cannot accurately track an out-of-true rim, or a wheel that flexes from side to side in the frame during hard climbing. It is common to see professional racers climbing mountains with the quick-release undone on the rear brake, to eliminate drag from this source. Direct-mount rim brakes employ two mounting points, increasing stiffness and braking power; the design was developed by Shimano and released as an open standard. 
Individual mounting points for each arm ease the centering of side-pull brakes, and accommodate tyre widths of 30 mm and more. Centre-pull caliper brakes Centre-pull caliper brakes have symmetrical arms and therefore centre more effectively. The cable housing attaches to a fixed cable stop on the frame, and the inner cable bolts to a sliding piece (called a "braking delta", "braking triangle", or "yoke") or a small pulley, over which runs a straddle cable connecting the two brake arms. Tension on the cable is evenly distributed to the two arms, preventing the brake from taking a "set" to one side or the other. These brakes were reasonably priced, and in the past filled the price niche between the cheaper and the more expensive models of side-pull brakes. They are more effective than side-pull brakes in long-reach applications, as the distance between the pivot and the brake pad or cable attachment is much shorter, reducing flex. It is important that the fixed bridge holding the pivots be very stiff. U-brakes U-brakes (also known by the trademarked term 990-style) are essentially the same design as the centre-pull caliper brake. The difference is that the two arm pivots attach directly to the frame or fork, while those of the centre-pull caliper brake attach to an integral bridge frame that mounts to the frame or fork by a single bolt. Like roller cam brakes, this is a caliper design with pivots located above the rim; thus U-brakes are often interchangeable with, and have the same maintenance issues as, roller cam brakes. U-brakes were used on mountain bikes through the mid-to-late 1980s, particularly under the chain stays, a rear brake mounting location that was then popular. This location usually benefits from higher frame stiffness, an important consideration with a powerful brake, since flex in the stays will increase lever travel and reduce effective braking force. Unfortunately, the location is also very prone to clogging by mud, which meant that U-brakes quickly fell out of favour on cross-country bikes. U-brakes are the current standard on Freestyle BMX frames and forks. The U-brake's main advantage over cantilever and linear-pull brakes in this application is that sideways protrusion of the brake and cable system is minimal, and the exposed parts are smooth. This is especially valuable on freestyle BMX bikes, where any protruding parts are susceptible to damage and may interfere with the rider's body or clothing. Cantilever brakes The cantilever brake is a class of brake in which each arm is attached to a separate pivot point on one side of the seat stay or fork; thus all cantilever brakes are dual-pivot. Both first- and second-class lever designs exist; second-class is by far the more common. In the second-class lever design, the arm pivots below the rim: the brake shoe is mounted above the pivot and is pressed against the rim as the two arms are drawn together. In the first-class lever design, the arm pivots above the rim: the brake shoe is mounted below the pivot and is pressed against the rim as the two arms are forced apart. Due to the wider possible distance between the mounts and pads, cantilever brakes are often preferred for bicycles that use wide tyres, such as mountain bikes. Because the arms move only in their designed arcs, the brake shoe must be adjustable in several planes; cantilever brake shoes are therefore notoriously difficult to adjust. As the brake shoes of a second-class cantilever brake wear, they ride lower on the rim. 
Eventually, a shoe may slip underneath the rim, so that the brake no longer functions. Several brake types are based on the cantilever design: traditional cantilever brakes and direct-pull brakes – both second-class lever designs – and roller cam brakes and U-brakes – both first-class lever designs. Traditional cantilever brakes Traditional cantilever brakes pre-date the direct-pull brake. The traditional cantilever is a centre-pull design with an outwardly angled arm protruding on each side, a cable stop on the frame or fork to terminate the cable housing, and a straddle cable between the arms similar to that of centre-pull caliper brakes. The cable from the brake lever pulls upwards on the straddle cable, causing the brake arms to rotate up and inward, squeezing the rim between the brake pads. Originally, cantilever brakes had nearly horizontal arms and were designed for maximum clearance on touring or cyclo-cross bicycles. When the mountain bike became popular, cantilever brakes were adopted for these too, but the smaller MTB frames meant that riders often fouled the rear brake arms with their heels. "Low-profile" cantilevers were designed to overcome this, with arms closer to 45 degrees from horizontal. Low-profile brakes require more careful attention to cable geometry than traditional cantilevers but are now the most common type. Traditional cantilever brakes are difficult to adapt to bicycle suspension and protrude somewhat from the frame. Accordingly, they are usually found only on bicycles without suspension. V-brakes Linear-pull brakes or direct-pull brakes, commonly referred to by Shimano's trademark "V-brake", are a side-pull version of cantilever brakes and mount on the same frame bosses. However, the arms are longer, with the cable housing attached to one arm and the cable to the other. As the cable pulls against the housing, the arms are drawn together. Because the housing enters from vertically above one arm yet force must be transmitted laterally between the arms, the flexible housing is extended by a rigid tube with a 90° bend known as the "noodle" (a noodle with a 135° bend is used where the front brake is operated by the right hand, as this gives a smoother curve in the cable housing). The noodle seats in a stirrup attached to the arm, and a flexible bellows often covers the exposed cable. Since there is no intervening mechanism between the cable and the arms, the design is called "direct-pull"; and since the arms move the same distance that the cable moves with regard to its housing, the design is also called "linear-pull". The term "V-brake" is trademarked by Shimano and represents the most popular implementation of this design. Some high-end V-brakes use a four-pivot parallel-motion linkage so that the brake pads contact at virtually the same position on the wheel rim regardless of wear. V-brakes function well with the suspension systems found on many mountain bikes because they do not require a separate cable stop on the frame or fork. Because of the higher mechanical advantage of V-brakes, they require brake levers with longer cable travel than levers intended for older types of brakes. Mechanical (i.e. cable-actuated) disc brakes use the same amount of cable travel as V-brakes, except for those described as "road" specific. 
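Matching lever to brake is essentially conservation of work: the product of the lever's cable pull and the brake arm's leverage fixes both how far the pads move and how hard they clamp. The following Python sketch uses assumed round-number ratios, not any manufacturer's specifications, to show why a mismatched pairing gives either too little pad travel or too little pad force:

```python
# Toy model of brake lever / brake compatibility.
# All ratios are illustrative assumptions, not product specifications.

def pad_travel_and_force(hand_travel_mm, hand_force_n, lever_ratio, brake_ratio):
    """lever_ratio: mm of cable pulled per mm of hand movement at the lever.
    brake_ratio: mm of pad movement per mm of cable pulled at the brake.
    Ignoring friction, work in equals work out, so force scales inversely
    with the product of the two ratios."""
    cable_pull = hand_travel_mm * lever_ratio
    pad_travel = cable_pull * brake_ratio
    pad_force = hand_force_n / (lever_ratio * brake_ratio)
    return pad_travel, pad_force

hand_travel, hand_force = 20.0, 100.0  # assumed hand input

# Assumed ratios: a "long pull" lever pulls roughly twice the cable of a
# "standard pull" lever; a V-brake moves its pads less per mm of cable
# (higher mechanical advantage) than a caliper brake does.
for lever_name, lever_r in [("standard-pull lever", 0.4), ("long-pull lever", 0.8)]:
    for brake_name, brake_r in [("caliper brake", 0.8), ("V-brake", 0.4)]:
        travel, force = pad_travel_and_force(hand_travel, hand_force, lever_r, brake_r)
        print(f"{lever_name} + {brake_name}: "
              f"pad travel {travel:.1f} mm, pad force {force:.0f} N")
```

With these toy numbers, the two matched pairings (standard lever with caliper brake, long-pull lever with V-brake) give identical pad travel and force; the mismatches give either pads that must sit impractically close to the rim or a brake that needs a very strong pull, mirroring the failure modes described below.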
As a general rule, mechanical disc brakes for so-called "flat bar" bicycles (chiefly mountain and hybrid bicycles) are compatible with V-brake levers, whereas mechanical disc brakes intended for "drop-bar" bicycles are compatible with the cable pull of older brake designs (cantilever, caliper, and U-brake). Poorly designed V-brakes can suffer from a sudden failure when the noodle end pulls through the metal stirrup, leaving the wheel with no braking power. Although the noodle can be regarded as a service item and changed regularly, the hole in the stirrup may enlarge through wear. The stirrup cannot normally be replaced, so good-quality V-brakes use a durable metal for the stirrup. Mini V-brakes (or mini Vs) are V-brakes with shorter arms, typically between 8 and 9 centimeters. This reduces the required cable pull, making them compatible with brake levers intended for cantilever brakes. Mini V-brakes retain advantages specific to V-brakes, such as not requiring extra cable stops. On the downside, their shorter arms provide very little tyre and wheel clearance and generally make for a less forgiving setup: they can only accommodate smaller tyre sizes compared to cantilever brakes, may pose problems for mounting fenders, can be clogged more easily by mud, and can make it harder to change wheels. V-brakes always use thin and relatively long brake pads. The thin pads ease wheel removal, which is achieved by pushing the arms together and unhooking the noodle from the stirrup. The additional length gives good pad life by compensating for the thinner material depth. Roller cam brakes Roller cam brakes are centre-pull cantilever brakes actuated by the cable pulling a single two-sided sliding cam. (First- and second-class lever designs exist; first-class is most common and is described here.) Each arm has a cam follower. As the cam presses against the follower it forces the arms apart. As the top of each arm moves outward, the brake shoe below the pivot is forced inward against the rim. There is much in favor of the roller cam brake design. Since the cam controls the rate of closure, the clamping force can be made non-linear with the pull. And since the design can provide positive mechanical advantage, maximum clamping force can be higher than that of other types of brakes. They are known for being strong and controllable. On the downside, they require some skill to set up and can complicate wheel changes. They also require maintenance: like U-brakes, as the pad wears it strikes the rim higher; unless re-adjusted, it can eventually contact the tyre's sidewall. The roller cam design was first developed by Charlie Cunningham of WTB around 1982 and licensed to Suntour. Roller cam brakes were used on early mountain bikes in the 1980s and into the 1990s, mounted to the fork blades and seat stays in the standard locations, as well as below the chain stays for improved stiffness, where they do not protrude far enough to interfere with the crank. It is not unusual for a bicycle to have a single roller cam brake (or U-brake) combined with another type. They are still used on some BMX and recumbent bicycles. There are two rare variants that use the roller cam principle. For locations where centre-pull is inappropriate, the side-pull toggle cam brake was developed. Also a first-class cantilever, it uses a single-sided sliding cam (the toggle) against one arm that is attached by a link to the other arm. As the cam presses against the follower, the force is also transmitted to the other arm via the link. 
And specifically for suspension forks, where the housing must terminate at the brake frame, the side-pull "sabre cam brake" was developed. In the sabre cam design, the cable end is fixed and the housing moves the single-sided cam. Delta brakes The delta brake is a road bicycle brake named for its triangular shape. The cable enters at the centre and pulls one corner of a parallelogram linkage housed inside the brake; the linkage pushes outward at its other two corners onto the brake arms above the pivots, so that the arms below the pivots press the pads in against the rim. A feature of the design is that the mechanical advantage varies as a tangent function across its range, where that of most other designs remains fixed. Many consider the brake attractive, and it has a lower wind profile than some other common brakes. However, Bicycle Quarterly criticized the delta brake for being heavy, giving mediocre stopping power, and suffering from a disadvantageous variable mechanical advantage. In particular, with a small parallelogram, pad wear causes the mechanical advantage to rise dramatically; at such high leverage, the stroke of the lever is no longer enough to fully apply the brake, so the rider can have brakes that feel normal in light braking but cannot be applied harder for hard braking. The basic design dates from at least the 1930s. Delta brakes were made most prominently by Campagnolo in 1985, but brakes based on the same mechanism were also manufactured by Modolo (Kronos) and others. They are no longer made and are now uncommon. Hydraulic rim brakes Hydraulic rim brakes are one of the least common types of brakes. They are mounted either on the same pivot points used for cantilever and linear-pull brakes or on the four-bolt brake mounts found on many trials frames. They were available on some high-end mountain bikes in the early 1990s, but declined in popularity with the rise of disc brakes. The moderate performance advantage (greater power and control) they offer over cable-actuated rim brakes is offset by their greater weight and complexity. Some e-bikes continue to use them, since they are powerful and relatively low-maintenance, and weight is less of an issue when electric assistance is available. Disc brakes A disc brake consists of a metal disc, or "rotor", attached to the wheel hub, that rotates with the wheel. Calipers are attached to the frame or fork, along with pads that squeeze the rotors for braking. Disc brakes may be actuated mechanically by cable, or hydraulically. Disc brakes are most common on mountain bikes (including nearly all downhill bikes), and are also seen on some hybrid bicycles and touring bicycles. Towards the end of the 2010s, disc brakes became increasingly common on racing bicycles as well. A disc brake is sometimes employed as a drag brake for controlled speed reduction on steep descents. Many hydraulic disc brakes have a self-adjusting mechanism, so that as the brake pads wear, the pistons keep the pad-to-disc distance consistent and maintain the same brake lever throw. Some hydraulic brakes, especially older ones, and most mechanical discs have manual controls to adjust the pad-to-rotor gap; several adjustments are often required during the life of the pads. Advantages Disc brakes tend to perform equally well in all conditions, including water, mud, and snow, due to several factors: The braking surface is farther from the ground and from possible contaminants such as mud, which can coat or freeze on the rim and pads. 
With rim brakes, the first place that mud builds up on a mountain bike ridden in thick mud is usually the brakes. A mountain bicycle with disc brakes is less susceptible to mud buildup, provided the rear frame and front fork yoke have sufficient clearance from the wheels. Disc brakes may be made of materials that dissipate heat better than the wheel rim, although undersized sport discs are too small to take full advantage of this. There are holes in the rotor, providing a path for water and debris to get out from under the pads. Wheel rims tend to be made of lightweight metal; brake discs and pads are harder and can accept higher maximum loads. It is possible to ride a bicycle with a buckled wheel if it has disc brakes, where it would not be possible with a rim brake, because the buckled wheel would bind on the brake pads. Other reasons include: While all types of brakes will eventually wear out the braking surface, a brake disc is easier and cheaper to replace than a wheel rim or drum. The use of very wide tyres favors disc brakes, as rim brakes require ever-longer arms to clear the wider tyre; longer arms tend to flex more, degrading braking, whereas disc brakes are unaffected by tyre width. Unlike some rarer rim brake designs, disc brakes are compatible with front and rear suspension. Different wheel sizes can be used with the same frame: for example, a frame built for 29″ tyres can often also fit 27.5″+ (650+) tyres, despite the two wheel sizes having different rim diameters. This is possible with disc brakes, so long as the rotor sizes are consistent and the frame has enough clearance; with rim brakes it would be impossible, as the different rim diameters would not allow the same rim brakes to work with a different-sized wheel. This flexibility gives the rider more options, such as pairing a smaller-diameter rim (such as 27.5″+) with a wider, higher-volume tyre whose outer diameter matches that of a wheel built on a larger rim (such as 29″); keeping the outer diameter consistent is important because it preserves the geometry of the frame between the two wheel sizes. Disadvantages Hydraulic vs. "mechanical" There are two main types of disc brake: "mechanical" (cable-actuated) and hydraulic. The advantages and disadvantages of each are much debated by the users of the two systems. Cable-actuated disc brakes are argued to offer lower cost, lower maintenance, and lighter system weight, while hydraulic disc brakes are said to offer more braking power and better control. Cable-actuated disc brakes were traditionally the only type of disc brake that could be used with the brake levers found on drop handlebars, but this is no longer the case. Single vs. dual actuation Many disc brakes have their pads actuated from both sides of the caliper, while some have only one pad that moves. Dual actuation can move both pads relative to the caliper, or can move one pad relative to the caliper and then move the caliper, and with it the other pad, relative to the rotor, called a "floating caliper" design. Single-actuation brakes either use a multi-part rotor that floats axially on the hub, or bend the rotor sideways as needed. Bending the rotor is theoretically inferior, but in practice gives good service, even under high-force braking with a hot disc, and may give a more progressive feel. Multiple pistons For disc brakes with a hydraulic system, high-performance calipers usually use two or three pistons per side; lower-cost and lower-performance calipers often have only one per side. 
Using more pistons allows a larger total piston area and thus increased leverage with a given master cylinder. Also, pistons may be of several sizes, so that pad force can be controlled across the face of the pad, especially when the pad is long and narrow. A long, narrow pad may be desired to increase pad area and thus reduce the frequency of pad changes; in contrast, a single large piston may be heavier. Caliper mounting standards There are many standards for mounting disc brake calipers. However, most manufacturers today use either the IS or post-mount (PM) standards. These differ by disc size and axle type. In 2014 Shimano introduced a "Flat Mount" standard for high-end road bikes and uses it exclusively for its top-tier brake calipers. Advantages and disadvantages of various types of mounts A disadvantage of post mounts is that the bolt is threaded directly into the fork lowers. If the threads are stripped or if the bolt is stuck, the threads will need to be repaired, or the seized bolt drilled out. Frame manufacturers have standardized on the IS mount for the rear disc brake mount. In recent years post mount has gained ground and is becoming more common, mostly due to the decreased manufacturing and part cost of the brake calipers when using post mount. A limitation of the mount is that the location of the rotor is more constrained: it is possible to encounter incompatible hub/fork combinations, where the rotor is out of range. Disc mounting standards There are many options for rotor mounting. IS is a six-bolt mount and is the industry standard. Center Lock is patented by Shimano and uses a splined interface along with a lockring to secure the disc. The advantages of Center Lock are that the splined interface is theoretically stiffer and that removing the disc is quicker, because it only requires one lockring to be removed. One disadvantage is that the design is patented, requiring a licensing fee from Shimano; another is that a Shimano cassette lockring tool (or an external BB tool in the case of a through-axle hub) is needed to remove the rotor, and such a tool is more expensive and less common than a Torx key. Advantages of IS six-bolt are that there are more choices when it comes to hubs and rotors. Adaptors enable the use of six-bolt discs on Center Lock hubs. Examples of mounting standards include: International Standard (IS), a 6-bolt pattern on a 44 mm bolt circle diameter (BCD); Center Lock (Shimano proprietary); Hope Technology's 3-bolt pattern (proprietary); and Rohloff's 4-bolt pattern (proprietary). Disc sizes Rotors come in a range of standard diameters, and other sizes are available as manufacturers make discs specific to their calipers; the dimensions often vary by a few millimeters. Larger rotors provide greater braking torque for a given pad pressure, by virtue of a longer moment arm for the caliper to act on. Smaller rotors provide less braking torque but also less weight and better protection from knocks. Larger rotors dissipate heat more quickly and have a larger amount of mass to absorb heat, reducing brake fade or failure. Downhill bikes usually have larger brakes to handle greater braking loads. Cross-country bicycles usually use smaller rotors, which handle smaller loads but offer considerable weight savings. It is also common to use a larger-diameter rotor on the front wheel and a smaller rotor on the rear wheel, since the front wheel does the most braking (up to 90% of the total). 
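Both of these effects, the hydraulic leverage set by the piston areas and the moment arm set by the rotor diameter, can be put into rough numbers. The following Python sketch uses assumed round-number dimensions and an assumed pad friction coefficient, not the specifications of any particular product:

```python
# Illustrative disc-brake leverage and rotor-size comparison.
# All dimensions are assumed round numbers, not any specific product.

import math

def clamp_force(lever_force_n, master_d_mm, piston_d_mm, pistons_per_side):
    """Pascal's principle: pressure generated at the master cylinder acts
    on the total slave piston area on each side of the caliper."""
    master_area = math.pi * (master_d_mm / 2) ** 2
    slave_area = pistons_per_side * math.pi * (piston_d_mm / 2) ** 2
    pressure = lever_force_n / master_area   # N/mm^2
    return pressure * slave_area             # clamp force per side, N

def braking_torque(clamp_n, mu, rotor_d_mm):
    """Friction acts at both pad faces, at roughly the rotor's effective
    radius (assumed here to be 10 mm inside the outer edge)."""
    effective_radius_m = (rotor_d_mm / 2 - 10) / 1000
    return 2 * mu * clamp_n * effective_radius_m  # N*m

force = clamp_force(lever_force_n=200, master_d_mm=10,
                    piston_d_mm=16, pistons_per_side=2)
for rotor_mm in (160, 203):
    torque = braking_torque(force, mu=0.4, rotor_d_mm=rotor_mm)
    print(f"{rotor_mm} mm rotor: clamp {force:.0f} N, torque {torque:.0f} N*m")
```

With these assumptions the clamp force on each side is roughly five times the input force at the master cylinder, and going from a 160 mm to a 203 mm rotor raises the braking torque by about 30%, purely from the longer moment arm.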
Drum brakes Bicycle drum brakes operate like those of a car, although the bicycle variety uses cable rather than hydraulic actuation. Two pads are pressed outward against the braking surface on the inside of the hub shell. Drum brakes have been used on front hubs and on hubs with both internal and external freewheels. Both cable- and rod-operated drum brake systems have been widely produced. A Roller Brake is a modular cable-operated drum brake manufactured by Shimano for use on specially splined front and rear hubs. Unlike a traditional drum brake, the Roller Brake can be easily removed from the hub. Some models contain a torque-limiting device called a power modulator, designed to make it difficult to skid the wheel; in practice this can reduce its effectiveness on bicycles with adult-sized wheels. Drum brakes are most common on utility bicycles in some countries, especially the Netherlands, and are also often found on cargo bikes and velomobiles. Older tandem bicycles often employ a rear drum brake as a drag brake. Drum brakes provide consistent braking in wet or dirty conditions, since the mechanism is fully enclosed. They are usually heavier, more complicated, and often weaker than rim brakes, but they require less maintenance. Drum brakes do not adapt well to quick-release axle fastening, and removing a drum brake wheel requires the operator to disconnect the brake cable as well as the axle. They also require a torque arm which must be anchored to the frame or fork of the bicycle, and not all bicycles are constructed to accommodate such fastenings or tolerate their applied forces. Coaster brakes Invented in 1898 by Willard M. Farrow, the "coaster brake", also known as a "back pedal brake" or "foot brake" ("torpedo" or "contra" in some countries), is a type of drum brake integrated into the back hub with an internal freewheel. Freewheeling functions as with other systems, but when back pedaled, the brake engages after a fraction of a revolution. The coaster brake can be found in both single-speed and internally geared hubs. When such a hub is pedaled forwards, the sprocket drives a screw which forces a clutch to move along the axle, driving the hub shell or gear assembly. When pedaling is reversed, the screw drives the clutch in the opposite direction, forcing it either between two brake shoes and pressing them against the brake mantle (which is a steel liner within the hub shell), or into a split collar and expanding it against the mantle. The braking surface is often steel, and the braking element brass or phosphor-bronze, as in the Birmingham-made Perry Coaster Hub. Crude coaster brakes also exist, usually on children's bicycles, where a serrated steel brake cone grips the inside of the hub shell directly, with no separate brake pads or mantle. These offer a less progressive action and are more likely to lock the rear wheel unintentionally. Unlike most drum brakes (but like a Shimano Roller Brake), a coaster brake is designed to run with all its internal parts coated in grease for quiet operation and smooth engagement. Most grey molybdenum disulphide greases work well in a coaster brake, with its metal-to-metal friction surfaces. Coaster-brake bicycles are generally equipped with a single cog and chain wheel and often use a wide chain. However, there have been several models of coaster brake hubs with derailleurs, such as the Sachs 2×3. 
These use special extra-short derailleurs which can stand up to the forces of being straightened out frequently and do not require an excessive amount of reverse pedal rotation before the brake engages. Coaster brakes have also been incorporated into hub gear designs – for example the AWC and SRC3 from Sturmey-Archer, and the Shimano Nexus 3-speed. They can have up to eight gears, like the Nexus Inter-8. Coaster brakes have the advantage of being protected from the elements and thus perform well in rain or snow. Though coaster brakes generally go years without needing maintenance, they are more complicated than rim brakes to repair if it becomes necessary, especially the more sophisticated type with expanding brake shoes. Coaster brakes also do not have sufficient heat dissipation for use on long descents, a characteristic made legendary through events such as the "Repack Downhill" race, where riders would almost certainly need to repack their coaster brakes with fresh grease after the old grease melted or smoked from the heat of the lengthy downhill run. A coaster brake can only be applied when the cranks are reasonably level, limiting how quickly it can be applied. As coaster brakes are only made for rear wheels, they have the disadvantage common to all rear brakes of skidding the wheel easily. This disadvantage may, however, be alleviated if the bicycle also has a hand-lever-operated front brake and the cyclist uses it. Another disadvantage is that the coaster brake is completely dependent on the chain being fully intact and engaged. If the chain breaks or disengages from the chainwheel and/or rear sprocket, the coaster brake provides no braking power whatsoever. Like all hub brakes except disc brakes, a coaster brake requires a reaction arm to be connected to the frame. This may require unbolting when the wheel is removed or moved in its fork ends to adjust chain tension. Drag brakes A drag brake is a type of brake defined by its use rather than by its mechanical design. A drag brake is intended to provide a constant decelerating force to slow a bicycle on a long downhill rather than to stop it; a separate braking system is used to stop the bicycle. A drag brake is often employed on a heavy bicycle, such as a tandem in mountainous areas, where extended use of rim brakes could cause a rim to become hot enough to blow out. The typical drag brake has long been a drum brake. The largest manufacturer of this type of brake is Arai, whose brakes are screwed onto hubs with conventional freewheel threading on the left side of the rear hub and operated via Bowden cables. As of 2011, the Arai drum brake had been out of production for several years, with remaining stocks nearing depletion and used units commanding premium prices on internet auction sites. More recently, large-rotor disc brakes have been used as drag brakes. (Some tandem riders with Avid BB-7 mechanical disc brakes and 203 mm rotors report fewer heat problems under heavy braking than when using the previous standard of comparison, an Arai drum used as a drag brake.) DT-Swiss makes an adapter to mate disc rotors with hubs threaded for the Arai drum brake, but this still leaves the problem of fitting the caliper. Band brake A band brake consists of a band, strap, or cable that wraps around a drum that rotates with a wheel and is pulled tight to generate braking friction. Band brakes appeared as early as 1884 on tricycles. Star Cycles introduced a band brake in 1902 on its bicycles with freewheels. 
Band brakes were still being manufactured for bicycles in 2010. A rim band brake, as implemented on the Yankee bicycle by Royce Husted in the 1990s, consists of a stainless-steel cable, wrapped in a Kevlar sheath, that rides in a U-shaped channel on the side of the wheel rim. Squeezing the brake lever tightens the cable against the channel to produce braking friction. A return spring slackens the cable when the brake lever is released; no adjustment is required, and the brake becomes more forceful when wet. Husted said his inspiration was the band brake used on industrial machinery. The Yankee bicycle only included a rear brake, but that met U.S. Consumer Product Safety Commission standards. Actuation mechanisms The actuation mechanism is the part of the brake system that transmits force from the rider to the part of the system that does the actual braking. Brake system actuation mechanisms are either mechanical or hydraulic. Mechanical The primary modern mechanical actuation mechanism uses brake levers coupled to Bowden cables to move brake arms, thus forcing pads against a braking surface. Cable mechanisms are usually less expensive, but may require some maintenance related to exposed areas of the cable. Other mechanical actuation mechanisms exist: see Coaster brakes for back-pedal actuation mechanisms, and rod-actuated brakes for a mechanism incorporating metal rods. The first spoon brakes were actuated by a cable that was pulled by twisting the end of a handlebar. Hydraulic Hydraulic brakes also use brake levers, which push fluid through a hose to move pistons in a caliper, thus forcing pads against a braking surface. While hydraulic rim brakes exist, today the hydraulic actuation mechanism is identified mostly with disc brakes. Two types of brake fluid are used today: mineral oil and DOT fluid. Mineral oil is generally inert, while DOT is corrosive to frame paint but has a higher boiling point. Using the wrong fluid can cause seals to swell or become corroded. A hydraulic mechanism is closed and therefore less likely to have problems related to contamination at exposed areas. Hydraulic brakes rarely fail, but failure tends to be complete. Hydraulic systems require specialized equipment to repair. Hydraulic brake fluid Hydraulic disc brakes make use of two common forms of fluid: automotive-grade DOT 4 or DOT 5.1, which are hygroscopic and have dry boiling points of 230 °C or higher; and mineral oil, which is not hygroscopic and has varying boiling points depending on the type. O-rings and seals inside the brake are specifically designed to work with one or the other fluid. Using the incorrect fluid type will cause the seals to fail, resulting in a "squishy" feeling at the lever and caliper pistons that cannot retract, leaving the pads scraping the disc. The brake fluid reservoir is usually marked to indicate the type of brake fluid to be used. Hybrid Some older designs, like the AMP and Mountain Cycles brakes, use a cable from the lever to the caliper, where a master cylinder is integrated. Some Santana tandem bicycles used a cable from the lever to a master cylinder mounted near the head tube, with a hydraulic line to the rear wheel caliper. Such "hybrid" designs provide the leverage of a hydraulic system while allowing the use of cable brake levers, but may be heavier and can suffer from grit intrusion in the standard cable. An older Sachs drum brake kit ("Hydro Pull") allows a regular Sachs bicycle drum brake to be converted to hydraulic actuation. 
A piston is added outside the drum in place of the Bowden cable clamp. This solution is often seen on modified Long John cargo bikes, giving the front-wheel brake a low-friction lever pull. Since Sachs ceased production of this kit, a similar conversion is sometimes achieved by welding a Magura piston onto the drum's actuating lever; welding is necessary because the Magura piston acts in the opposite direction to that of the Sachs kit. Brake levers Brake levers are usually mounted on the handlebars within easy reach of the rider's hands. They may be distinct from or integrated into the shifting mechanism. The brake lever transmits the force applied by the rider through either a mechanical or hydraulic mechanism. Bicycles with drop handlebars may have more than one brake lever for each brake to facilitate braking from multiple hand positions. Levers that allow the rider to work the brakes from the tops of the bars, introduced in the '70s, were called extension levers, safety levers or, due to their reputation for being unable to actuate the full range of travel of the brake, suicide levers. Modern top-mounted brake levers are considered safer, and are called interrupt brake levers due to their mechanism of action, which "interrupts" the cable run from the primary lever and actuates the brake by pushing the cable housing downward instead of pulling the cable. This type of lever is also known as a "cross lever" due to its popularity in cyclo-cross. The mechanical advantage of the brake lever must be matched to the brake it is connected to in order for the rider to have sufficient leverage and travel to actuate the brake. Using mismatched brakes and levers can result in too much mechanical advantage, and hence not enough travel to properly actuate the brake (V-brakes with conventional levers), or too little mechanical advantage, requiring a very strong pull to apply the brakes hard (V-brake levers with other types of brake). Mechanical (cable) brake levers come in two varieties based on the length of brake cable pulled for a given amount of lever movement: Standard-pull levers work with most brake designs, including caliper brakes, traditional cantilever brakes, and mechanically actuated disc brakes branded for "road" use. Long-pull levers work with "direct-pull" cantilever brakes, such as Shimano "V-Brakes", and mechanically actuated disc brakes branded for "mountain" use. Adapters are available to allow the use of one type of lever with an otherwise incompatible type of rim brake. Some brake levers have adjustable leverage that can be made to work with either type of brake. Others vary their mechanical advantage over the lever's travel, moving the pad quickly at first and then providing more leverage once it contacts the braking surface. Hydraulic brake levers move a piston in a fluid reservoir; the mechanical advantage of the lever depends on the brake system design. Braking technique The motion dynamics of a bicycle cause a transfer of weight to the front wheel during braking, improving the traction on the front wheel. If the front brake is used too hard, momentum may cause the rider and bike to pitch forward – a type of crash sometimes called an "endo". If the rear brake is applied lightly, a slight rear-wheel skid as the bicycle approaches the pitchover limit serves as a signal to reduce force on the front brake. On a low-traction surface or when turning, the front wheel will instead skid first; a bicycle whose front wheel is skidding cannot be balanced and will fall to the side. 
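The pitchover limit follows from simple statics: the bike begins to tip when the braking moment about the front tyre's contact patch exceeds the restoring moment of the rider's weight. A minimal Python sketch, with the centre-of-mass figures assumed typical values rather than measurements:

```python
# Pitchover ("endo") threshold from simple statics.
# The bike starts to tip when the inertial moment about the front
# contact patch exceeds the restoring moment of the weight:
#   m * a * h > m * g * b   =>   a > g * b / h
# Dimensions below are assumed, typical-looking values.

g = 9.81
cm_height = 1.1    # height of combined centre of mass, m (assumed)
cm_setback = 0.6   # horizontal distance from the front contact patch
                   # back to the centre of mass, m (assumed)

a_max = g * cm_setback / cm_height
print(f"solo bike: pitchover above {a_max:.1f} m/s^2 ({a_max / g:.2f} g)")

# A tandem's longer wheelbase puts the centre of mass much farther
# behind the front wheel, raising the threshold beyond what a dry
# tyre (roughly 0.8 g of grip) can deliver, so the tyre skids first.
a_max_tandem = g * 1.3 / 1.1
print(f"tandem:    pitchover above {a_max_tandem:.1f} m/s^2 "
      f"({a_max_tandem / g:.2f} g)")
```

With these assumed dimensions, a solo bike can decelerate at roughly half a g before tipping, while a tandem's threshold sits above the grip of the tyres, which is why, as noted next, heavy front braking on long-wheelbase bicycles ends in a front-wheel skid rather than a flip.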
On tandem bicycles and other long-wheelbase bicycles (including recumbents and other specialized bicycles), the lower relative centre of mass makes it virtually impossible for heavy front braking to flip the bicycle; the front wheel would skid first. In some situations, it is advisable to slow down and to use the rear brake more and the front brake less: When unfamiliar with the braking characteristics of a bicycle. It is important to test the brakes and learn how much hand force is needed when first riding it. When leaning in a turn (or, preferably, brake before turning). Slippery surfaces, such as wet pavement, mud, snow, ice, or loose stones and gravel. It is difficult to recover from a front-wheel skid on a slippery surface, especially when leaned over. Bumpy surfaces: if the front wheel comes off the ground during braking, its rotation will cease completely. Landing on a stopped front wheel with the brakes still applied is likely to cause the front wheel to skid and may flip the rider over the handlebar. Very loose surfaces (such as gravel and loose dirt): in some loose-surface situations, it may be beneficial to completely lock up the rear wheel in order to slow down or maintain control. On very steep slopes with loose surfaces, where any braking will cause the wheels to skid, it can be better to rely on the rear brake more than one normally would in order to keep control of the bicycle. However, neither wheel should stop rotating completely, as this will result in very little control. Steep descents: the slope makes a forward flip easier to trigger, and a front-wheel skid would be very difficult to recover from (a crash is highly probable), whereas a rear-wheel skid still slows the bike without losing too much control. Long descents: alternating the front and back brakes can help prevent hand fatigue and overheating of the wheel rims, which can cause a disastrous tyre blow-out, or boiling of the fluid in the case of hydraulic disc brakes. Flat front tyre: braking a tyre that has little air can cause the tyre to come off the rim, which is likely to cause a crash. It is customary to place the front brake lever on the left in right-side-driving countries, and vice versa, because the hand on the side nearer the centre of the road is more commonly used for hand signals. Placing the front brake lever on the right also mimics the layout on motorcycles and helps avoid confusion when switching between a pedal cycle and a motorcycle. Bicycles without brakes Track bicycles are built without brakes so as to avoid sudden changes in speed when racing on a velodrome. Since track bikes have a fixed gear, braking can be accomplished by reversing the force on the pedals to slow down, or by locking the pedals backwards and inducing a skid. Fixed-gear road bikes may also lack brakes, and slowing or stopping is accomplished as with a track bike. Many fixed-gear bikes, however, are fitted with a front brake for safety reasons, or because it is a legal requirement. Some BMX bicycles are built without brakes to forgo the expense and complication of a detangler. The usual method of stopping is for the rider to put one or both feet on the ground, or to wedge a foot between the seat and the rear tyre, effectively acting as a spoon brake. Cycle speedway is a type of close track racing in the UK, Poland, Australia, and France. The specially built bike has a single freewheel and no brakes. Slowing is done during cornering by dragging the inside foot. 
These bikes are not intended for road use and are kept at the track. In Belgium, Australia, Germany, the UK, France, Italy, Poland, Japan, Denmark, Sweden, and Finland, it is illegal to ride a bicycle without brakes on a public road. Single-lever two-wheel brakes The SureStop braking system uses a single lever that applies the rear brake directly; the friction imparted to the rear brake shoes by the rotation of the rear wheel is then harnessed to actuate the front brake. It is claimed this reduces the risk of some braking-related accidents, including going over the handlebars. The system emphasises the use of the rear brake and fails to make full use of front braking, while being marketed as a solution to the fear of toppling over the handlebars; critics argue that it encourages complacent use of the brake levers and reinforces the myth that the front brakes of bicycles are dangerous. Since the system directly ties the brakes on both wheels together, its legality varies by nation. Cyclists young and old should seek training in the effective use of both brakes to stop in the minimum possible stopping distance in an emergency.
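What that training is worth can be sketched numerically. In the following Python fragment the deceleration limits are assumed, plausible figures: combined front-and-rear braking near the pitchover limit estimated earlier, versus the rear brake alone, which locks and skids at a much lower deceleration because the rear wheel is unweighted during braking:

```python
# Illustrative stopping-distance comparison, ignoring rider reaction time.
# Deceleration limits are assumed figures, not measured values.

def stopping_distance(speed_kmh, decel_ms2):
    v = speed_kmh / 3.6               # convert km/h to m/s
    return v * v / (2 * decel_ms2)    # d = v^2 / (2a)

for style, a in [("front + rear, near the pitchover limit", 5.5),
                 ("rear brake only, rear wheel skidding", 2.5)]:
    print(f"{style}: {stopping_distance(30, a):.1f} m from 30 km/h")
```

Under these assumptions the rear brake alone more than doubles the stopping distance, which is the quantitative core of the argument against systems and habits that underuse the front brake.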
Penny-farthing
The penny-farthing, also known as a high wheel, high wheeler or ordinary, is an early type of bicycle. It was popular in the 1870s and 1880s, its large front wheel providing high speeds owing to the large distance it travelled for every rotation. These bicycles had solid rubber tires, and as a consequence the only shock absorption was in the saddle. The penny-farthing became obsolete in the late 1880s with the development of modern bicycles, which provided similar speed, via a chain-driven gear train, and comfort, from the use of pneumatic tires. These later bikes were marketed as "safety bicycles" because of the greater ease of mounting and dismounting, the reduced danger of falling, and the reduced height to fall from, in comparison to penny-farthings. The name came from the British penny and farthing coins, the penny being much larger than the farthing, so that the side view of the bicycle resembles a larger penny (the front wheel) leading a smaller farthing (the rear wheel). Although the name "penny-farthing" is now the most common, it was probably not used until the machines had been almost superseded. The first recorded print reference is from 1891, in Bicycling News. For most of their reign they were simply known as "bicycles" and were the first machines to be so called, although they were not the first two-wheeled, pedalled vehicles. In the late 1890s, the name "ordinary" began to be used, to distinguish them from the emerging safety bicycles, and that term, along with "hi-wheel" and variants, is preferred by many modern enthusiasts. Following the popularity of the boneshaker, Eugène Meyer, a Frenchman, invented the high-wheeler bicycle design in 1869 and fashioned the wire-spoke tension wheel. Around 1870 English inventor James Starley, described as the father of the bicycle industry, and others began producing bicycles based on the French boneshaker but with front wheels of increasing size, because larger front wheels enabled higher speeds on bicycles limited to direct drive. In 1878, Albert Pope began manufacturing the Columbia bicycle outside Boston, starting the machine's two-decade heyday in the United States. Although the trend was short-lived, the penny-farthing became a symbol of the late Victorian era. Its popularity also coincided with the birth of cycling as a sport. History Origins and development Eugène Meyer of Paris is now regarded as the father of the high bicycle by the International Cycling History Conference, in place of James Starley. Meyer patented a wire-spoke tension wheel with individually adjustable spokes in 1869. They were called "spider" wheels in Britain when introduced there. Meyer produced a classic high bicycle design during the 1880s. James Starley in Coventry added the tangent spokes and the mounting step to his famous bicycle named "Ariel". He is regarded as the father of the British cycling industry. Ball bearings, solid rubber tires and hollow-section steel frames became standard, reducing weight and making the ride much smoother. Penny-farthing bicycles are dangerous because of the risk of headers (taking a fall over the handlebars head-first). Makers developed "moustache" handlebars, allowing the rider's knees to clear them, "Whatton" handlebars that wrapped around behind the legs, and ultimately (though too late, after the development of the safety bicycle) the American "Eagle" and "Star" bicycles, whose large and small wheels were reversed. 
This prevented headers but left the danger of being thrown backwards when riding uphill. Other attempts included moving the seat rearward and driving the wheel by levers or treadles, as in the "Xtraordinary" and "Facile", or by gears (by chain, as in the "Kangaroo", or at the hub, as in the "Crypto"); another option was to move the seat well back, as in the "Rational". Even so, bicycling remained the province of the urban well-to-do, and mainly men, until the 1890s, and was a salient example of conspicuous consumption. Attributes The penny-farthing used a larger wheel than the velocipede, thus giving higher speeds on all but the steepest hills. In addition, the large wheel gave a smoother ride, important before the invention of pneumatic tires. An attribute of the penny-farthing is that the rider sits above the front axle. When the wheel strikes rocks and ruts, or under hard braking, the rider can be pitched forward off the bicycle head-first. Headers were relatively common and a significant, sometimes fatal, hazard. Riders coasting down hills often took their feet off the pedals and put them over the tops of the handlebars, so they would be pitched off feet-first instead of head-first. Penny-farthing bicycles often used similar materials and construction to earlier velocipedes: cast iron frames, solid rubber tires, and plain bearings for pedals, steering, and wheels. They were often quite durable and required little service. For example, when cyclist Thomas Stevens rode around the world in the 1880s, he reported only one significant mechanical problem over the entire distance, caused when the local military confiscated his bicycle and damaged the front wheel. End of an era The well-known dangers of the penny-farthing were, for the time of its prominence, outweighed by its strengths. While it was a difficult, dangerous machine, it was simpler, lighter, and faster than the safer velocipedes of the time. Two new developments changed this situation, and led to the rise of the safety bicycle. The first was the chain drive, originally used on tricycles, allowing a gear ratio to be chosen independent of the wheel size. The second was the pneumatic bicycle tire, allowing smaller wheels to provide a smooth ride. The nephew of one of the men responsible for the popularity of the penny-farthing was largely responsible for its demise. James Starley had built the Ariel ("spirit of the air") high-wheeler in 1870; but this was a time of innovation, and when chain drives were upgraded so that each link had a small roller, higher and higher speeds became possible without the need for a large front wheel. In 1885, Starley's nephew John Kemp Starley took these new developments to launch the modern bicycle, the Rover safety bicycle, so called because the rider, seated much lower and farther behind the front wheel's contact point, was less prone to a header. In 1888, when John Dunlop re-invented the pneumatic tire for his son's tricycle, the high wheel was made obsolete. The comfortable ride once found only on tall wheels could now be enjoyed on smaller chain-driven bicycles. By 1893, high-wheelers were no longer being produced. Use lingered into the 1920s in track cycling, until racing safety bicycles were adequately designed. Modern usage Today, enthusiasts ride restored penny-farthings, and a few manufacturers build new ones with modern materials. Manufacturers include Richards of England (Hull, UK), Rideable Bicycle Replicas (US), Trott & Sons (UK) and UDC (Taiwan). 
One of these manufacturers, UDC Penny Farthings, the largest penny-farthing retailer in the United Kingdom, recorded record sales of penny-farthings in 2020 during the COVID-19 lockdown. The Penny Farthing Club is a cycling club that was founded in 2013 by Neil Laughton. The club offers rider training and bike tours of London and other UK cities, and hosts club events such as penny-farthing polo. Characteristics The penny-farthing is a direct-drive bicycle, meaning the cranks and pedals are fixed directly to the hub. Instead of using gears to multiply the revolutions of the pedals, the driven wheel is enlarged so that each turn of the pedals carries the rider farther. But the rider needs to be able to both mount the saddle and reach the pedals, and if the wheel is too large, this will not be achievable. For instance, a 5'9" cyclist could, depending on leg length and the height of the saddle, at best ride a 50"–54" high wheel. Construction The frame is a single tube following the circumference of the front wheel, then diverting to the trailing wheel. A mounting peg is above the rear wheel. The front wheel is in a rigid fork with little if any trail. A spoon brake is usually fitted on the fork crown, operated by a lever from one of the handlebars. The bars are usually mustache-shaped, dropping from the level of the headset. The saddle mounts on the frame a short distance behind the headset. One particular model, made by the Pope Manufacturing Company in 1886, has a 60-spoke front wheel and a 20-spoke rear wheel, and is fitted with solid rubber tires. The rims, frame, fork, and handlebars are made from hollow steel tubing. The steel axles are mounted in adjustable ball bearings. The leather saddle is suspended by springs. Another model, made by Humber and Co., Ltd., of Beeston, Nottingham, is considerably lighter; it has no step and no brakes, in order to minimize weight. A third model, also made by the Pope Manufacturing Company, has forged steel forks. A brake lever on the right of a straight handlebar operates a spoon brake against the front wheel. All three have cranks that can be adjusted for length. Operation Mounting and dismounting a penny-farthing takes practice, but can be learned in about an hour or two. Mounting is generally achieved on flat, level ground. It is possible to mount a penny-farthing on a slight incline, but this is more challenging, as you need to maintain momentum; once the penny-farthing stops rolling, the rider will fall over if they have not mounted by that point. Dismounting on an incline is also to be avoided, and one's ability to successfully reach the top needs to be considered before even attempting it. Once mounting and dismounting have been mastered, speed moderation is the next key skill to learn. If you never cycle faster than you can react to potential hazards, you can avoid disaster. For instance, do not freewheel (feet off the pedals) down a steep hill which leads to a busy junction or roundabout, or which has a blind bend where you cannot see whether there is a stopped vehicle or other obstruction. Slow-pedaling is a key skill to master: if you can slow-pedal up to a red light, you can stage your approach so that when you arrive it is hopefully green and you do not have to dismount. When cycling downhill, you must start braking at the top of the hill, applying resistance on the pedals at the top and throughout the descent. 
Sharp braking with a mechanical brake must also be avoided, or the rider risks going over the handlebars. The last key skill needed to ride a penny-farthing safely is changing direction. Turning the handlebars too sharply risks a header, so turns should be wide and gentle. Skill and care must likewise be exercised when approaching junctions and roundabouts. This is where speed moderation becomes important: the rider must stage the approach so as to arrive at the junction or roundabout when a gap in traffic permits safely joining the flow of traffic. The same applies to turning across opposing traffic: the approach must be staged so that the turn is reached when a gap allows it. A rider who cannot enter a junction or roundabout safely must stop and dismount. Learners should not ride a penny-farthing on busy roads, as they will not yet have the skills to stay safe; riders should venture onto the roads only once they can mount and dismount reflexively and have mastered speed moderation, or they risk serious injury or death. In most other respects, once mounted, riding a penny-farthing is much like riding any other bicycle with respect to anticipating hazards, signaling, and defensive cycling. Penny-farthings are legal to ride on UK roads, but riders should check the laws of any country in which they intend to ride. Performance Frederick Lindley Dodds, of Stockton-on-Tees, England, is credited with having set the first hour record, covering an estimated distance of 15 miles and 1,480 yards (25.493 km) on a high-wheeler during a race on the Fenner's Track, Cambridge University on March 25, 1876. The furthest (paced) hour record ever achieved on a penny-farthing bicycle was by William A. Rowe, an American, in 1886. The record for riding from Land's End to John o' Groats on a penny-farthing was set in 1886 by George Pilkington Mills with a time of five days, one hour, and 45 minutes. This record was broken in 2019 by Richard Thoday with a time of four days, 11 hours and 52 minutes. Until the 21st century, the last paced hour record to be set on a penny-farthing was probably B. W. Attlee's 1891 English amateur record of . This was beaten by Scots cyclist Mark Beaumont at Herne Hill Velodrome on 16 June 2018 when he covered . In 1884, Thomas Stevens rode a Columbia penny-farthing from San Francisco to Boston, becoming the first cyclist to cross the United States. In 1885–86, he continued from London through Europe, the Middle East, China, and Japan, to become the first to ride around the world. Tremendous feats of balance were reported, including negotiating a narrow bridge parapet and riding down the U.S. Capitol steps on the American Star Bicycle, which had the small wheel in front. In popular culture The bike, with the one wheel dominating, led to riders being referred to in America as "wheelmen", a name that lived on for nearly a century in the League of American Wheelmen until renamed the League of American Bicyclists in 1994. Clubs of racing cyclists wore uniforms of peaked caps, tight jackets and knee-length breeches, with leather shoes, the caps and jackets displaying the club's colors. In 1967 collectors and restorers of penny-farthings (and other early bicycles) founded the Wheelmen, a non-profit organization "dedicated to keeping alive the heritage of American cycling". The high-wheeler lives on in the gear inch units used by cyclists in English-speaking countries to describe gear ratios.
These are calculated by multiplying the wheel diameter in inches by the number of teeth on the front chain-wheel and dividing by the teeth on the rear sprocket. The result is the equivalent diameter of a penny-farthing wheel. A 60-inch gear, the largest practicable size for a high-wheeler, is nowadays a middle gear of a utility bicycle, while top gears on many exceed 100 inches. There was at least one larger Columbia made in the mid-1880s, but 60 inches was the largest size in regular production. A penny-farthing is the logo of The Village in the cult 1960s television series The Prisoner, and is also featured in the show's closing titles. Co-creator and star Patrick McGoohan stated that the bike represented slowing down the wheels of progress. The penny-farthing is a symbol of the cities of Sparta, Wisconsin; Davis, California; and Redmond, Washington. Events Each February in Evandale, Tasmania, penny-farthing enthusiasts from around the world converge on the small village for a series of penny-farthing races, including the national championship. In October there is a bicycle ride from the statue of an 1890s bicyclist on a penny-farthing in Port Byron, Illinois, named "Will B. Rolling", to a similar statue in Sparta, Wisconsin, named "Ben Bikin'". In 2004, British charity fundraiser and leukemia patient Lloyd Scott, then aged 43, rode a penny-farthing across the Australian outback to raise money for charity. In November 2008, Briton Joff Summerfield completed a round-the-world trip on a penny-farthing. Summerfield spent two-and-a-half years cycling through 23 countries, visiting locations including the Taj Mahal, Angkor Wat and Mount Everest. Knutsford in England has hosted the Knutsford Great Race every 10 years since 1980. The 1980 race had 15 team entries, and there were 16 in 1990 and 2000. The 2010 race was limited to 50 teams and was in aid of the ShelterBox charity. In 2012, the first Clustered Spires High Wheel Race took place in Frederick, Maryland, USA. This is the country's only race of its kind: a one-hour criterium race around a course through the historic downtown district.
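As a worked example of the gear-inch calculation described above, here is a minimal Python sketch; the wheel sizes and tooth counts are illustrative values, not taken from any particular bicycle:

```python
def gear_inches(wheel_diameter_in: float, chainring_teeth: int, sprocket_teeth: int) -> float:
    """Equivalent penny-farthing wheel diameter, in inches, for a given drivetrain."""
    return wheel_diameter_in * chainring_teeth / sprocket_teeth

# A direct-drive high-wheeler is simply its own wheel diameter (1:1 ratio):
print(gear_inches(60, 1, 1))     # 60.0 -- a 60-inch high wheel

# Hypothetical modern setup: 27-inch wheel, 48-tooth chainring, 12-tooth sprocket.
print(gear_inches(27, 48, 12))   # 108.0 -- a "108-inch gear", beyond any practicable high-wheeler
```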
Technology
Human-powered transport
null
183481
https://en.wikipedia.org/wiki/New%20World%20vulture
New World vulture
Cathartidae, known commonly as New World vultures or condors, are a family of birds of prey consisting of seven extant species in five genera. It includes five extant vultures and two extant condors found in the Americas. They are known as "New World" vultures to distinguish them from Old World vultures, with which the Cathartidae does not form a single clade despite the two being similar in appearance and behavior as a result of convergent evolution. Like other vultures, New World vultures are scavengers, having evolved to feed on the carcasses of dead animals without any notable ill effects. Some species of New World vulture have a good sense of smell, whereas Old World vultures find carcasses exclusively by sight. Other adaptations shared by both Old and New World vultures include a bald head, devoid of feathers, which prevents rotting matter from accumulating while feeding, and an extremely disease-resistant digestive system to protect against scavenging-related germs. Taxonomy and systematics The family Cathartidae was introduced (as the subfamily Cathartinae) by the French ornithologist Frédéric de Lafresnaye in 1839. The New World vultures comprise seven species in five genera: Coragyps, Cathartes, Gymnogyps, Sarcoramphus, and Vultur. Of these, only Cathartes is not monotypic. The family's scientific name, Cathartidae, comes from cathartes, Greek for "purifier". Although New World vultures and Old World vultures are not very closely related, they share many resemblances because of convergent evolution. Phylogenetic analyses including all Cathartidae species found two primary clades. The first consists of the black vulture (Coragyps atratus) together with the three Cathartes species (the lesser yellow-headed vulture (C. burrovianus), greater yellow-headed vulture (C. melambrotus), and turkey vulture (C. aura)), while the second consists of the king vulture (Sarcoramphus papa), California condor (Gymnogyps californianus), and Andean condor (Vultur gryphus). New World vultures were traditionally placed in a family of their own in the Falconiformes. However, in the late 20th century some ornithologists argued that they are more closely related to storks on the basis of karyotype, morphological, and behavioral data. Thus some authorities placed them in the Ciconiiformes with storks and herons; Sibley and Monroe (1990) even considered them a subfamily of the storks. This was criticized, and an early DNA sequence study was based on erroneous data and was subsequently retracted. There was then an attempt to raise the New World vultures to the rank of an independent order, Cathartiformes, not closely associated with either the birds of prey or the storks and herons. Recent multi-locus DNA studies on the evolutionary relationships between bird groups indicate that New World vultures are related to the other birds of prey, excluding the Falconidae. This analysis argues that New World vultures should either be a part of a new order Accipitriformes or part of an order (Cathartiformes) closely related to, but distinct from, other birds of prey. New World vultures are a sister group to Accipitriformes, a group consisting of Accipitridae, the osprey and the secretarybird. Both groups are basal members of the recently recognized clade Afroaves. Extinct species and fossils The fossil history of the Cathartidae is complex, and many taxa that may possibly have been New World vultures have at some stage been treated as early representatives of the family.
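The two primary clades recovered by the phylogenetic analyses described above can be written compactly in Newick notation, the standard text format for phylogenetic trees. The following minimal Python sketch shows only the two-clade split reported in the text; the branching order within each clade is a simplification for illustration, not a claim from the analyses:

```python
# Newick encoding of the two primary Cathartidae clades described above.
# The internal arrangement within each clade is illustrative only.
newick = (
    "("
    "(Coragyps_atratus,(Cathartes_burrovianus,Cathartes_melambrotus,Cathartes_aura)),"  # clade 1
    "(Sarcoramphus_papa,Gymnogyps_californianus,Vultur_gryphus)"                        # clade 2
    ");"
)
print(newick)
```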
There is no unequivocal European record from the Neogene. It is clear that the Cathartidae had a much higher diversity in the Plio-Pleistocene, rivalling the current diversity of Old World vultures and their relatives in shapes, sizes, and ecological niches. Extinct taxa include:
Diatropornis ("European vulture") – Late Eocene/Early Oligocene to ?Middle Oligocene of France
Phasmagyps – Chadronian of Colorado
Cathartidae gen. et sp. indet. – Late Oligocene of Mongolia
Brasilogyps – Late Oligocene/Early Miocene of Brazil
Hadrogyps ("American dwarf vulture") – Middle Miocene of SW North America
Cathartidae gen. et sp. indet. – Late Miocene/Early Pliocene of Lee Creek Mine, USA
Pliogyps ("Miocene vulture") – Late Miocene to Late Pliocene of S North America
Perugyps ("Peruvian vulture") – Pisco Late Miocene/Early Pliocene of SC Peru
Dryornis ("Argentinean vulture") – Early to Late? Pliocene of Argentina; may belong to the modern genus Vultur
Cathartidae gen. et sp. indet. – Middle Pliocene of Argentina
Aizenogyps ("South American vulture") – Late Pliocene of SE North America
Breagyps ("long-legged vulture") – Late Pleistocene of SW North America
Geronogyps – Late Pleistocene of Argentina and Peru
Gymnogyps varonai – Late Quaternary of Cuba
Wingegyps – Late Pleistocene of Brazil
Pleistovultur – Late Pleistocene/Early Holocene of Brazil
Cathartidae gen. et sp. indet. – Cuba
Gymnogyps amplus – Late Pleistocene to Holocene of W North America
Description New World vultures are generally large, ranging in length from the lesser yellow-headed vulture at 56–61 centimeters (22–24 inches) up to the California and Andean condors, both of which can reach 120 centimeters (48 inches) in length and weigh 12 or more kilograms (26 or more pounds). Plumage is predominantly black or brown, and is sometimes marked with white. All species have featherless heads and necks. In some, this skin is brightly colored, and in the king vulture it is developed into colorful wattles and outgrowths. All New World vultures have long, broad wings and a stiff tail, suitable for soaring. They are the best adapted to soaring of all land birds. The feet are clawed but weak and not adapted to grasping. The front toes are long with small webs at their bases. No New World vulture possesses a syrinx, the vocal organ of birds. Therefore, the voice is limited to infrequent grunts and hisses. The beak is slightly hooked and is relatively weak compared with those of other birds of prey. This is because it is adapted to tear the weak flesh of partially rotted carrion, rather than fresh meat. The nostrils are oval and set in a soft cere. The nasal passage is perforate, not divided by a septum, so that when looking from the side, one can see through the beak. The eyes are prominent, and, unlike those of eagles, hawks, and falcons, they are not shaded by a brow bone. Members of Coragyps and Cathartes have a single incomplete row of eyelashes on the upper lid and two rows on the lower lid, while Gymnogyps, Vultur, and Sarcoramphus lack eyelashes altogether. New World vultures have the unusual habit of urohidrosis, or defecating on their legs to cool them evaporatively. As this behavior is also present in storks, it is one of the arguments for a close relationship between the two groups. Distribution and habitat New World vultures are restricted to the western hemisphere, ranging from southern Canada to South America. Most species are mainly resident, but the turkey vulture breeds in Canada and the northern US and migrates south in the northern winter.
New World vultures inhabit a large variety of habitats and ecosystems, ranging from deserts to tropical rainforests, and at elevations from sea level to mountain ranges, using their highly adapted sense of smell to locate carrion. These birds are also occasionally seen in human settlements, perhaps drawn by the food source provided by roadkill. Behavior and ecology Breeding New World vultures and condors do not build nests, but lay their eggs on bare surfaces. On average, one to three eggs are laid, depending on the species. Chicks are naked on hatching and later grow down feathers. Like most birds, the parents feed the young by regurgitation. The young are altricial, fledging in 2 to 3 months. California condor chicks fledge at 5 to 6 months, while Andean condor chicks fledge at anywhere from 6 to 10 months. Feeding All living species of New World vultures and condors are scavengers. Their diet consists primarily of carrion, and they are commonly seen near carcasses. Other additions to the diet include fruit (especially rotten fruit) and garbage. The genus Cathartes locates carrion by detecting the scent of ethyl mercaptan, a gas produced by the bodies of decaying animals. The olfactory lobe of the brain in these species, which is responsible for processing smells, is particularly large compared to that of other animals. Other species, such as the American black vulture and the king vulture, have weak senses of smell and find food only by sight, sometimes by following Cathartes vultures and other scavengers. Tolerance to bacterial toxins in decaying meat Vultures possess a very acidic digestive system, with their gut dominated by two groups of anaerobic bacteria that help them withstand toxins present in decaying prey. In a 2014 study of 50 (turkey and black) vultures, researchers analyzed the microbial community, or microbiome, of the facial skin and the large intestine. The facial bacterial flora and the gut flora overlapped somewhat, but in general, the facial flora was much more diverse than the gut flora, which is in contrast to other vertebrates, where the gut flora is more diverse. Two anaerobic faecal bacteria groups that are pathogenic in other vertebrates stood out: Clostridia and Fusobacteriota (formerly Fusobacteria). They were especially common in the gut, with Clostridia DNA sequence counts between 26% and 85% relative to total sequence counts, and Fusobacteriota between 0.2% and 54% in black vultures and 2% to 69% of all counts in turkey vultures. Unexpectedly, both groups of anaerobic bacteria were also found on the air-exposed facial skin samples, with Clostridia at 7%–40% and Fusobacteriota up to 23%. It is assumed that vultures acquire them when they insert their heads into the body cavities of rotting carcasses. The regularly ingested Clostridia and Fusobacteriota outcompete other bacterial groups in the gut and become predominant. Genes that encode tissue-degrading enzymes and toxins associated with Clostridium perfringens have been found in the vulture gut metagenome. This supports the hypothesis that vultures benefit from the bacterial breakdown of carrion, while at the same time tolerating the bacterial toxins. Status and conservation The California condor is critically endangered. It formerly ranged from Baja California to British Columbia, but by 1937 was restricted to California. In 1987, all surviving birds were removed from the wild into a captive breeding program to ensure the species' survival.
In 2005, there were 127 California condors in the wild. As of October 31, 2009, there were 180 birds in the wild. The Andean condor is vulnerable. The American black vulture, turkey vulture, lesser yellow-headed vulture, and greater yellow-headed vulture are listed as species of Least Concern by the IUCN Red List. The king vulture is also listed as Least Concern, although there is evidence of a decline in the population. In culture The American black vulture and the king vulture appear in a variety of Maya hieroglyphs in Maya codices. The king vulture is commonly represented, with its glyph being easily distinguishable by the knob on the bird's beak and by the concentric circles that represent the bird's eyes. It is sometimes portrayed as a god with a human body and a bird head. According to Maya mythology, this god often carried messages between humans and the other gods. The bird is also used to represent Cozcaquauhtli, the thirteenth day of the month in the Aztec calendar. Meanwhile, the American black vulture is normally connected with death or shown as a bird of prey, and its glyph is often depicted attacking humans. This species lacks the religious connections that the king vulture has. While some of the glyphs clearly show the American black vulture's open nostril and hooked beak, some are assumed to be this species because they are vulture-like, painted black, and lack the king vulture's knob.
Biology and health sciences
Accipitriformes and Falconiformes
null
183555
https://en.wikipedia.org/wiki/Hypericum%20perforatum
Hypericum perforatum
Hypericum perforatum, commonly known as St John's wort (sometimes perforate St John's wort or common St John's wort), is a flowering plant in the family Hypericaceae. It is a perennial plant that grows up to tall, with many yellow flowers that have clearly visible black glands around their edges, long stamens (male reproductive organs), and three pistils (female reproductive organs). Probably a hybrid between the closely related H. attenuatum and H. maculatum (imperforate St John's wort) that originated in Siberia, the species is now found worldwide. It is native to temperate regions across Eurasia and North Africa, and has been introduced to East Asia, Australia, New Zealand, and parts of North and South America. In many areas where it is not native, H. perforatum is considered a noxious weed. It densely covers open areas to the exclusion of native plants, and is poor grazing material. As such, methods for biocontrol have been introduced in an attempt to slow or reverse the spread of the species. The species produces numerous chemical compounds that are highly active. These chemicals are harmful to large animals, especially sheep, and help to deter herbivores from consuming the plant. Other chemicals in the plant, such as hypericin and hyperforin, have various uses in medicine. St John's wort has been used in traditional medicine since at least the first century AD, often as a cure-all or panacea. The oil from its glands can be extracted, or its above-ground parts can be ground into a powder called herba hyperici. In modern times, its use as an antidepressant has been the focus of numerous studies and clinical trials; however, the active ingredients can be very harmful or even lethal when taken alongside other medicines. Description Hypericum perforatum is an herbaceous perennial plant with hairless (glabrous) stems and leaves. The root of each plant is slender and woody with many small, fibrous side roots and also extensive, creeping rhizomes. The central root grows to a depth of into the soil depending on conditions. The crown of the root is woody. Its stems are erect and branched in the upper section, and usually range from 0.3 metres to 1 metre in height. The stems are woody near their base and look like they have segmented joints from the scars left behind after the leaves fall off. The stems of H. perforatum are rusty-yellow to rosy in color with two distinct edges and usually have bark that sheds near the base. The stems persist through the winter and sprout new growth with flower buds in the following year; first-year growth does not produce flowers. It has leaves that attach on opposite sides of the stems without a stalk (sessile). The leaves vary in shape from being very narrow and almost grass-like (linear), to a rounded oval slightly wider at the base with a rounded tip or not much of a tip (elliptic), or even narrow with the widest portion towards the end of the leaf like a reversed lance point, but still long and narrow (oblanceolate). The principal leaves range in length from 0.8 to 3.5 centimetres and 0.31–1.6 centimetres in width. Leaves borne on the branches subtend the shortened branchlets. The leaves are yellow-green in color, with scattered translucent dots of glandular tissue. The dots are clearly visible when held up to the light, giving the leaves a perforated appearance. The edges (margins) of the leaves usually have scattered black dots, often called dark glands, though sometimes they will appear away from the edges.
The odor of the plant is faint but aromatic, resembling that of resins like balsam. The taste of the plant is bitter and acrid. Flowering characteristics The flowers are conspicuous and showy, measuring about across, and are bright yellow with black dots along the edges of the petals. Each of the flowers normally has five large petals and five smaller leaf-like sepals below them. The sepals are about in length, green in color, are shaped like the head of a spear (lanceolate) with a pointed tip, and bear the same translucent and black glands as the leaves. The petals are significantly longer, in length, and have an oblong shape. They completely hide the sepals from the front side of the flower. The many bright yellow stamens are united at the base into three bundles. The stalk portion of the stamens, the filaments, vary in length and stick out in every direction from the center of the flower. The pollen grains are pale brown to orange in color. The flowers are arranged along one side of each flowering stem with two flowers at each node (a helicoid cyme) at the ends of the upper branches, appearing between late spring and early to mid-summer. Each flowering stem bears many flowers, between 25 and 100, and is also quite leafy. The fruit of Hypericum perforatum is a capsule in length containing the seeds in three valved chambers. Seeds that are separated from the capsules have a much higher germination rate due to an inhibiting factor in the capsule itself. The black and lustrous seeds are rough, netted with coarse grooves. Each seed is about in size. Each plant may produce an average of 15,000 to 34,000 seeds. Similar species Hypericum maculatum is visually similar to Hypericum perforatum; however, its stems have four ridges instead of two and are also hollow. In addition, its leaves have fewer translucent glands and more dark glands. H. maculatum is native to the Old World but has also been introduced to North America. In North America several native species may be confused with Hypericum perforatum. Hypericum anagalloides is a low-growing creeping plant with rounder leaves and fewer stamens. Hypericum boreale is a smaller plant with more delicate flowers. Hypericum canadense has smaller flowers with sepals that show between the petals. Hypericum concinnum has flowers with petals that bend backward at the tip and also has much narrower, gray-green leaves. Growing in riparian areas along rivers, Hypericum ellipticum has wider leaves with a more elliptic shape. Hypericum scouleri has leaves that are broader at the base and also thicker. All except H. concinnum grow in environments that are generally more moist than where H. perforatum is found. Phytochemistry The most common active chemicals in Hypericum perforatum are hypericin and pseudohypericin (naphthodianthrones), and hyperforin (a phloroglucinol derivative). The species contains a host of essential oils, the bulk of which are sesquiterpenes. In the wild, the concentrations of any active chemicals can vary widely among individual plants and populations.
Biology and health sciences
Malpighiales
Plants
183701
https://en.wikipedia.org/wiki/Radiolaria
Radiolaria
The Radiolaria, also called Radiozoa, are unicellular eukaryotes of diameter 0.1–0.2 mm that produce intricate mineral skeletons, typically with a central capsule dividing the cell into the inner and outer portions of endoplasm and ectoplasm. The elaborate mineral skeleton is usually made of silica. They are found as zooplankton throughout the global ocean. As zooplankton, radiolarians are primarily heterotrophic, but many have photosynthetic endosymbionts and are, therefore, considered mixotrophs. The skeletal remains of some types of radiolarians make up a large part of the cover of the ocean floor as siliceous ooze. Because of their rapid turnover of species and their intricate skeletons, radiolarians are important diagnostic fossils, found from the Cambrian onwards. Description Radiolarians have many needle-like pseudopods supported by bundles of microtubules, which aid in the radiolarian's buoyancy. The cell nucleus and most other organelles are in the endoplasm, while the ectoplasm is filled with frothy vacuoles and lipid droplets, keeping them buoyant. Radiolarians often contain symbiotic algae, especially zooxanthellae, which provide most of the cell's energy. Some of this organization is found among the heliozoa, but those lack central capsules and only produce simple scales and spines. Some radiolarians are known for their resemblance to regular polyhedra, such as the icosahedron-shaped Circogonia icosahedra. Taxonomy The radiolarians belong to the supergroup Rhizaria together with (amoeboid or flagellate) Cercozoa and (shelled amoeboid) Foraminifera. Traditionally the radiolarians have been divided into four groups: Acantharea, Nassellaria, Spumellaria and Phaeodarea. Phaeodarea is, however, now considered to be a cercozoan. Nassellaria and Spumellaria both produce siliceous skeletons and were therefore grouped together in the group Polycystina. Despite some initial suggestions to the contrary, this is also supported by molecular phylogenies. The Acantharea produce skeletons of strontium sulfate and are closely related to a peculiar genus, Sticholonche (Taxopodida), which lacks an internal skeleton and was for a long time considered a heliozoan. The Radiolaria can therefore be divided into two major lineages: Polycystina (Spumellaria + Nassellaria) and Spasmaria (Acantharia + Taxopodida). Several higher-order groups have been detected in molecular analyses of environmental data, particularly groups related to Acantharia and Spumellaria. These groups are so far completely unknown in terms of morphology and physiology, and the radiolarian diversity is therefore likely to be much higher than what is currently known. The relationship between the Foraminifera and Radiolaria is also debated. Molecular trees support their close relationship, a grouping termed Retaria. But whether they are sister lineages or whether the Foraminifera should be included within the Radiolaria is not known. Biogeography In the diagram on the right, panel (a) illustrates generalized radiolarian provinces and their relationship to water mass temperature (warm versus cool color shading) and circulation (gray arrows). Due to high-latitude water mass submergence under warm, stratified waters in lower latitudes, radiolarian species occupy habitats at multiple latitudes and depths throughout the world's oceans. Thus, marine sediments from the tropics reflect a composite of several vertically stacked faunal assemblages, some of which are contiguous with higher latitude surface assemblages.
Sediments beneath polar waters include cosmopolitan deep-water radiolarians, as well as high-latitude endemic surface water species. Stars in (a) indicate the latitudes sampled, and the gray bars highlight the radiolarian assemblages included in each sedimentary composite. The horizontal purple bars indicate latitudes known for good radiolarian (silica) preservation, based on surface sediment composition. Data show that some species were extirpated from high latitudes but persisted in the tropics during the late Neogene, either by migration or range restriction (b). With predicted global warming, modern Southern Ocean species will not be able to use migration or range contraction to escape environmental stressors, because their preferred cold-water habitats are disappearing from the globe (c). However, tropical endemic species may expand their ranges toward midlatitudes. The color polygons in all three panels represent generalized radiolarian biogeographic provinces, as well as their relative water mass temperatures (cooler colors indicate cooler temperatures, and vice versa). Radiolarian shells Radiolarians are unicellular predatory protists encased in elaborate globular shells (or "capsules"), usually made of silica and pierced with holes. Their name comes from the Latin for "radius". They catch prey by extending parts of their body through the holes. As with the silica frustules of diatoms, radiolarian shells can sink to the ocean floor when radiolarians die and become preserved as part of the ocean sediment. These remains, as microfossils, provide valuable information about past oceanic conditions. Diversity and morphogenesis Bernard Richards worked under the supervision of Alan Turing (1912–1954) at Manchester as one of Turing's last students, helping to validate Turing's theory of morphogenesis. "Turing was keen to take forward the work that D’Arcy Thompson had published in On Growth and Form in 1917". Many well-known images of radiolarians derive from drawings made by the German zoologist and polymath Ernst Haeckel in 1887. (Richards, Bernard (2005–2006), "Turing, Richards and Morphogenesis", The Rutherford Journal, Volume 1.) Fossil record The earliest known radiolaria date to the very start of the Cambrian period, appearing in the same beds as the first small shelly fauna; they may even be terminal Precambrian in age. They have significant differences from later radiolaria, with a different silica lattice structure and few, if any, spikes on the test. About ninety percent of known radiolarian species are extinct. The skeletons, or tests, of ancient radiolarians are used in geological dating, including for oil exploration and determination of ancient climates. Some common radiolarian fossils include Actinomma, Heliosphaera and Hexadoridium.
Biology and health sciences
SAR supergroup
Plants
183751
https://en.wikipedia.org/wiki/Primality%20test
Primality test
A primality test is an algorithm for determining whether an input number is prime. Among other fields of mathematics, it is used for cryptography. Unlike integer factorization, primality tests do not generally give prime factors, only stating whether the input number is prime or not. Factorization is thought to be a computationally difficult problem, whereas primality testing is comparatively easy (its running time is polynomial in the size of the input). Some primality tests prove that a number is prime, while others like Miller–Rabin prove that a number is composite. Therefore, the latter might more accurately be called compositeness tests instead of primality tests. Simple methods The simplest primality test is trial division: given an input number n, check whether it is divisible by any prime number between 2 and √n (i.e., whether the division leaves no remainder). If so, then n is composite. Otherwise, it is prime. For any divisor p ≥ √n, there must be another divisor n/p ≤ √n, and a prime divisor of n/p, and therefore looking for prime divisors at most √n is sufficient. For example, consider the number 100, whose divisors are these numbers: 1, 2, 4, 5, 10, 20, 25, 50, 100. When all possible divisors up to n are tested, some divisors will be discovered twice. To observe this, consider the list of divisor pairs of 100: 1 × 100, 2 × 50, 4 × 25, 5 × 20, 10 × 10, 20 × 5, 25 × 4, 50 × 2, 100 × 1. Products past 10 × 10 are the reverse of products that appeared earlier. For example, 5 × 20 and 20 × 5 are the reverse of each other. Further, of the two divisors in each such pair, one is at most √100 = 10 and the other is at least 10. This observation generalizes to all n: all divisor pairs of n contain a divisor less than or equal to √n, so the algorithm need only search for divisors less than or equal to √n to guarantee detection of all divisor pairs. Also, 2 is a prime dividing 100, which immediately proves that 100 is not prime. Every positive integer except 1 is divisible by at least one prime number by the Fundamental Theorem of Arithmetic. Therefore the algorithm need only search for prime divisors less than or equal to √n. For another example, consider how this algorithm determines the primality of 17. One has √17 ≈ 4.12, and the only primes ≤ √17 are 2 and 3. Neither divides 17, proving that 17 is prime. For a last example, consider 221. One has √221 ≈ 14.87, and the primes ≤ √221 are 2, 3, 5, 7, 11, and 13. Upon checking each, one discovers that 221 = 13 × 17, proving that 221 is not prime. In cases where it is not feasible to compute the list of primes ≤ √n, it is also possible to simply (and slowly) check all numbers between 2 and √n for divisors. A rather simple optimization is to test divisibility by 2 and by just the odd numbers between 3 and √n, since divisibility by an even number implies divisibility by 2. This method can be improved further. Observe that all primes greater than 3 are of the form 6k + i for a nonnegative integer k and i ∈ {1, 5}. Indeed, every integer is of the form 6k + i for some integer k and i ∈ {0, 1, 2, 3, 4, 5}. Since 2 divides 6k, 6k + 2, and 6k + 4, and 3 divides 6k and 6k + 3, the only possible remainders mod 6 for a prime greater than 3 are 1 and 5. So, a more efficient primality test for n is to test whether n is divisible by 2 or 3, then to check through all numbers of the form 6k + 1 and 6k + 5 which are ≤ √n. This is almost three times as fast as testing all numbers up to √n. Generalizing further, all primes greater than c# (c primorial) are of the form c#·k + i for positive integers k and i < c#, with i coprime to c#. For example, consider 6# = 2 · 3 · 5 = 30. All integers are of the form 30k + i for integers k and i with 0 ≤ i < 30. Now, 2 divides 30k + i whenever i is even, 3 divides it whenever 3 divides i, and 5 divides it whenever 5 divides i. Thus all prime numbers greater than 30 are of the form 30k + i for i ∈ {1, 7, 11, 13, 17, 19, 23, 29}. Of course, not all numbers of the form c#·k + i with i coprime to c# are prime. For example, 77 = 2 · 30 + 17 is not prime, even though 17 is coprime to 30.
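The optimized trial division just described is straightforward to implement. Below is a minimal Python sketch of the 6k ± 1 wheel, written for clarity rather than speed:

```python
import math

def is_prime(n: int) -> bool:
    """Trial division testing 2, 3, then numbers of the form 6k - 1 and 6k + 1 up to sqrt(n)."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0 or n % 3 == 0:
        return False
    # Every remaining prime factor has the form 6k - 1 or 6k + 1.
    limit = math.isqrt(n)
    f = 5                                     # 5 = 6*1 - 1
    while f <= limit:
        if n % f == 0 or n % (f + 2) == 0:    # checks 6k - 1 and 6k + 1
            return False
        f += 6
    return True

print(is_prime(17))    # True
print(is_prime(221))   # False (221 = 13 * 17)
```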
As c# grows, the fraction of remainders coprime to c# decreases, and so the time required to test n decreases (though it is still necessary to check for divisibility by all primes that are less than c). Observations analogous to the preceding can be applied recursively, giving the Sieve of Eratosthenes. One way to speed up these methods (and all the others mentioned below) is to pre-compute and store a list of all primes up to a certain bound, such as all primes up to 200. (Such a list can be computed with the Sieve of Eratosthenes or by an algorithm that tests each incremental number m against all known primes ≤ √m.) Then, before testing n for primality with a large-scale method, n can first be checked for divisibility by any prime from the list. If it is divisible by any of those numbers then it is composite, and any further tests can be skipped. A simple but very inefficient primality test uses Wilson's theorem, which states that n is prime if and only if: (n − 1)! ≡ −1 (mod n). Although this method requires about n modular multiplications, rendering it impractical, theorems about primes and modular residues form the basis of many more practical methods. Heuristic tests These are tests that seem to work well in practice, but are unproven and therefore are not, technically speaking, algorithms at all. The Fermat test and the Fibonacci test are simple examples, and they are very effective when combined. John Selfridge has conjectured that if p is an odd number, and p ≡ ±2 (mod 5), then p will be prime if both of the following hold: 2^(p−1) ≡ 1 (mod p), f(p+1) ≡ 0 (mod p), where f(k) is the k-th Fibonacci number. The first condition is the Fermat primality test using base 2. In general, if p ≡ a (mod x² + 4), where a is a quadratic non-residue (mod x² + 4), then p should be prime if the following conditions hold: 2^(p−1) ≡ 1 (mod p), f(x)_(p+1) ≡ 0 (mod p), where f(x)_k is the k-th Fibonacci polynomial at x. Selfridge, Carl Pomerance and Samuel Wagstaff together offer $620 for a counterexample. Probabilistic tests Probabilistic tests are more rigorous than heuristics in that they provide provable bounds on the probability of being fooled by a composite number. Many popular primality tests are probabilistic tests. These tests use, apart from the tested number n, some other numbers a which are chosen at random from some sample space; the usual randomized primality tests never report a prime number as composite, but it is possible for a composite number to be reported as prime. The probability of error can be reduced by repeating the test with several independently chosen values of a; for two commonly used tests, for any composite n at least half the choices of a detect n's compositeness, so k repetitions reduce the error probability to at most 2^(−k), which can be made arbitrarily small by increasing k. The basic structure of randomized primality tests is as follows: Randomly pick a number a. Check some equality (corresponding to the chosen test) involving a and the given number n. If the equality fails to hold true, then n is a composite number, a is a witness for the compositeness, and the test stops. Return to step one until the required accuracy is reached. After one or more iterations, if n is not found to be a composite number, then it can be declared probably prime. Fermat primality test The simplest probabilistic primality test is the Fermat primality test (actually a compositeness test). It works as follows: Given an integer n, choose some integer a coprime to n and calculate a^(n−1) modulo n. If the result is different from 1, then n is composite. If it is 1, then n may be prime.
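A minimal Python sketch of this Fermat check follows; the number of rounds and the random choice of bases are illustrative implementation decisions, not part of the test's definition:

```python
import random

def fermat_test(n: int, rounds: int = 10) -> bool:
    """Fermat compositeness test: False means n is proven composite,
    True means n is a probable prime for every base tried."""
    if n < 4:
        return n in (2, 3)
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:    # Fermat's little theorem fails: composite
            return False
    return True                      # probably prime (but beware Carmichael numbers)

print(fermat_test(341))   # almost certainly False: most bases expose 341 = 11 * 31
print(fermat_test(97))    # True: 97 is prime
```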
If a^(n−1) (modulo n) is 1 but n is not prime, then n is called a pseudoprime to base a. In practice, if a^(n−1) (modulo n) is 1, then n is usually prime. But here is a counterexample: if n = 341 and a = 2, then 2^340 ≡ 1 (mod 341) even though 341 = 11·31 is composite. In fact, 341 is the smallest pseudoprime base 2 (see Figure 1 of PSW). There are only 21853 pseudoprimes base 2 that are less than 2.5×10^10 (see page 1005 of PSW). This means that, for n up to 2.5×10^10, if 2^(n−1) (modulo n) equals 1, then n is prime, unless n is one of these 21853 pseudoprimes. Some composite numbers (Carmichael numbers) have the property that a^(n−1) is 1 (modulo n) for every a that is coprime to n. The smallest example is n = 561 = 3·11·17, for which a^560 is 1 (modulo 561) for all a coprime to 561. Nevertheless, the Fermat test is often used if a rapid screening of numbers is needed, for instance in the key generation phase of the RSA public key cryptographic algorithm. Miller–Rabin and Solovay–Strassen primality test The Miller–Rabin primality test and Solovay–Strassen primality test are more sophisticated variants, which detect all composites (once again, this means: for every composite number n, at least 3/4 (Miller–Rabin) or 1/2 (Solovay–Strassen) of numbers a are witnesses of compositeness of n). These are also compositeness tests. The Miller–Rabin primality test works as follows: Given an integer n, choose some positive integer a < n. Write n − 1 = 2^s·d, where d is odd. If a^d ≢ ±1 (mod n) and a^(2^r·d) ≢ −1 (mod n) for all 1 ≤ r ≤ s − 1, then n is composite and a is a witness for the compositeness. Otherwise, n may or may not be prime. The Miller–Rabin test is a strong probable prime test (see PSW page 1004). The Solovay–Strassen primality test uses another equality: Given an odd number n, choose some integer a < n. If a^((n−1)/2) ≢ (a/n) (mod n), where (a/n) is the Jacobi symbol, then n is composite and a is a witness for the compositeness. Otherwise, n may or may not be prime. The Solovay–Strassen test is an Euler probable prime test (see PSW page 1003). For each individual value of a, the Solovay–Strassen test is weaker than the Miller–Rabin test. For example, if n = 1905 and a = 2, then the Miller–Rabin test shows that n is composite, but the Solovay–Strassen test does not. This is because 1905 is an Euler pseudoprime base 2 but not a strong pseudoprime base 2 (this is illustrated in Figure 1 of PSW). Frobenius primality test The Miller–Rabin and the Solovay–Strassen primality tests are simple and are much faster than other general primality tests. One method of improving efficiency further in some cases is the Frobenius pseudoprimality test; a round of this test takes about three times as long as a round of Miller–Rabin, but achieves a probability bound comparable to seven rounds of Miller–Rabin. The Frobenius test is a generalization of the Lucas probable prime test. Baillie–PSW primality test The Baillie–PSW primality test is a probabilistic primality test that combines a Fermat or Miller–Rabin test with a Lucas probable prime test to get a primality test that has no known counterexamples. That is, there are no known composite n for which this test reports that n is probably prime. It has been shown that there are no counterexamples for n < 2^64. Other tests Leonard Adleman and Ming-Deh Huang presented an errorless (but expected polynomial-time) variant of the elliptic curve primality test. Unlike the other probabilistic tests, this algorithm produces a primality certificate, and thus can be used to prove that a number is prime.
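For concreteness, here is a minimal Python sketch of the Miller–Rabin criterion described above, repeated for several random bases (the round count is an arbitrary illustrative choice):

```python
import random

def miller_rabin(n: int, rounds: int = 20) -> bool:
    """Miller-Rabin: False means n is proven composite;
    True means n is a strong probable prime for every base tried."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    # Write n - 1 = 2^s * d with d odd.
    s, d = 0, n - 1
    while d % 2 == 0:
        s += 1
        d //= 2
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue                 # a is not a witness
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break                # a is not a witness
        else:
            return False             # a witnesses that n is composite
    return True

print(miller_rabin(1905))            # almost certainly False: 1905 is composite
print(miller_rabin(2**61 - 1))       # True: a Mersenne prime
```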
The algorithm is prohibitively slow in practice. If quantum computers were available, primality could be tested asymptotically faster than by using classical computers. A combination of Shor's algorithm, an integer factorization method, with the Pocklington primality test could solve the problem in . Fast deterministic tests Near the beginning of the 20th century, it was shown that a corollary of Fermat's little theorem could be used to test for primality. This resulted in the Pocklington primality test. However, as this test requires a partial factorization of n − 1, the running time was still quite slow in the worst case. The first deterministic primality test significantly faster than the naive methods was the cyclotomy test; its runtime can be proven to be O((log n)^(c·log log log n)), where n is the number to test for primality and c is a constant independent of n. Many further improvements were made, but none could be proven to have polynomial running time. (Running time is measured in terms of the size of the input, which in this case is ~ log n, that being the number of bits needed to represent the number n.) The elliptic curve primality test can be proven to run in O((log n)^6), if some conjectures on analytic number theory are true. Similarly, under the extended Riemann hypothesis, the deterministic Miller's test, which forms the basis of the probabilistic Miller–Rabin test, can be proved to run in Õ((log n)^4). In practice, this algorithm is slower than the other two for sizes of numbers that can be dealt with at all. Because the implementation of these two methods is rather difficult and creates a risk of programming errors, slower but simpler tests are often preferred. In 2002, the first provably unconditional deterministic polynomial-time test for primality was invented by Manindra Agrawal, Neeraj Kayal, and Nitin Saxena. The AKS primality test runs in Õ((log n)^12) (improved to Õ((log n)^7.5) in the published revision of their paper), which can be further reduced to Õ((log n)^6) if the Sophie Germain conjecture is true. Subsequently, Lenstra and Pomerance presented a version of the test which runs in time Õ((log n)^6) unconditionally. Agrawal, Kayal and Saxena suggest a variant of their algorithm which would run in Õ((log n)^3) if Agrawal's conjecture is true; however, a heuristic argument by Hendrik Lenstra and Carl Pomerance suggests that it is probably false. A modified version of Agrawal's conjecture, the Agrawal–Popovych conjecture, may still be true. Complexity In computational complexity theory, the formal language corresponding to the prime numbers is denoted as PRIMES. It is easy to show that PRIMES is in Co-NP: its complement COMPOSITES is in NP because one can decide compositeness by nondeterministically guessing a factor. In 1975, Vaughan Pratt showed that there existed a certificate for primality that was checkable in polynomial time, and thus that PRIMES was in NP, and therefore in NP ∩ coNP. See primality certificate for details. The subsequent discovery of the Solovay–Strassen and Miller–Rabin algorithms put PRIMES in coRP. In 1992, the Adleman–Huang algorithm reduced the complexity to ZPP = RP ∩ coRP, which superseded Pratt's result. The Adleman–Pomerance–Rumely primality test from 1983 put PRIMES in QP (quasi-polynomial time), which is not known to be comparable with the classes mentioned above.
Because of its tractability in practice, polynomial-time algorithms assuming the Riemann hypothesis, and other similar evidence, it was long suspected but not proven that primality could be solved in polynomial time. The existence of the AKS primality test finally settled this long-standing question and placed PRIMES in P. However, PRIMES is not known to be P-complete, and it is not known whether it lies in classes lying inside P such as NC or L. It is known that PRIMES is not in AC^0. Number-theoretic methods Certain number-theoretic methods exist for testing whether a number is prime, such as the Lucas test and Proth's test. These tests typically require factorization of n + 1, n − 1, or a similar quantity, which means that they are not useful for general-purpose primality testing, but they are often quite powerful when the tested number n is known to have a special form. The Lucas test relies on the fact that the multiplicative order of a number a modulo n is n − 1 for a prime n when a is a primitive root modulo n. If we can show that a is a primitive root modulo n, we can show that n is prime.
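To sketch the idea behind the Lucas test: given the distinct prime factors of n − 1, one checks that a^(n−1) ≡ 1 (mod n) while a^((n−1)/q) ≢ 1 (mod n) for every prime factor q, which proves that a has multiplicative order n − 1 and hence that n is prime. A minimal Python version, assuming the caller supplies the factorization of n − 1:

```python
def lucas_certificate(n, a, prime_factors_of_n_minus_1):
    """Return True if the base a proves n prime via the Lucas test.
    The caller must supply the distinct prime factors of n - 1."""
    if pow(a, n - 1, n) != 1:
        return False                          # order of a does not divide n - 1
    for q in prime_factors_of_n_minus_1:
        if pow(a, (n - 1) // q, n) == 1:
            return False                      # order of a is a proper divisor of n - 1
    return True                               # a has order n - 1, so n is prime

# 7 is prime: 7 - 1 = 6 = 2 * 3, and 3 is a primitive root modulo 7.
print(lucas_certificate(7, 3, [2, 3]))        # True
```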
Mathematics
Prime numbers
null
183824
https://en.wikipedia.org/wiki/Sonic%20boom
Sonic boom
A sonic boom is a sound associated with shock waves created when an object travels through the air faster than the speed of sound. Sonic booms generate enormous amounts of sound energy, sounding similar to an explosion or a thunderclap to the human ear. The crack of a supersonic bullet passing overhead or the crack of a bullwhip are examples of a sonic boom in miniature. Sonic booms due to large supersonic aircraft can be particularly loud and startling, tend to awaken people, and may cause minor damage to some structures. This led to the prohibition of routine supersonic flight overland. Although sonic booms cannot be completely prevented, research suggests that with careful shaping of the vehicle, the nuisance due to sonic booms may be reduced to the point that overland supersonic flight may become a feasible option. A sonic boom does not occur only at the moment an object crosses the sound barrier, and neither is it heard in all directions emanating from the supersonic object. Rather, the boom is a continuous effect that occurs while the object is traveling at supersonic speeds and affects only observers positioned at a point that intersects a region in the shape of a geometrical cone behind the object. As the object moves, this conical region also moves behind it, and when the cone passes over observers, they will briefly experience the "boom". Causes When an aircraft passes through the air, it creates a series of pressure waves in front of the aircraft and behind it, similar to the bow and stern waves created by a boat. These waves travel at the speed of sound and, as the speed of the object increases, the waves are forced together, or compressed, because they cannot get out of each other's way quickly enough. Eventually, they merge into a single shock wave, which travels at the speed of sound, a critical speed known as Mach 1, which is approximately at sea level and . In smooth flight, the shock wave starts at the nose of the aircraft and ends at the tail. Because the different radial directions around the aircraft's direction of travel are equivalent (given the "smooth flight" condition), the shock wave forms a Mach cone, similar to a vapour cone, with the aircraft at its tip. The half-angle α between the direction of flight and the shock wave is given by sin(α) = 1/M, where 1/M is the inverse of the plane's Mach number M. Thus the faster the plane travels, the finer and more pointed the cone is. There is a rise in pressure at the nose, decreasing steadily to a negative pressure at the tail, followed by a sudden return to normal pressure after the object passes. This "overpressure profile" is known as an N-wave because of its shape. The "boom" is experienced when there is a sudden change in pressure; therefore, an N-wave causes two booms – one when the initial pressure rise reaches an observer, and another when the pressure returns to normal. This leads to a distinctive "double boom" from a supersonic aircraft. When the aircraft is maneuvering, the pressure distribution changes into different forms, with a characteristic U-wave shape. Since the boom is being generated continually as long as the aircraft is supersonic, it fills out a narrow path on the ground following the aircraft's flight path, a bit like an unrolling red carpet, and hence known as the boom carpet. Its width depends on the altitude of the aircraft. The distance from the point on the ground where the boom is heard to the aircraft depends on its altitude and the angle α.
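This Mach-cone geometry can be made concrete with a short Python sketch; the Mach number and altitude below are arbitrary example values, and the calculation ignores atmospheric refraction, which the sections that follow show is a significant simplification:

```python
import math

def mach_cone_half_angle(mach: float) -> float:
    """Half-angle (radians) of the Mach cone: sin(alpha) = 1/M, valid for M > 1."""
    return math.asin(1.0 / mach)

def boom_lag_distance(mach: float, altitude_m: float) -> float:
    """Horizontal distance behind the aircraft at which the shock cone reaches
    the ground, assuming straight-line propagation (no refraction)."""
    alpha = mach_cone_half_angle(mach)
    return altitude_m / math.tan(alpha)

m, h = 2.0, 15000.0                             # example: Mach 2 at 15 km altitude
print(math.degrees(mach_cone_half_angle(m)))    # ~30 degrees
print(round(boom_lag_distance(m, h)))           # ~25981 m behind the aircraft
```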
For today's supersonic aircraft in normal operating conditions, the peak overpressure varies from less than 50 to 500 Pa (1 to 10 psf (pounds per square foot)) for an N-wave boom. Peak overpressures for U-waves are amplified two to five times over the N-wave, but this amplified overpressure impacts only a very small area compared to the area exposed to the rest of the sonic boom. The strongest sonic boom ever recorded was 7,000 Pa (144 psf), and it did not cause injury to the researchers who were exposed to it. The boom was produced by an F-4 flying just above the speed of sound at an altitude of . In recent tests, the maximum boom measured during more realistic flight conditions was 1,010 Pa (21 psf). There is a probability that some damage, shattered glass for example, will result from a sonic boom. Buildings in good condition should suffer no damage from pressures of 530 Pa (11 psf) or less. And, typically, community exposure to sonic boom is below 100 Pa (2 psf). Ground motion resulting from the sonic boom is rare and is well below structural damage thresholds accepted by the U.S. Bureau of Mines and other agencies. The power, or volume, of the shock wave depends on the quantity of air that is being accelerated, and thus the size and shape of the aircraft. As the aircraft increases speed, the shock cone gets tighter around the craft and becomes weaker, to the point that at very high speeds and altitudes no boom is heard. The "length" of the boom from front to back depends on the length of the aircraft to a power of 3/2. Longer aircraft therefore "spread out" their booms more than smaller ones, which leads to a less powerful boom. Several smaller shock waves can and usually do form at other points on the aircraft, primarily at any convex points or curves, the leading wing edge, and especially the inlets to the engines. These secondary shockwaves are caused by the air being forced to turn around these convex points, which generates a shock wave in supersonic flow. The later shock waves are somewhat faster than the first one, so they catch up to and add to the main shockwave at some distance away from the aircraft, creating a much more defined N-wave shape. This maximizes both the magnitude and the "rise time" of the shock, which makes the boom seem louder. On most aircraft designs the characteristic distance is about , meaning that below this altitude the sonic boom will be "softer". However, the drag at this altitude or below makes supersonic travel particularly inefficient, which poses a serious problem. Supersonic aircraft Supersonic aircraft are any aircraft that can achieve flight faster than Mach 1, which refers to the speed of sound. "Supersonic includes speeds up to five times faster than the speed of sound, or Mach 5." (Dunbar, 2015) The top speed of a supersonic aircraft normally ranges from . Typically, most aircraft do not exceed . There are many variations of supersonic aircraft. Some designs rely on highly refined aerodynamics, accepting limits on engine power, while others use the efficiency and power of their engines to push a less aerodynamic airframe to greater speeds. A typical model in United States military use costs on average between $13 million and $35 million. Measurement and examples The pressure from sonic booms caused by aircraft is often a few pounds per square foot.
A vehicle flying at greater altitude will generate lower pressures on the ground, because the shock wave reduces in intensity as it spreads out away from the vehicle, but the sonic booms are less affected by vehicle speed. Abatement In the late 1950s, when supersonic transport (SST) designs were being actively pursued, it was thought that although the boom would be very large, the problems could be avoided by flying higher. This assumption was proven false when the North American XB-70 Valkyrie first flew, and it was found that the boom was a problem even at 70,000 feet (21,000 m). It was during these tests that the N-wave was first characterized. Richard Seebass and his colleague Albert George at Cornell University studied the problem extensively and eventually defined a "figure of merit" (FM) to characterize the sonic boom levels of different aircraft. FM is a function of the aircraft's weight and length. The lower this value, the less boom the aircraft generates, with figures of about 1 or lower being considered acceptable. Using this calculation, they found FMs of about 1.4 for Concorde and 1.9 for the Boeing 2707. This eventually doomed most SST projects as public resentment, mixed with politics, eventually resulted in laws that made any such aircraft less useful (flying supersonically only over water, for instance). Small airplane designs like business jets are favored and tend to produce minimal to no audible booms. Building on the earlier research of L. B. Jones, Seebass and George identified conditions in which sonic boom shockwaves could be eliminated. This work was extended by Christine M. Darden and described as the Jones–Seebass–George–Darden theory of sonic boom minimization. This theory approached the problem from a different angle, trying to spread out the N-wave laterally and temporally (longitudinally) by producing a strong, downwards-focused shock (as on the SR-71 Blackbird and Boeing X-43) at a sharp but wide-angled nose cone, which travels at slightly supersonic speed (a bow shock), and by using a swept-back flying wing or an oblique flying wing to smooth out this shock along the direction of flight (the tail of the shock travels at sonic speed). To adapt this principle to existing planes, which generate a shock at their nose cone and an even stronger one at their wing leading edge, the fuselage below the wing is shaped according to the area rule. Ideally, this would raise the characteristic altitude from 40,000 to 60,000 feet (from 12,000 m to 18,000 m), which is where most SST aircraft were expected to fly. This remained untested for decades, until DARPA started the Quiet Supersonic Platform project and funded the Shaped Sonic Boom Demonstration (SSBD) aircraft to test it. SSBD used an F-5 Freedom Fighter. The F-5E was modified with a highly refined shape which lengthened the nose to that of the F-5F model. The fairing extended from the nose back to the inlets on the underside of the aircraft. The SSBD was tested over two years, culminating in 21 flights, and was an extensive study on sonic boom characteristics. After measuring the 1,300 recordings, some taken inside the shock wave by a chase plane, the SSBD demonstrated a reduction in boom by about one-third. Although one-third is not a huge reduction, it could have reduced Concorde's boom to an acceptable level below FM = 1. As a follow-on to SSBD, in 2006 a NASA-Gulfstream Aerospace team tested the Quiet Spike on NASA Dryden's F-15B aircraft 836.
The Quiet Spike is a telescoping boom fitted to the nose of an aircraft, specifically designed to weaken the strength of the shock waves forming on the nose of the aircraft at supersonic speeds. Over 50 test flights were performed. Several flights included probing of the shockwaves by a second F-15B, NASA's Intelligent Flight Control System testbed, aircraft 837. Some theoretical designs do not appear to create sonic booms at all, such as the Busemann biplane. However, creating a shockwave is inescapable whenever an aircraft generates aerodynamic lift. In 2018, NASA awarded Lockheed Martin a $247.5 million contract to construct a design known as the Low Boom Flight Demonstrator, which aims to reduce the boom to the sound of a car door closing. As of October 2023, the first flight was expected in 2024. Perception, noise, and other concerns The sound of a sonic boom depends largely on the distance between the observer and the aircraft shape producing the sonic boom. A sonic boom is usually heard as a deep double "boom", as the aircraft is usually some distance away. The sound is much like that of mortar bombs, commonly used in firework displays. It is a common misconception that only one boom is generated during the subsonic to supersonic transition; rather, the boom is continuous along the boom carpet for the entire supersonic flight. As a former Concorde pilot puts it, "You don't actually hear anything on board. All we see is the pressure wave moving down the airplane – it indicates on the instruments. And that's what we see around Mach 1. But we don't hear the sonic boom or anything like that. That's rather like the wake of a ship – it's behind us." In 1964, NASA and the Federal Aviation Administration began the Oklahoma City sonic boom tests, which caused eight sonic booms per day over six months. Valuable data was gathered from the experiment, but 15,000 complaints were generated and ultimately entangled the government in a class-action lawsuit, which it lost on appeal in 1969. Sonic booms were also a nuisance in North Cornwall and North Devon in the UK, as these areas were underneath the flight path of Concorde. Windows would rattle and, in some cases, the "torching" (masonry mortar underneath roof slates) would be dislodged by the vibration. There has been recent work in this area, notably under DARPA's Quiet Supersonic Platform studies. Research by acoustics experts under this program began looking more closely at the composition of sonic booms, including the frequency content. Several characteristics of the traditional sonic boom "N" wave can influence how loud and irritating it is perceived to be by listeners on the ground. Even strong N-waves such as those generated by Concorde or military aircraft can be far less objectionable if the rise time of the overpressure is sufficiently long. A new metric has emerged, known as perceived loudness, measured in PLdB. This takes into account the frequency content, rise time, and other factors. A well-known example is the snapping of one's fingers, in which the "perceived" sound is nothing more than an annoyance. The energy range of a sonic boom is concentrated in the 0.1–100 hertz frequency range, considerably below that of subsonic aircraft, gunfire and most industrial noise. The duration of a sonic boom is brief: less than a second, 100 milliseconds (0.1 second) for most fighter-sized aircraft and 500 milliseconds for the space shuttle or Concorde jetliner.
The intensity and width of a sonic boom path depend on the physical characteristics of the aircraft and how it is operated. In general, the greater an aircraft's altitude, the lower the over-pressure on the ground. Greater altitude also increases the boom's lateral spread, exposing a wider area to the boom. Over-pressures in the sonic boom impact area, however, will not be uniform. Boom intensity is greatest directly under the flight path, progressively weakening with greater horizontal distance away from the aircraft flight track. The ground width of the boom exposure area scales with altitude and is roughly five times the altitude; the higher an aircraft flies supersonically, the wider the lateral boom spread on the ground. For steady supersonic flight, the boom is described as a carpet boom since it moves with the aircraft as it maintains supersonic speed and altitude. Some maneuvers, such as diving, acceleration, or turning, can cause focusing of the boom. Other maneuvers, such as deceleration and climbing, can reduce the strength of the shock. In some instances, weather conditions can distort sonic booms. Depending on the aircraft's altitude, sonic booms reach the ground 2 to 60 seconds after flyover. However, not all booms are heard at ground level. The speed of sound at any altitude is a function of air temperature. A decrease or increase in temperature results in a corresponding decrease or increase in sound speed. Under standard atmospheric conditions, air temperature decreases with increased altitude. For example, when the sea-level temperature is 59 degrees Fahrenheit (15 °C), the temperature at high altitude drops to minus 49 degrees Fahrenheit (−45 °C). This temperature gradient helps bend the sound waves upward. Therefore, for a boom to reach the ground, the aircraft's speed relative to the ground must be greater than the speed of sound at the ground. Because the speed of sound at ground level is higher than at altitude, an aircraft must typically travel at about Mach 1.12 or faster for a boom to be heard on the ground. The composition of the atmosphere is also a factor. Temperature variations, humidity, atmospheric pollution, and winds can all affect how a sonic boom is perceived on the ground. Even the ground itself can influence the sound of a sonic boom. Hard surfaces such as concrete, pavement, and large buildings can cause reflections that may amplify the sound of a sonic boom. Similarly, grassy fields and profuse foliage can help attenuate the strength of the overpressure of a sonic boom. Currently, there are no industry-accepted standards for the acceptability of a sonic boom. However, work is underway to create metrics that will help in understanding how humans respond to the noise generated by sonic booms. Until such metrics can be established, either through further study or supersonic overflight testing, it is doubtful that legislation will be enacted to remove the current prohibition on supersonic overflight in place in several countries, including the United States. Bullwhip The cracking sound a bullwhip makes when properly wielded is, in fact, a small sonic boom. The end of the whip, known as the "cracker", moves faster than the speed of sound, thus creating a sonic boom. A bullwhip tapers down from the handle section to the cracker. The cracker has much less mass than the handle section. When the whip is sharply swung, the momentum is transferred down the length of the tapering whip, the declining mass being made up for with increasing speed.
Goriely and McMillen showed that the physical explanation is complex, involving the way that a loop travels down a tapered filament under tension.
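The relationship described above between the atmospheric temperature profile and whether a boom reaches the ground can be illustrated with a short calculation. The following Python sketch is illustrative only: it assumes a standard-atmosphere lapse rate and the ideal-gas speed of sound, and ignores wind, humidity and the details of refraction.
import math
GAMMA, R_AIR = 1.4, 287.05        # ratio of specific heats; specific gas constant, J/(kg*K)
LAPSE_RATE = 0.0065               # standard-atmosphere temperature lapse, K per metre
def speed_of_sound(temperature_k):
    # Ideal-gas speed of sound at the given absolute temperature.
    return math.sqrt(GAMMA * R_AIR * temperature_k)
def cutoff_mach(altitude_m, sea_level_temp_c=15.0):
    # Minimum flight Mach number (measured at altitude) for the boom to reach
    # the ground: the aircraft's ground speed must exceed the speed of sound
    # at ground level, which is higher because the air is warmer there.
    t_ground = sea_level_temp_c + 273.15
    t_altitude = t_ground - LAPSE_RATE * altitude_m
    return speed_of_sound(t_ground) / speed_of_sound(t_altitude)
print(round(cutoff_mach(9000), 2))  # about 1.12 for an aircraft near 9 km
With these assumptions the cutoff works out to roughly Mach 1.12 for an aircraft near 9 km, consistent with the figure quoted above; the exact value on any given day depends on the actual temperature profile.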
Physical sciences
Waves
Physics
183884
https://en.wikipedia.org/wiki/Rhombus
Rhombus
In plane Euclidean geometry, a rhombus (plural rhombi or rhombuses) is a quadrilateral whose four sides all have the same length. Another name is equilateral quadrilateral, since equilateral means that all of its sides are equal in length. The rhombus is often called a "diamond", after the diamonds suit in playing cards which resembles the projection of an octahedral diamond, or a lozenge, though the former sometimes refers specifically to a rhombus with a 60° angle (which some authors call a calisson after the French sweet—also see Polyiamond), and the latter sometimes refers specifically to a rhombus with a 45° angle. Every rhombus is simple (non-self-intersecting), and is a special case of a parallelogram and a kite. A rhombus with right angles is a square. Etymology The word "rhombus" comes from the Ancient Greek ῥόμβος (rhombos), meaning something that spins, which derives from the verb ῥέμβω (rhembō), meaning "to turn round and round." The word was used both by Euclid and Archimedes, who used the term "solid rhombus" for a bicone, two right circular cones sharing a common base. The surface we refer to as rhombus today is a cross section of the bicone on a plane through the apexes of the two cones. Characterizations A simple (non-self-intersecting) quadrilateral is a rhombus if and only if it is any one of the following: a parallelogram in which a diagonal bisects an interior angle a parallelogram in which at least two consecutive sides are equal in length a parallelogram in which the diagonals are perpendicular (an orthodiagonal parallelogram) a quadrilateral with four sides of equal length (by definition) a quadrilateral in which the diagonals are perpendicular and bisect each other a quadrilateral in which each diagonal bisects two opposite interior angles a quadrilateral ABCD possessing a point P in its plane such that the four triangles ABP, BCP, CDP, and DAP are all congruent a quadrilateral ABCD in which the incircles in triangles ABC, BCD, CDA and DAB have a common point Basic properties Every rhombus has two diagonals connecting pairs of opposite vertices, and two pairs of parallel sides. Using congruent triangles, one can prove that the rhombus is symmetric across each of these diagonals. It follows that any rhombus has the following properties: Opposite angles of a rhombus have equal measure. The two diagonals of a rhombus are perpendicular; that is, a rhombus is an orthodiagonal quadrilateral. Its diagonals bisect opposite angles. The first property implies that every rhombus is a parallelogram. A rhombus therefore has all of the properties of a parallelogram: for example, opposite sides are parallel; adjacent angles are supplementary; the two diagonals bisect one another; any line through the midpoint bisects the area; and the sum of the squares of the sides equals the sum of the squares of the diagonals (the parallelogram law). Thus denoting the common side as a and the diagonals as p and q, in every rhombus p² + q² = 4a². Not every parallelogram is a rhombus, though any parallelogram with perpendicular diagonals (the second property) is a rhombus. In general, any quadrilateral with perpendicular diagonals, one of which is a line of symmetry, is a kite. Every rhombus is a kite, and any quadrilateral that is both a kite and parallelogram is a rhombus. A rhombus is a tangential quadrilateral. That is, it has an inscribed circle that is tangent to all four sides.
Diagonals The length of the diagonals p = AC and q = BD can be expressed in terms of the rhombus side a and one vertex angle α as p = a√(2 + 2cos α) and q = a√(2 − 2cos α). These formulas are a direct consequence of the law of cosines. Inradius The inradius (the radius of a circle inscribed in the rhombus), denoted by r, can be expressed in terms of the diagonals p and q as r = pq / (2√(p² + q²)), or in terms of the side length a and any vertex angle α as r = (a sin α) / 2. Area As for all parallelograms, the area K of a rhombus is the product of its base and its height (h). The base is simply any side length a: K = a · h. The area can also be expressed as the base squared times the sine of any angle: K = a² sin α; or in terms of the height and a vertex angle: K = h² / sin α; or as half the product of the diagonals p, q: K = pq / 2; or as the semiperimeter times the radius of the circle inscribed in the rhombus (inradius): K = 2a · r. Another way, in common with parallelograms, is to consider two adjacent sides as vectors, forming a bivector, so the area is the magnitude of the bivector (the magnitude of the vector product of the two vectors), which is the determinant of the two vectors' Cartesian coordinates: K = x1y2 – x2y1. Dual properties The dual polygon of a rhombus is a rectangle: A rhombus has all sides equal, while a rectangle has all angles equal. A rhombus has opposite angles equal, while a rectangle has opposite sides equal. A rhombus has an inscribed circle, while a rectangle has a circumcircle. A rhombus has an axis of symmetry through each pair of opposite vertex angles, while a rectangle has an axis of symmetry through each pair of opposite sides. The diagonals of a rhombus intersect at equal angles, while the diagonals of a rectangle are equal in length. The figure formed by joining the midpoints of the sides of a rhombus is a rectangle, and vice versa. Cartesian equation The sides of a rhombus centered at the origin, with diagonals each falling on an axis, consist of all points (x, y) satisfying |x/a| + |y/b| = 1, where a and b are the semi-diagonals. The vertices are at (±a, 0) and (0, ±b). This is a special case of the superellipse, with exponent 1. Other properties One of the five 2D lattice types is the rhombic lattice, also called centered rectangular lattice. Rhombi can tile the 2D plane edge-to-edge and periodically in three different ways, including, for the 60° rhombus, the rhombille tiling. Three-dimensional analogues of a rhombus include the bipyramid and the bicone as a surface of revolution. As the faces of a polyhedron Convex polyhedra with rhombi include the infinite set of rhombic zonohedrons, which can be seen as projective envelopes of hypercubes. A rhombohedron (also called a rhombic hexahedron) is a three-dimensional figure like a cuboid (also called a rectangular parallelepiped), except that its 3 pairs of parallel faces are up to 3 types of rhombi instead of rectangles. The rhombic dodecahedron is a convex polyhedron with 12 congruent rhombi as its faces. The rhombic triacontahedron is a convex polyhedron with 30 golden rhombi (rhombi whose diagonals are in the golden ratio) as its faces. The great rhombic triacontahedron is a nonconvex isohedral, isotoxal polyhedron with 30 intersecting rhombic faces. The rhombic hexecontahedron is a stellation of the rhombic triacontahedron. It is nonconvex with 60 golden rhombic faces with icosahedral symmetry. The rhombic enneacontahedron is a polyhedron composed of 90 rhombic faces, with three, five, or six rhombi meeting at each vertex. It has 60 broad rhombi and 30 slim ones. The rhombic icosahedron is a polyhedron composed of 20 rhombic faces, of which three, four, or five meet at each vertex.
It has 10 faces on the polar axis with 10 faces following the equator.
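As a worked illustration of the diagonal, area and inradius formulas above (the side length and angle here are chosen purely for the example), take a rhombus with side a = 1 and vertex angle α = 60°:
p = a\sqrt{2 + 2\cos\alpha} = \sqrt{3}, \qquad q = a\sqrt{2 - 2\cos\alpha} = 1,
K = \tfrac{1}{2}pq = a^{2}\sin\alpha = \tfrac{\sqrt{3}}{2}, \qquad r = \frac{pq}{2\sqrt{p^{2}+q^{2}}} = \frac{a\sin\alpha}{2} = \frac{\sqrt{3}}{4},
and the parallelogram law checks out: p^{2} + q^{2} = 3 + 1 = 4 = 4a^{2}.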
Mathematics
Two-dimensional space
null
183932
https://en.wikipedia.org/wiki/T%20Tauri%20star
T Tauri star
T Tauri stars (TTS) are a class of variable stars that are less than about ten million years old. This class is named after the prototype, T Tauri, a young star in the Taurus star-forming region. They are found near molecular clouds and identified by their optical variability and strong chromospheric lines. T Tauri stars are pre-main-sequence stars in the process of contracting to the main sequence along the Hayashi track, a luminosity–temperature relationship obeyed by infant stars of less than 3 solar masses in the pre-main-sequence phase of stellar evolution. It ends when a sufficiently massive star develops a radiative zone, or when a smaller star commences nuclear fusion on the main sequence. History While T Tauri itself was discovered in 1852, the T Tauri class of stars was initially defined by Alfred Harrison Joy in 1945. Characteristics T Tauri stars comprise the youngest visible F, G, K and M spectral type stars. Their surface temperatures are similar to those of main-sequence stars of the same mass, but they are significantly more luminous because their radii are larger. Their central temperatures are too low for hydrogen fusion. Instead, they are powered by gravitational energy released as the stars contract, while moving towards the main sequence, which they reach after about 100 million years. They typically rotate with a period between one and twelve days, compared to a month for the Sun, and are very active and variable. There is evidence of large areas of starspot coverage, and they have intense and variable X-ray and radio emissions (approximately 1000 times that of the Sun). Many have extremely powerful stellar winds; some eject gas in high-velocity bipolar jets. Another source of brightness variability is clumps (protoplanets and planetesimals) in the disk surrounding T Tauri stars. Their spectra show a higher lithium abundance than the Sun and other main-sequence stars because lithium is destroyed at temperatures above 2,500,000 K. From a study of lithium abundances in 53 T Tauri stars, it has been found that lithium depletion varies strongly with size, suggesting that "lithium burning" by the p-p chain during the last highly convective and unstable stages of the later pre-main-sequence phase of the Hayashi contraction may be one of the main sources of energy for T Tauri stars. Rapid rotation tends to improve mixing and increase the transport of lithium into deeper layers where it is destroyed. T Tauri stars generally increase their rotation rates as they age, through contraction and spin-up, as they conserve angular momentum. This causes an increased rate of lithium loss with age. Lithium burning will also increase with higher temperatures and mass, and will last for at most a little over 100 million years. The p-p chain for lithium burning proceeds as follows: ⁶Li + ¹H → ⁷Be; ⁷Be + e⁻ → ⁷Li + ν; ⁷Li + ¹H → ⁸Be (unstable); ⁸Be → 2 ⁴He + energy. It will not occur in stars with less than sixty times the mass of Jupiter. The rate of lithium depletion can be used to calculate the age of the star. Types Several types of TTSs exist: Classical T Tauri star (CTTS) Weak-line T Tauri star (WTTS) Naked T Tauri star (NTTS), which is a subset of WTTS. Roughly half of T Tauri stars have circumstellar disks, which in this case are called protoplanetary discs because they are probably the progenitors of planetary systems like the Solar System.
Circumstellar discs are estimated to dissipate on timescales of up to 10 million years. Most T Tauri stars are in binary star systems. In various stages of their life, they are called young stellar objects (YSOs). It is thought that the active magnetic fields and the strong stellar wind of Alfvén waves of T Tauri stars are one means by which angular momentum gets transferred from the star to the protoplanetary disc. A T Tauri stage for the Solar System would be one means by which the angular momentum of the contracting Sun was transferred to the protoplanetary disc and hence, eventually, to the planets. Analogs of T Tauri stars in the higher mass range (2–8 solar masses), the A and B spectral type pre-main-sequence stars, are called Herbig Ae/Be-type stars. More massive (>8 solar masses) stars in the pre-main-sequence stage are not observed, because they evolve very quickly: by the time they become visible (i.e. once the surrounding circumstellar gas and dust cloud has dispersed), the hydrogen in their centres is already burning and they are main-sequence objects. Planets Planets around T Tauri stars include: HD 106906 b around an F-type star 1RXS J160929.1−210524b around a K-type star Gliese 674 b around an M-type star V830 Tau b around an M-type star PDS 70b around a K-type star
Physical sciences
Stellar astronomy
Astronomy
183934
https://en.wikipedia.org/wiki/Stellar%20wind
Stellar wind
A stellar wind is a flow of gas ejected from the upper atmosphere of a star. It is distinguished from the bipolar outflows characteristic of young stars by being less collimated, although stellar winds are not generally spherically symmetric. Different types of stars have different types of stellar winds. Post-main-sequence stars nearing the ends of their lives often eject large quantities of mass in massive, slow (v = 10 km/s) winds. These include red giants and supergiants, and asymptotic giant branch stars. These winds are understood to be driven by radiation pressure on dust condensing in the upper atmosphere of the stars. Young T Tauri stars often have very powerful stellar winds. Massive stars of types O and B have stellar winds with lower mass loss rates but very high velocities (of the order of 1,000–2,000 km/s). Such winds are driven by radiation pressure on the resonance absorption lines of heavy elements such as carbon and nitrogen. These high-energy stellar winds blow stellar wind bubbles. G-type stars like the Sun have a wind driven by their hot, magnetized corona. The Sun's wind is called the solar wind. These winds consist mostly of high-energy electrons and protons (about 1 keV) that are able to escape the star's gravity because of the high temperature of the corona. Stellar winds from main-sequence stars do not strongly influence the evolution of lower-mass stars such as the Sun. However, for more massive stars such as O stars, the mass loss can result in a star shedding as much as 50% of its mass whilst on the main sequence: this clearly has a significant impact on the later stages of evolution. The influence can even be seen for intermediate-mass stars, which will become white dwarfs at the ends of their lives rather than exploding as supernovae only because they lost enough mass in their winds.
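A rough order-of-magnitude check makes the last point concrete; the mass-loss rate and lifetime below are illustrative assumptions rather than figures from the article. The cumulative mass shed is approximately the mass-loss rate multiplied by the time spent on the main sequence:
\Delta M \approx \dot{M}\,\tau_{\mathrm{MS}} \approx \left(3\times10^{-6}\ M_\odot\,\mathrm{yr}^{-1}\right)\times\left(5\times10^{6}\ \mathrm{yr}\right) \approx 15\ M_\odot,
which is about half the initial mass of a 30-solar-mass O-type star, consistent with the scale of mass loss described above.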
Physical sciences
Stellar astronomy
Astronomy
183957
https://en.wikipedia.org/wiki/Subsidence
Subsidence
Subsidence is a general term for downward vertical movement of the Earth's surface, which can be caused by both natural processes and human activities. Subsidence involves little or no horizontal movement, which distinguishes it from slope movement. Processes that lead to subsidence include dissolution of underlying carbonate rock by groundwater; gradual compaction of sediments; withdrawal of fluid lava from beneath a solidified crust of rock; mining; pumping of subsurface fluids, such as groundwater or petroleum; or warping of the Earth's crust by tectonic forces. Subsidence resulting from tectonic deformation of the crust is known as tectonic subsidence and can create accommodation for sediments to accumulate and eventually lithify into sedimentary rock. Ground subsidence is of global concern to geologists, geotechnical engineers, surveyors, engineers, urban planners, landowners, and the public in general. Pumping of groundwater or petroleum has led to subsidence of several metres in many locations around the world, incurring costs measured in hundreds of millions of US dollars. Land subsidence caused by groundwater withdrawal will likely increase in occurrence and related damages, primarily due to global population and economic growth, which will continue to drive higher groundwater demand. Causes Dissolution of limestone Subsidence frequently causes major problems in karst terrains, where dissolution of limestone by fluid flow in the subsurface creates voids (i.e., caves). If the roof of a void becomes too weak, it can collapse and the overlying rock and earth will fall into the space, causing subsidence at the surface. This type of subsidence can cause sinkholes which can be many hundreds of meters deep. Mining Several types of sub-surface mining, and specifically methods which intentionally cause the extracted void to collapse (such as pillar extraction, longwall mining and any metalliferous mining method which uses "caving" such as "block caving" or "sub-level caving") will result in surface subsidence. Mining-induced subsidence is relatively predictable in its magnitude, manifestation and extent, except where a sudden pillar or near-surface tunnel collapse occurs (usually in very old workings). Mining-induced subsidence is nearly always very localized to the surface above the mined area, plus a margin around the outside. The vertical magnitude of the subsidence itself typically does not cause problems, except in the case of drainage (including natural drainage); rather, it is the associated surface compressive and tensile strains, curvature, tilts and horizontal displacement that are the cause of the worst damage to the natural environment, buildings and infrastructure. Where mining activity is planned, mining-induced subsidence can be successfully managed if there is co-operation from all of the stakeholders. This is accomplished through a combination of careful mine planning, the taking of preventive measures, and the carrying out of repairs post-mining. Extraction of petroleum and natural gas If natural gas is extracted from a natural gas field, the initial pressure (up to 60 MPa (600 bar)) in the field will drop over the years. The pressure helps support the soil layers above the field. If the gas is extracted, this support is reduced: the sediment compacts under the weight of the overburden, which may lead to earthquakes and subsidence at ground level. Since exploitation of the Slochteren (Netherlands) gas field started in the late 1960s, the ground level over a 250 km² area has dropped by a current maximum of 30 cm.
Extraction of petroleum likewise can cause significant subsidence. The city of Long Beach, California, has experienced over the course of 34 years of petroleum extraction, resulting in damage of over $100 million to infrastructure in the area. The subsidence was brought to a halt when secondary recovery wells pumped enough water into the oil reservoir to stabilize it. Earthquake Land subsidence can occur in various ways during an earthquake. Large areas of land can subside drastically during an earthquake because of offset along fault lines. Land subsidence can also occur as a result of settling and compacting of unconsolidated sediment from the shaking of an earthquake. The Geospatial Information Authority of Japan reported immediate subsidence caused by the 2011 Tōhoku earthquake. In Northern Japan, subsidence of 0.50 m (1.64 ft) was observed on the coast of the Pacific Ocean in Miyako, Tōhoku, while Rikuzentakata, Iwate measured 0.84 m (2.75 ft). In the south at Sōma, Fukushima, 0.29 m (0.95 ft) was observed. The maximum amount of subsidence was 1.2 m (3.93 ft), coupled with horizontal diastrophism of up to 5.3 m (17.3 ft) on the Oshika Peninsula in Miyagi Prefecture. Groundwater-related subsidence Groundwater-related subsidence is the subsidence (or the sinking) of land resulting from groundwater extraction. It is a growing problem in the developing world as cities increase in population and water use, without adequate pumping regulation and enforcement. One estimate has 80% of serious land subsidence problems associated with the excessive extraction of groundwater, making it a growing problem throughout the world. Groundwater fluctuations can also indirectly affect the decay of organic material. The habitation of lowlands, such as coastal or delta plains, requires drainage. The resulting aeration of the soil leads to the oxidation of its organic components, such as peat, and this decomposition process may cause significant land subsidence. This applies especially when groundwater levels are periodically adapted to subsidence, in order to maintain desired unsaturated zone depths, exposing more and more peat to oxygen. In addition to this, drained soils consolidate as a result of increased effective stress. In this way, land subsidence has the potential of becoming self-perpetuating, having rates up to 5 cm/yr. Water management used to be tuned primarily to factors such as crop optimization but, to varying extents, avoiding subsidence has come to be taken into account as well. Faulting induced When differential stresses exist in the Earth, these can be accommodated either by geological faulting in the brittle crust, or by ductile flow in the hotter and more fluid mantle. Where faults occur, absolute subsidence may occur in the hanging wall of normal faults. In reverse, or thrust, faults, relative subsidence may be measured in the footwall. Isostatic subsidence The crust floats buoyantly in the asthenosphere, with a ratio of mass below the "surface" in proportion to its own density and the density of the asthenosphere. If mass is added to a local area of the crust (e.g., through deposition), the crust subsides to compensate and maintain isostatic balance. The opposite of isostatic subsidence is known as isostatic rebound—the action of the crust returning (sometimes over periods of thousands of years) to a state of isostacy, such as after the melting of large ice sheets or the drying-up of large lakes after the last ice age. Lake Bonneville is a famous example of isostatic rebound. 
Due to the weight of the water once held in the lake, the earth's crust subsided substantially to maintain equilibrium. When the lake dried up, the crust rebounded. Today at Lake Bonneville, the center of the former lake is higher than the former lake edges. Seasonal effects Many soils contain significant proportions of clay. Because of the very small particle size, they are affected by changes in soil moisture content. Seasonal drying of the soil results in a lowering of both the volume and the surface of the soil. If building foundations are above the level reached by seasonal drying, they move, possibly resulting in damage to the building in the form of tapering cracks. Trees and other vegetation can have a significant local effect on seasonal drying of soils. Over a number of years, a cumulative drying occurs as the tree grows. That can lead to the opposite of subsidence, known as heave or swelling of the soil, when the tree declines or is felled. As the cumulative moisture deficit is reversed, which can last up to 25 years, the surface level around the tree will rise and expand laterally. That often damages buildings unless the foundations have been strengthened or designed to cope with the effect. Weight of buildings High buildings can create land subsidence by pressing the soil beneath with their weight. The problem is already felt in New York City, the San Francisco Bay Area, and Lagos. Impacts Increase of flooding potential Land subsidence leads to the lowering of the ground surface, altering the topography. This elevation reduction increases the risk of flooding, particularly in river flood plains and delta areas. Sinking cities Earth fissures Earth fissures are linear fractures that appear on the land surface, characterized by openings or offsets. These fissures can be several meters deep, several meters wide, and extend for several kilometers. They form when the deformation of an aquifer, caused by pumping, concentrates stress in the sediment. This inhomogeneous deformation results in the differential compaction of the sediments. Ground fissures develop when this tensile stress exceeds the tensile strength of the sediment. Infrastructure damage Land subsidence can lead to differential settlements in buildings and other infrastructures, causing angular distortions. When these angular distortions exceed certain values, the structures can become damaged, resulting in issues such as tilting or cracking. Field measurement of subsidence Land subsidence causes vertical displacements (subsidence or uplift). Although horizontal displacements also occur, they are generally less significant. The following are field methods used to measure vertical and horizontal displacements in subsiding areas: Surveying. Borehole extensometers. Global Navigation Satellite System (GNSS) Interferometric Synthetic Aperture Radar (InSAR) LiDAR Tiltmeters. Tomás et al. conducted a comparative analysis of various land subsidence monitoring techniques. The results indicated that InSAR offered the highest coverage, the lowest annual cost per point of information and the highest point density. Additionally, they found that, aside from continuous acquisition systems typically installed in areas with rapid subsidence, InSAR had the highest measurement frequencies. In contrast, leveling, non-permanent GNSS, and non-permanent extensometers generally provided only one or two measurements per year.
Land Subsidence Prediction Empirical Methods These methods project future land subsidence trends by extrapolating from existing data, treating subsidence as a function solely of time. The extrapolation can be performed either visually or by fitting appropriate curves. Common functions used for fitting include linear, bilinear, quadratic, and/or exponential models. For example, this method has been successfully applied for predicting mining-induced subsidence. Semi-Empirical or Statistical Methods These approaches evaluate land subsidence based on its relationship with one or more influencing factors, such as changes in groundwater levels, the volume of groundwater extraction, and clay content. Theoretical Methods 1D Model This model assumes that changes in piezometric levels affecting aquifers and aquitards occur only in the vertical direction. It allows for subsidence calculations at a specific point using only vertical soil parameters. Quasi-3D Model Quasi-three-dimensional seepage models apply Terzaghi's one-dimensional consolidation equation to estimate subsidence, integrating some aspects of three-dimensional effects. 3D Model The fully coupled three-dimensional model simulates water flow in three dimensions and calculates subsidence using Biot's three-dimensional consolidation theory. Machine learning Machine learning has become a new approach for tackling nonlinear problems. It has emerged as a promising method for simulating and predicting land subsidence. Examples
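To make the one-dimensional approach concrete, the sketch below estimates the ultimate compaction of a single compressible layer from a decline in hydraulic head, in the spirit of Terzaghi's one-dimensional consolidation theory. It is a minimal illustration: the function name and the parameter values are hypothetical and are not taken from the article.
def layer_compaction(head_decline_m, layer_thickness_m, skeletal_specific_storage_per_m):
    # Ultimate (fully drained) compaction of one compressible layer, in metres:
    # compaction = skeletal specific storage * layer thickness * head decline.
    return skeletal_specific_storage_per_m * layer_thickness_m * head_decline_m
# Example: a 40 m clay interbed, 20 m of head decline from pumping, and an
# assumed inelastic skeletal specific storage of 1e-3 per metre.
print(layer_compaction(20.0, 40.0, 1e-3))  # -> 0.8 m of land subsidence
Summing such contributions over all compressible layers, and letting the head decline evolve in time, is essentially what the quasi-3D and fully coupled 3D models described above do with far more rigour.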
Physical sciences
Geomorphology: General
Earth science
183968
https://en.wikipedia.org/wiki/Ibis
Ibis
The ibis (collective plural ibises; classical plurals ibides and ibes) are a group of long-legged wading birds in the family Threskiornithidae that inhabit wetlands, forests and plains. "Ibis" derives from the Latin and Ancient Greek word for this group of birds. It also occurs in the scientific name of the western cattle egret (Ardea ibis) mistakenly identified in 1757 as being the sacred ibis. Description Ibises all have long, downcurved bills, and usually feed as a group, probing mud for food items, usually crustaceans. They are monogamous and highly territorial while nesting and feeding. Most nest in trees, often with spoonbills or herons. All extant species are capable of flight, but two extinct genera were flightless, namely the kiwi-like Apteribis in the Hawaiian Islands, and the peculiar Xenicibis in Jamaica. The word ibis comes from Latin ibis from Greek ἶβις ibis from Egyptian hb, hīb. Species in taxonomic order There are 29 extant species and 4 extinct species of ibis. An extinct species, the Jamaican ibis or clubbed-wing ibis (Xenicibis xympithecus), was uniquely characterized by its club-like wings. Extinct ibis species include the following: Geronticus perplexus. Discovered in France. It is known only from a piece of distal right humerus, found at Sansan, France, in Middle Miocene rocks. It appears to represent an ancient member of the Geronticus lineage, in line with the theory that most living ibis genera seem to have evolved before 15 million years ago (mya). Geronticus apelex. Discovered in South Africa. Geronticus balcanicus. Discovered in Bulgaria. Theristicus wetmorei. Discovered in Peru. Eudocimus peruvianus. Discovered in Peru. Gerandibis pagana. Discovered in France. It is the sole species known for this genus. Apteribis glenos. Discovered in Hawaii. Xenicibis xympithecus. Discovered in Jamaica. Ecology Habitat Most ibises are freshwater wetland birds using natural marshes, ponds, lakes and riversides for foraging. Some ibis species such as the white-faced ibis and black-headed ibis benefit from flooded and irrigated agriculture. The Andean ibis is unusual in being found in high-altitude grasslands of South America. The foraging and nesting behaviour, and fluctuating numbers, of the white ibis match closely with water levels in the Everglades ecosystem, leading to its selection as a potential indicator species for the system. A few ibis species, such as the olive ibis and green ibis, are also found in dense forests. The Llanos grasslands of Venezuela have the highest global ibis diversity, with seven species sharing the marshes and grasslands. Multiple ibis species manage to use the same area by exhibiting differences in the habitats used and the prey eaten. In Indian agricultural landscapes, three ibis species manage to live together by altering the habitats they use seasonally, with the black-headed ibises and glossy ibises preferring shallow wetlands throughout the year, while the endemic red-naped ibises prefer upland areas, thereby entirely avoiding potential competitive interactions. Breeding Ibises' breeding habits are very diverse. Many ibises such as the black-headed ibis, scarlet ibis, glossy ibis, American white ibis and Australian white ibis breed in large colonies on trees. Nest trees are located either in large wetlands or in agricultural fields, with many species like the red-naped ibis breeding inside cities. The Australian white ibis also breeds extensively inside cities and has greatly expanded its population.
The white-faced ibis sometimes nests on dry land and on low shrubs in marshes. In culture The African sacred ibis was an object of religious veneration in ancient Egypt, particularly associated with the deity Djehuty or otherwise commonly referred to in Greek as Thoth. He is responsible for writing, mathematics, measurement, and time as well as the moon and magic. In artworks of the Late Period of Ancient Egypt, Thoth is popularly depicted as an ibis-headed man in the act of writing. However, Mitogenomic diversity in sacred ibis mummies indicates that ancient Egyptians captured the birds from the wild rather than farming them. At the town of Hermopolis, ibises were reared specifically for sacrificial purposes, and in the Ibis Galleries at Saqqara, archaeologists found the mummies of one and a half million ibises. According to local legend in the Birecik area, the northern bald ibis was one of the first birds that Noah released from the Ark as a symbol of fertility, and a lingering religious sentiment in Turkey helped the colonies there to survive long after the demise of the species in Europe. The mascot of the University of Miami is an American white ibis named Sebastian. The ibis was selected as the school mascot because of its legendary bravery during hurricanes. According to legend, the ibis is the last of wildlife to take shelter before a hurricane hits and the first to reappear once the storm has passed. Harvard University's humor magazine, Harvard Lampoon, uses the ibis as its symbol. A copper statue of an ibis is prominently displayed on the roof of the Harvard Lampoon Building at 44 Bow Street. The short story "The Scarlet Ibis" by James Hurst uses the red bird as foreshadowing for a character's death and as the primary symbol. The African sacred ibis is the unit symbol of the Israeli Special Forces unit known as Unit 212 or Maglan (Hebrew מגלן). According to Josephus, Moses used the ibis to help him defeat the Ethiopians. The Australian white ibis has become a focus of art, pop culture, and memes since rapidly adapting to city life in recent decades, and has earned the popular nicknames "bin chicken" and "tip turkey". In December 2017, the ibis placed second in Guardian Australia inaugural Bird of the Year poll, after leading for much of the voting period. In April 2022, Queensland sports minister Stirling Hinchliffe suggested the ibis as a potential mascot for the 2032 Olympic Games, which are scheduled to be held in Brisbane. Hinchcliffe's suggestion prompted much discussion in the media. Gallery
Biology and health sciences
Pelecanimorphae
null
183970
https://en.wikipedia.org/wiki/Aragonite
Aragonite
Aragonite is a carbonate mineral and one of the three most common naturally occurring crystal forms of calcium carbonate (CaCO₃), the others being calcite and vaterite. It is formed by biological and physical processes, including precipitation from marine and freshwater environments. The crystal lattice of aragonite differs from that of calcite, resulting in a different crystal shape, an orthorhombic crystal system with acicular crystals. Repeated twinning results in pseudo-hexagonal forms. Aragonite may be columnar or fibrous, occasionally in branching helictitic forms called flos-ferri ("flowers of iron") from their association with the ores at the Carinthian iron mines. Occurrence The type location for aragonite is Molina de Aragón in the Province of Guadalajara in Castilla-La Mancha, Spain, for which it was named in 1797. Aragonite is found in this locality as cyclic twins inside gypsum and marls of the Keuper facies of the Triassic. This type of aragonite deposit is very common in Spain, and there are also some in France. An aragonite cave, the Ochtinská Aragonite Cave, is situated in Slovakia. In the US, aragonite in the form of stalactites and "cave flowers" (anthodite) is known from Carlsbad Caverns and other caves. For a few years in the early 1900s, aragonite was mined at Aragonite, Utah (now a ghost town). Massive deposits of oolitic aragonite sand are found on the seabed in the Bahamas. Aragonite is the high-pressure polymorph of calcium carbonate. As such, it occurs in high-pressure metamorphic rocks such as those formed at subduction zones. Aragonite forms naturally in almost all mollusk shells, and as the calcareous endoskeleton of warm- and cold-water corals (Scleractinia). Several serpulids have aragonitic tubes. Because the mineral deposition in mollusk shells is strongly biologically controlled, some crystal forms are distinctively different from those of inorganic aragonite. In some mollusks, the entire shell is aragonite; in others, aragonite forms only discrete parts of a bimineralic shell (aragonite plus calcite). The nacreous layer of the aragonite fossil shells of some extinct ammonites forms an iridescent material called ammolite. Aragonite also forms naturally in the endocarp of Celtis occidentalis. The skeleton of some calcareous sponges is made of aragonite. Aragonite also forms in the ocean as inorganic precipitates called marine cements (in the sediment) or as free crystals (in the water column). Inorganic precipitation of aragonite in caves can occur in the form of speleothems. Aragonite is common in serpentinites, where magnesium-rich pore solutions apparently inhibit calcite growth and promote aragonite precipitation. Aragonite is metastable at the low pressures near the Earth's surface and is thus commonly replaced by calcite in fossils. Aragonite older than the Carboniferous is essentially unknown. Aragonite can be synthesized by adding a calcium chloride solution to a sodium carbonate solution at elevated temperatures or in water-ethanol mixtures at ambient temperatures. Physical properties Aragonite is a thermodynamically unstable phase of calcium carbonate at low pressures, at any temperature. Aragonite nonetheless frequently forms in near-surface environments at ambient temperatures. The weak Van der Waals forces inside aragonite give an important contribution to both the crystallographic and elastic properties of this mineral.
The difference in stability between aragonite and calcite, as measured by the Gibbs free energy of formation, is small, and effects of grain size and impurities can be important. The formation of aragonite at temperatures and pressures where calcite should be the stable polymorph may be an example of Ostwald's step rule, where a less stable phase is the first to form. The presence of magnesium ions may inhibit calcite formation in favor of aragonite. Once formed, aragonite tends to alter to calcite on timescales of 10⁷ to 10⁸ years. The mineral vaterite, also known as μ-CaCO3, is another phase of calcium carbonate that is metastable at ambient conditions typical of Earth's surface, and decomposes even more readily than aragonite. Uses In aquaria, aragonite is considered essential for the replication of reef conditions. Aragonite provides the materials necessary for much sea life and also keeps the pH of the water close to its natural level, to prevent the dissolution of biogenic calcium carbonate. Aragonite has been successfully tested for the removal of pollutants like zinc, cobalt and lead from contaminated wastewaters. Gallery
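The laboratory synthesis mentioned above is a simple precipitation (double-displacement) reaction; whether aragonite or calcite forms depends on conditions such as temperature and additives, but the overall chemistry is:
\mathrm{CaCl_2(aq) + Na_2CO_3(aq) \longrightarrow CaCO_3(s) + 2\,NaCl(aq)}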
Physical sciences
Minerals
Earth science
184000
https://en.wikipedia.org/wiki/Hysteria
Hysteria
Hysteria is a term used to mean ungovernable emotional excess and can refer to a temporary state of mind or emotion. In the nineteenth century, female hysteria was considered a diagnosable physical illness in women. It is assumed that the basis for diagnosis operated under the belief that women are predisposed to mental and behavioral conditions; an interpretation of sex-related differences in stress responses. In the twentieth century, it shifted to being considered a mental illness. Many influential people such as Sigmund Freud and Jean-Martin Charcot dedicated research to hysteria patients. Currently, most physicians do not accept hysteria as a medical diagnosis. The blanket diagnosis of hysteria has been fragmented into myriad medical categories such as epilepsy, histrionic personality disorder, conversion disorders, dissociative disorders, or other medical conditions. Furthermore, lifestyle choices, such as choosing not to wed, are no longer considered symptoms of psychological disorders such as hysteria. History The word hysteria originates from the Greek word for uterus, hystera. The oldest record of hysteria dates back to 1900 BCE, when Egyptians recorded behavioral abnormalities in adult women on the Kahun Papyrus. The Egyptians attributed the behavioral disturbances to a wandering uterus; thus the condition was later dubbed hysteria. To treat hysteria, Egyptian doctors prescribed various medications. For example, doctors put strong-smelling substances on the patients' vulvas to encourage the uterus to return to its proper position. Another tactic was to smell or swallow unsavory herbs to encourage the uterus to flee back to the lower part of the female's abdomen. The ancient Greeks accepted the ancient Egyptians' explanation for hysteria; however, they included in their definition of hysteria the inability to bear children or the unwillingness to marry. Plato and Aristotle believed that hysteria, which Plato also called female madness, was directly related to these women's lack of sexual activity, and described those who suffered from it as having a sad, bad, or melancholic uterus. In the 5th century BCE, Hippocrates first used the term hysteria. Ancient Romans also attributed hysteria to an abnormality in the womb; however, they discarded the traditional explanation of a wandering uterus. Instead, the ancient Romans credited hysteria to a disease of the womb or a disruption in reproduction (i.e., a miscarriage, menopause, etc.). Hysteria theories from the ancient Egyptians, ancient Greeks, and ancient Romans were the basis of the Western understanding of hysteria. Between the fifth and thirteenth centuries, however, the increasing influence of Christianity in the Latin West altered medical and public understanding of hysteria. St. Augustine's writings suggested that human suffering resulted from sin; thus hysteria became perceived as satanic possession. With the shift in perception of hysteria came a shift in treatment options. Instead of admitting patients to a hospital, the church began treating patients through prayers, amulets, and exorcisms. At this time, writings such as Constantine the African's Viaticum and Pantegni described women with hysteria as the cause of amor heroycus, a form of sexual desire so strong that it caused madness, rather than as someone with a problem who should be cured. Trota de Ruggiero is considered the first female doctor in Christian Europe as well as the first gynecologist, though she could not become a magister.
She recognized that women were often ashamed to go to a doctor with gynecological issues, and studied women's diseases and attempted to avoid common misconceptions and prejudice of the era. She prescribed remedies such as mint for women suffering from hysteria. Hildegard of Bingen was another female doctor, whose work was part of an attempt to combine science and faith. She agreed with the theories of Hippocrates and suggested hysteria may be connected to the idea of original sin; She believed that men and women were both responsible for original sin, and could both suffer from hysteria. Furthermore, during the Renaissance period many patients of hysteria were prosecuted as witches and underwent interrogations, torture, exorcisms, and execution. During this time the common point of view was that women were inferior beings, connected to Aristotle's ideas of male superiority. Saint Thomas Aquinas supported this idea and in his writing, Summa Theologica stated "'some old women' are evil-minded; they gaze on children in a poisonous and evil way, and demons, with whom the witches enter into agreements, interacting through their eyes". This type of fear of witches and sorcery is part of the rules of celibacy and chastity imposed on the clergy. Philippe Pinel believed that there was little difference between madness and healthy people, and believed that people should be treated if they were unwell. He considered hysteria a female disorder. However, during the sixteenth and seventeenth centuries activists and scholars worked to change the perception of hysteria back to a medical condition. Particularly, French physician Charles Le Pois insisted that hysteria was a malady of the brain. In addition, in 1697, English physician Thomas Sydenham theorized that hysteria was an emotional condition, instead of a physical condition. Many physicians followed Lepois and Sydenham's lead and hysteria became disassociated with the soul and the womb. During this time period, science started to focalize hysteria in the central nervous system. As doctors developed a greater understanding of the human nervous system, the neurological model of hysteria was created, which further propelled the conception of hysteria as a mental disorder. Joseph Raulin published a work in 1748, associating hysteria with the air quality in cities, he suggested that men and women could both have hysteria, women would be more likely to have it due to laziness. In 1859 Paul Briquet defined hysteria as a chronic syndrome manifesting in many unexplained symptoms throughout the body's organ systems. What Briquet described became known as Briquet's syndrome, or Somatization disorders, in 1971. Over a ten-year period, Briquet conducted 430 case studies of patients with hysteria. Following Briquet, Jean-Martin Charcot studied women in an asylum in France and used hypnosis as treatment. Charcot detailed the intricacies of hysteria, understanding it as being caused by patriarchy. He also mentored Pierre Janet, another French psychologist, who studied five of hysteria's symptoms (anaesthesia, amnesia, abulia, motor control diseases, and character change) in depth and proposed that hysteria symptoms occurred due to a lapse in consciousness. Both Charcot and Janet inspired Freud's work. Freud theorized hysteria stemmed from childhood sexual abuse or repression. Briquet, Freud and Charcot noted male hysteria; both genders could exhibit the syndrome. Hysterics may be able to manipulate their caretakers thus complicating treatment. L.E. 
Emerson was a Freudian who worked at the Boston Psychopathic Hospital and saw hysteric patients. Investigating the files, Elizabeth Lunbeck found that most of hysteric patients at this hospital, were typically single, either being young or purposefully avoiding men due to past sexual abuse. Emerson published case studies on his patients and was interested in the stories they told, relating their stories to sex and their inner sexual conflicts. Emerson stated that their hysteria, which ranged from self-harm to immense guilt for what happened, was due to the patients' traumas or a lack of sexual knowledge, to which he stated that they were sexually repressed. During the twentieth century, as psychiatry advanced in the West, anxiety and depression diagnoses began to replace hysteria diagnoses in Western countries. For example, from 1949 to 1978, annual admissions of hysteria patients in England and Wales decreased by roughly two-thirds. With the decrease of hysteria patients in Western cultures came an increase in anxiety and depression patients. Theories for why hysteria diagnoses began to decline vary, but many historians infer that World WarII, along with the use of the diagnosis of shell-shock, westernization, and migration shifted Western mental health expectations. Twentieth-century western societies expected depression and anxiety manifest itself more in post World War II generations and displaced individuals; and thus, individuals reported or were diagnosed accordingly. In addition, medical advancements explained ailments that were previously attributed to hysteria such as epilepsy or infertility. World Wars caused military doctors to become focused on hysteria as during this time there seemed to be a rise in cases, especially under instances of high stress, in 1919 Arthur Frederick Hurst wrote that "many cases of gross hysterical symptoms occurred in soldiers who had no family or personal history of neuroses, and who were perfectly fit". In 1970 Colin P. McEvedy and Alanson W. Beard suggested that Royal Free Disease (Royal Free Hospital outbreak, now also known as myalgic encephalomyelitis/chronic fatigue syndrome a neurological disease), which mainly affected young women, was an epidemic of hysteria. They also said that hysteria had a historically negative connotation, however that should not prevent doctors from assessing symptoms of the patient. In 1980, after a gradual decline in diagnoses and reports, hysteria was removed from the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM), which had included hysteria as a mental disorder from its second publication in 1968. The term is still used in the twenty-first century, though not as a diagnosis. When used, it is often a general term for any dramatic displays of outrage or emotion. Historical symptoms Historically, the symptoms of hysteria have a large range. Shortness of breath Anxiety Insomnia Fainting Amnesia Paralysis Pain Spasms Convulsive fits Vomiting Deafness Bizarre movements Seizures Hallucinations Inability to speak Infertility Historical treatment Regular marital sex Pregnancy Childbirth Rest cure Notable theorists Charcot In the late nineteenth century, French neurologist Jean-Martin Charcot tackled what he referred to as "the great neurosis" or hysteria. Charcot theorized that hysteria was a hereditary, physiological disorder. He believed hysteria impaired areas of the brain which provoked the physical symptoms displayed in each patient. 
While Charcot believed hysteria was hereditary, he also thought that environmental factors such as stress could trigger hysteria in an individual. Charcot published more than 120 case studies of patients whom he diagnosed with hysteria, including Marie Wittman. Wittman was referred to as the "Queen of Hysterics", and remains the most famous patient of hysteria. To treat his patients, Charcot used hypnosis, which he determined was successful only when used on hysterics. Using patients as props, Charcot executed dramatic public demonstrations of hysterical patients and his cures for hysteria, which many suggest produced the hysterical phenomenon. Furthermore, Charcot noted similarities between demon possession and hysteria, and thus he concluded that "demonomania" was a form of hysteria. Freud In 1896 Sigmund Freud, an Austrian neurologist, published "The Aetiology of Hysteria". The paper explains how Freud believed his female patients' neurosis, which he labeled hysteria, resulted from sexual abuse as children. Freud named the concept of physical symptoms resulting from childhood trauma hysterical conversion. Freud hypothesized that in order to cure hysteria the patient must relive the experiences through imagination in the most vivid form while under light hypnosis. Freud later adapted this, realizing that sexual abuse must not be the only way of developing neuroses. He then theorized that, in addition to abuse, fantasies of sexual abuse could be responsible, though he never ruled out that sexual abuse could be the cause of illness, simply not the only possible cause. Freud was also one of the first noted psychiatrists to attribute hysteria to men. He diagnosed himself with hysteria, writing that he feared his work had exacerbated his condition. Modern perceptions For the most part, hysteria does not exist as a medical diagnosis in Western culture and has been replaced by other diagnoses such as conversion or functional disorders. The effects of hysteria as a diagnosable illness in the eighteenth and nineteenth centuries have had a lasting effect on the medical treatment of women's health. The term hysterical, applied to an individual, can mean that they are emotional, irrationally upset, or frenzied. When applied to a situation not involving panic, hysteria means that the situation is uncontrollably amusing, the connotation being that it invokes hysterical laughter.
Biology and health sciences
Outdated disorders
Health
184021
https://en.wikipedia.org/wiki/Fynbos
Fynbos
Fynbos (; , ) is a small belt of natural shrubland or heathland vegetation located in the Western Cape and Eastern Cape provinces of South Africa. The area is predominantly coastal and mountainous, with a Mediterranean climate and rainy winters. The fynbos ecoregion is within the Mediterranean forests, woodlands, and scrub biome. In fields related to biogeography, fynbos is known for its exceptional degree of biodiversity and endemism, consisting of about 80% (8,500 fynbos) species of the Cape floral kingdom, where nearly 6,000 of them are endemic. The area continues to face severe human-caused threats, but due to the many economic uses of the fynbos, conservation efforts are being made to help restore it. Origin of the term The word fynbos is often taken literally to mean fine bush, as in Afrikaans bos means bush, whereas in this instance bush refers to the type of vegetation. Typical fynbos foliage is ericoid rather than fine. The term in its pre‑Afrikaans, Dutch form, fynbosch, was recorded by Noble as being in casual use in the late 19th century. In the early 20th century, John Bews referred to: "South-Western or Cape Region of Macchia or Fynbosch". He said: "In this well-known region where the rain occurs in winter and the summers are more or less dry, the dominant vegetation is of a sclerophyllous type and there is little or no natural grassland, though there are many kinds of grass..." He also refers to a high degree of endemism in the grasses in that region. Elsewhere he speaks of the term as "...applied by the inhabitants of the Cape to any sort of small woodland growth that does not include timber trees"; in the current vernacular, this still is the effective sense of the word. However, in the technical, ecological sense, the constraints are more demanding. In the latter half of the 20th century, "fynbos" gained currency as the term for the "distinctive vegetation of the southwestern Cape". Cape Floral Kingdom Fynbos – which grows in a 100-to-200-km-wide coastal belt stretching from Clanwilliam on the West coast to Port Elizabeth on the Southeast coast – forms part of the Cape floral kingdom, where it accounts for half of the surface area and 80% of the plant species. The fynbos in the western regions is richer and more varied than in the eastern regions of South Africa. Of the world's six floral kingdoms, this is the smallest and richest per unit of area. The Holarctic kingdom, in contrast, incorporates the whole of the Northern Hemisphere north of the tropics. The diversity of fynbos plants is extremely high, with over 9,000 species of plants occurring in the area, around 6,200 of which are endemic, i.e. growing nowhere else in the world. South Africa's Western Cape has the vast majority of species with one estimate finding 8,550 species in 89,000 km2, which is higher than that estimated for the Malayan forests, 7,900 species in 132,000 km2. It has been claimed that it exceeds even the richest tropical rainforest in South America, including the Amazon. Of the Ericas, over 600 occur in the fynbos kingdom, while only two or three dozen have been described in the rest of the world. This is in an area of 46,000 km2 – by comparison, the Netherlands, with an area of 33,000 km2, has 1,400 species, none of them endemic. Table Mountain in Cape Town supports 2,200 species, more than the entire United Kingdom. 
Thus, although the fynbos covers only 6% of the area of southern Africa, it has half the species on the subcontinent – and in fact has almost one in five of all African plant species so far described. Five main river systems traverse the Cape floral kingdom: the Oliphants River of the Western Cape; the Berg River which drains the West Coast Forelands plain stretching from the Cape Flats to the Olifants; the Breede, which is the largest river on the Cape; the Olifants River (Southern Cape); Gourits and the Groot Rivers which drain the Little Karoo basin and the South Coast Forelands; and the Baviaanskloof and Gamtoos Rivers to the east. Flora The most conspicuous components of the flora are evergreen sclerophyllous plants, many with ericoid leaves and gracile habit, as opposed to timber forest. Several plant families are conspicuous in fynbos; the Proteaceae are prominent, with genera such as Protea, Leucospermum (the "pincushions"), and Leucadendron (the silver tree and "cone bushes"). Proteas are represented by many species and are prominent in the landscape, generally with large striking flowers, many of which are pollinated by birds, and others by small mammals. Most of these do not have anything like ericoid leaves, and nor do most Rhamnaceae, Fabaceae, or Geraniaceae. Fynbos Ericaceae include more species of Erica than all other regions combined. They are popularly called heaths and are generally smaller plants bearing many small, tubular or globular flowers and ericoid leaves. Restionaceae also occur in greater variety in fynbos than anywhere else; their species are superficially grass-like. Many of them grow in wet areas such as seasonal marshes and spongy basins in the sources of mountain streams, but others grow in decidedly arid conditions. Depending on the locality and the aspects under discussion, several other families have equal claim to being characteristic, including Asteraceae, Rutaceae, and Iridaceae. More than 1400 bulb species occur among the fynbos, of which 96 are Gladiolus and 54 Lachenalia. Areas that are dominated by "renosterbos", Dicerothamnus rhinocerotis, (Asteraceae) are known as Renosterveld (Afrikaans for "rhinoceros field"). 
Vegetation types Fynbos vegetation types, code FF: Agulhas Limestone Fynbos (FFl 1) Agulhas Sand Fynbos (FFd 7) Albertinia Sand Fynbos (FFd 9) Algoa Sandstone Fynbos (FFs 29) Atlantis Sand Fynbos (FFd 4) Bokkeveld Sandstone Fynbos (FFs 1) Boland Granite Fynbos (FFg 2) Breede Alluvium Fynbos (FFa 2) Breede Quartzite Fynbos (FFq 4) Breede Sand Fynbos (FFd 8) Breede Shale Fynbos (FFh 4) Canca Limestone Fynbos (FFl 3) Cape Flats Sand Fynbos (FFd 5) Cape Winelands Shale Fynbos (FFh 5) Cederberg Sandstone Fynbos (FFs 4) Central Coastal Shale Band Vegetation (FFb 4) Central Inland Shale Band Vegetation (FFb 3) De Hoop Limestone Fynbos (FFl 2) Eastern Coastal Shale Band Vegetation (FFb 6) Eastern Inland Shale Band Vegetation (FFb 5) Elgin Shale Fynbos (FFh 6) Elim Ferricrete Fynbos (FFf 1) Garden Route Granite Fynbos (FFg 5) Garden Route Shale Fynbos (FFh 9) Graafwater Sandstone Fynbos (FFs 2) Greyton Shale Fynbos (FFh 7) Grootrivier Quartzite Fynbos (FFq 5) Hangklip Sand Fynbos (FFd 6) Hawequas Sandstone Fynbos (FFs 10) Hopefield Sand Fynbos (FFd 3) Kamiesberg Granite Fynbos (FFg 1) Kango Conglomerate Fynbos (FFt 1) Knysna Sand Fynbos (FFd 10) Kogelberg Sandstone Fynbos (FFs 11) Kouebokkeveld Alluvium Fynbos (FFa 1) Kouebokkeveld Shale Fynbos (FFh 1) Kouga Grassy Sandstone Fynbos (FFs 28) Kouga Sandstone Fynbos (FFs 27) Leipoldtville Sand Fynbos (FFd 2) Loerie Conglomerate Fynbos (FFt 2) Lourensford Alluvium Fynbos (FFa 4) Matjiesfontein Quartzite Fynbos (FFq 3) Matjiesfontein Shale Fynbos (FFh 2) Montagu Shale Fynbos (FFh 8) Namaqualand Sand Fynbos (FFd 1) North Hex Sandstone Fynbos (FFs 7) North Kammanassie Sandstone Fynbos (FFs 25) North Langeberg Sandstone Fynbos (FFs 15) North Outeniqua Sandstone Fynbos (FFs 18) North Rooiberg Sandstone Fynbos (FFs 21) North Sonderend Sandstone Fynbos (FFs 13)? North Swartberg Sandstone Fynbos (FFs 23) Northern Inland Shale Band Vegetation (FFb 1) Olifants Sandstone Fynbos (FFs 3) Overberg Sandstone Fynbos (FFs 12) Peninsula Granite Fynbos (FFg 3) Peninsula Sandstone Fynbos (FFs 9) Piketberg Sandstone Fynbos (FFs 6) Potberg Ferricrete Fynbos (FFf 2) Potberg Sandstone Fynbos (FFs 17) Robertson Granite Fynbos (FFg 4) South Hex Sandstone Fynbos (FFs 8) South Kammanassie Sandstone Fynbos (FFs 26) South Langeberg Sandstone Fynbos (FFs 16) South Outeniqua Sandstone Fynbos (FFs 19) South Rooiberg Sandstone Fynbos (FFs 22) South Sonderend Sandstone Fynbos (FFs 14) South Swartberg Sandstone Fynbos (FFs 24) Southern Cape Dune Fynbos (FFd 11) Stinkfonteinberge Quartzite Fynbos (FFq 1) Suurberg Quartzite Fynbos (FFq 6) Suurberg Shale Fynbos (FFh 10) Swartberg Altimontane Sandstone Fynbos (FFs 31) Swartberg Shale Fynbos (FFh 3) Swartland Alluvium Fynbos (FFa 3) Swartruggens Quartzite Fynbos (FFq 2) Swellendam Silcrete Fynbos (FFc 1) Tsitsikamma Sandstone Fynbos (FFs 20) Western Altimontane Sandstone Fynbos (FFs 30) Western Coastal Shale Band Vegetation (FFb 2) Winterhoek Sandstone Fynbos (FFs 5) Fauna The fynbos is home to many unique and endemic animals, with seven species of endemic bird and an unknown number of endemic reptiles, amphibians, and arthropods. The seven avian endemics include the Cape rockjumper, Cape sugarbird, Victorin's warbler, Orange-breasted sunbird, Protea canary, Cape siskin, and Fynbos buttonquail. Ecoregions The fynbos area has been divided into two very similar ecoregions: the lowland fynbos (below 300 m above sea level) on the sandy soil of the west coast, and the montane fynbos of the Cape Fold Belt. 
The Lowland Fynbos and Renosterveld experiences regular winter rainfall, especially to the west of Cape Agulhas. The ecoregion has been subdivided into nine areas: the West Coast Forelands from the Cape Flats to the Olifants River (Western Cape); the Warm Bokkeveld basin around the town of Ceres; the Elgin Valley around the town of Elgin; the sandy Agulhas Plain on the coast; the Breede River valley around the town of Worcester; the South Coast Forelands from Caledon west to Mossel Bay; the south-eastern end of the Little Karoo; the Langkloof valley; and the Southeastern Coast Forelands from Tsitsikamma to Gqeberha. The flora of the lowlands contains a high number of endemic species, and tends to favour larger plants than those growing on the hillier areas. They include the larger Restionaceae such as species of Elegia, Thamnochortus, and Willdenowia and proteas such as king protea (Protea cynaroides) and blushing bride (Serruria florida). Particular types of lowland fynbos include the shrubs and herbs of the coastal sand dunes, the mixture of ericoids and restoids with thickets of shrubs such as Maytenus, and other Celastraceae, sideroxylons and other Sapotaceae, and Rhus and other Anacardiaceae on the coastal sands; the classic fynbos of the sandplains of the West Coast Forelands, and the Agulhas Plain; the grassy fynbos of the hillier and wetter areas of the South and South-Eastern Coast Forelands; areas where fynbos and renosterveld are mixed; coastal renosterveld on the West and South Coast Forelands; and the inland renosterveld of the drier inland Little Karoo and Warm Bokkeveld. The area is also home to a large number of endemic creatures that have adapted to life in this area, such as the monkey beetles which pollinate Ixia viridiflora. Endemic species of fish in the five river systems occur in the area, too. Endemic reptiles and amphibians include a number of tortoises and the chameleon-like arum frog (Hyperolius horstockii). The Montane Fynbos and Renosterveld covers the higher-lying parts of the Cape Fold Mountains, above roughly 300 m. The same level of floral variety, including all three characteristic fynbos families, is found there, but ericas predominate. Because the higher and wetter areas are more protected and contain important water sources, the original flora is more intact than in the lowlands; but agriculture and global warming are still threats. The region includes the mountains in the west from the Cape Peninsula to the Kouebokkeveld Mountains, the south coast hinterland from Elgin to Gqeberha, the mountains north of the Little Karoo from Laingsburg to Willowmore, and the inselberg hills within the Little Karoo. About half of these areas are originally fynbos, and about half are renosterveld. Many different microclimates occur, so the flora changes from west to east, and also varies with altitude up the hillsides away from the coast and according to compass direction. Lower elevations are covered with protea fynbos, with ericas taking over further up. Plant species include pincushions (Leucospermum). The wildlife includes a number of endemic bees, beetles, horseflies, and ants, and birds such as Cape sugarbirds and the orange-breasted sunbird. Many of these birds and insects are important and specific pollinators for the fynbos, such as the mountain pride butterfly (Aeropetes tulbaghia) which only visits red flowers such as Disa uniflora and pollinates 15 different species.
Larger animals include antelopes, particularly Cape grysbok (Raphicerus melanotis), common duiker (Sylvicapra grimmia), and klipspringer (Oreotragus oreotragus). The extinct blue antelope and quagga were also fynbos natives. Economic uses Rooibos (Aspalathus linearis) and honeybush (Cyclopia intermedia) are of economic importance, grown and harvested in large quantities in the Cederberg area, and providing important exports. Restios continue to be used for thatching, as they have for hundreds or even thousands of years. Proteas and other floral species are grown in many areas and their flowers harvested for export. In many areas with Mediterranean climates, fynbos species have become popular garden plants, in particular aloes and geraniums, and in cooler regions are used as window plants. A very large number of fynbos plant species are used in traditional medicine, and while only a tiny proportion have as yet been subjected to formal testing, many have already been identified as having medicinal properties. Threats and conservation The fynbos is the region of South Africa most affected by invasive alien species, which collectively cover around 10% of the entire country. The most common invasive plants are wattles and hakeas, native to Australia, and pines native to Europe and the Californian coast of the United States. Pines had been introduced to South Africa by the 19th century, and the wattles were imported in the mid-1870s to stabilize sand dunes. In 1997, it was estimated that invasion caused the fynbos region to decline in value by US$750 million per year. The Working for Water (WfW) program was started in 1995 by the Department of Water Affairs and Forestry to control these invasive species, which had been shown to consume 9.95% of usable surface water runoff. Since then, over 100,000 hectares of land have been cleared of invasive species while providing jobs to around 20,000 people per year, most of whom are women and unskilled workers. Systematic monitoring of WfW's progress is lacking, but there is anecdotal evidence that endemic silver peas have returned to Table Mountain after being thought extinct.
Physical sciences
Biomes: General
Earth science
184120
https://en.wikipedia.org/wiki/Time%20hierarchy%20theorem
Time hierarchy theorem
In computational complexity theory, the time hierarchy theorems are important statements about time-bounded computation on Turing machines. Informally, these theorems say that given more time, a Turing machine can solve more problems. For example, there are problems that can be solved with n^2 time but not n time, where n is the input length. The time hierarchy theorem for deterministic multi-tape Turing machines was first proven by Richard E. Stearns and Juris Hartmanis in 1965. It was improved a year later when F. C. Hennie and Richard E. Stearns improved the efficiency of the Universal Turing machine. Consequent to the theorem, for every deterministic time-bounded complexity class, there is a strictly larger time-bounded complexity class, and so the time-bounded hierarchy of complexity classes does not completely collapse. More precisely, the time hierarchy theorem for deterministic Turing machines states that for all time-constructible functions f(n), DTIME(o(f(n))) ⊊ DTIME(f(n) log f(n)), where DTIME(f(n)) denotes the complexity class of decision problems solvable in time O(f(n)). The left-hand class involves little o notation, referring to the set of decision problems solvable in asymptotically less than f(n) time. In particular, this shows that DTIME(n^a) ⊊ DTIME(n^b) if and only if a < b, so we have an infinite time hierarchy. The time hierarchy theorem for nondeterministic Turing machines was originally proven by Stephen Cook in 1972. It was improved to its current form via a complex proof by Joel Seiferas, Michael Fischer, and Albert Meyer in 1978. Finally in 1983, Stanislav Žák achieved the same result with the simple proof taught today. The time hierarchy theorem for nondeterministic Turing machines states that if g(n) is a time-constructible function and f(n+1) = o(g(n)), then NTIME(f(n)) ⊊ NTIME(g(n)). The analogous theorems for space are the space hierarchy theorems. A similar theorem is not known for time-bounded probabilistic complexity classes, unless the class also has one bit of advice. Background Both theorems use the notion of a time-constructible function. A function f : ℕ → ℕ is time-constructible if there exists a deterministic Turing machine such that for every n, if the machine is started with an input of n ones, it will halt after precisely f(n) steps. All polynomials with non-negative integer coefficients are time-constructible, as are exponential functions such as 2^n. Proof overview We need to prove that some time class TIME(g(n)) is strictly larger than some time class TIME(f(n)). We do this by constructing a machine which cannot be in TIME(f(n)), by diagonalization. We then show that the machine is in TIME(g(n)), using a simulator machine. Deterministic time hierarchy theorem Statement Time Hierarchy Theorem. If f(n) is a time-constructible function, then there exists a decision problem which cannot be solved in worst-case deterministic time o(f(n)) but can be solved in worst-case deterministic time O(f(n) log f(n)). Thus DTIME(o(f(n))) ⊊ DTIME(f(n) log f(n)). Note 1. f(n) is at least n, since smaller functions are never time-constructible. Example. There are problems solvable in time n log^2 n but not time n. This follows by setting f(n) = n log n, since n is in o(n log n). Proof We include here a proof of a weaker result, namely that DTIME(f(n)) is a strict subset of DTIME(f(2n + 1)^3), as it is simpler but illustrates the proof idea. See the bottom of this section for information on how to extend the proof to f(n) log f(n). To prove this, we first define the language of the encodings of machines and their inputs which cause them to halt within f: Hf = { ([M], x) : M accepts x in at most f(|x|) steps }. Notice here that this is a time-class.
It is the set of pairs of machines and inputs to those machines (M, x) such that the machine M accepts within f(|x|) steps. Here, M is a deterministic Turing machine, and x is its input (the initial contents of its tape). [M] denotes an input that encodes the Turing machine M. Let m be the size of the tuple ([M], x). We know that we can decide membership of Hf by way of a deterministic Turing machine R, that simulates M for f(|x|) steps by first calculating f(|x|) and then writing out a row of 0s of that length, and then using this row of 0s as a "clock" or "counter" to simulate M for at most that many steps. At each step, the simulating machine needs to look through the definition of M to decide what the next action would be. It is safe to say that this takes at most f(m)^3 operations (since it is known that a simulation of a machine of time complexity T(n) can be achieved in time O(T(n)^2 · |M|) on a multitape machine, where |M| is the length of the encoding of M), so we have that: Hf ∈ TIME(f(m)^3). The rest of the proof will show that Hf ∉ TIME(f(⌊m/2⌋)), so that if we substitute 2n + 1 for m, we get the desired result. Let us assume that Hf is in this time complexity class, and we will reach a contradiction. If Hf is in this time complexity class, then there exists a machine K which, given some machine description [M] and input x, decides whether the tuple ([M], x) is in Hf within TIME(f(⌊m/2⌋)). We use this K to construct another machine, N, which takes a machine description [M] and runs K on the tuple ([M], [M]), i.e. M is simulated on its own code by K, and then N accepts if K rejects, and rejects if K accepts. If n is the length of the input to N, then m (the length of the input to K) is twice n plus some delimiter symbol, so m = 2n + 1. N's running time is thus TIME(f(⌊(2n + 1)/2⌋)) = TIME(f(n)). Now if we feed [N] as input into N itself (which makes n the length of [N]) and ask the question whether N accepts its own description as input, we get: If N accepts [N] (which we know it does in at most f(n) operations since K halts on ([N], [N]) in f(n) steps), this means that K rejects ([N], [N]), so ([N], [N]) is not in Hf, and so by the definition of Hf, this implies that N does not accept [N] in f(n) steps. Contradiction. If N rejects [N] (which we know it does in at most f(n) operations), this means that K accepts ([N], [N]), so ([N], [N]) is in Hf, and thus N does accept [N] in f(n) steps. Contradiction. We thus conclude that the machine K does not exist, and so Hf ∉ TIME(f(⌊m/2⌋)). Extension The reader may have realised that the proof gives the weaker result because we have chosen a simple Turing machine simulation for which we know only that Hf ∈ TIME(f(m)^3). It is known that a more efficient simulation exists which establishes that Hf ∈ TIME(f(m) log f(m)). Non-deterministic time hierarchy theorem If g(n) is a time-constructible function, and f(n+1) = o(g(n)), then there exists a decision problem which cannot be solved in non-deterministic time f(n) but can be solved in non-deterministic time g(n). In other words, the complexity class NTIME(f(n)) is a strict subset of NTIME(g(n)). Consequences The time hierarchy theorems guarantee that the deterministic and non-deterministic versions of the exponential hierarchy are genuine hierarchies: in other words P ⊊ EXPTIME ⊊ 2-EXP ⊊ ... and NP ⊊ NEXPTIME ⊊ 2-NEXP ⊊ .... For example, P ⊊ EXPTIME, since P ⊆ DTIME(2^n) ⊊ DTIME(2^(2n)) ⊆ EXPTIME. Indeed, DTIME(2^n) ⊊ DTIME(2^(2n)) follows from the time hierarchy theorem. The theorem also guarantees that there are problems in P requiring arbitrarily large exponents to solve; in other words, P does not collapse to DTIME(n^k) for any fixed k. For example, there are problems solvable in n^5000 time but not n^4999 time.
This is one argument against Cobham's thesis, the convention that P is a practical class of algorithms. If such a collapse did occur, we could deduce that P ≠ PSPACE, since it is a well-known theorem that DTIME(f(n)) is strictly contained in DSPACE(f(n)). However, the time hierarchy theorems provide no means to relate deterministic and non-deterministic complexity, or time and space complexity, so they cast no light on the great unsolved questions of computational complexity theory: whether P and NP, NP and PSPACE, PSPACE and EXPTIME, or EXPTIME and NEXPTIME are equal or not. Sharper hierarchy theorems The gap of roughly a logarithmic factor between the lower and upper time bounds in the hierarchy theorem can be traced to the efficiency of the device used in the proof, namely a universal program that maintains a step-count. This can be done more efficiently on certain computational models. The sharpest results, presented below, have been proved for the unit-cost random-access machine, and for a programming language model whose programs operate on a binary tree that is always accessed via its root. This model, introduced by Neil D. Jones, is stronger than a deterministic Turing machine but weaker than a random-access machine. For these models, the theorem has the following form: If f(n) is a time-constructible function, then there exists a decision problem which cannot be solved in worst-case deterministic time f(n) but can be solved in worst-case time a·f(n) for some constant a (dependent on f). Thus, a constant-factor increase in the time bound allows for solving more problems, in contrast with the situation for Turing machines (see Linear speedup theorem). Moreover, Ben-Amram proved that, in the above models, for f of polynomial growth rate (but more than linear), it is the case that for all ε > 0, there exists a decision problem which cannot be solved in worst-case deterministic time f(n) but can be solved in worst-case time (1 + ε)f(n).
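The step-counting simulation and the diagonal construction used in the proof above can be illustrated informally. The Python sketch below is not a Turing-machine simulator: it only mimics the "clock" idea by representing machines as generators that yield once per simulated step, and the names (run_with_clock, fast_machine, slow_machine) and the toy time bound f are illustrative inventions, not standard terminology.

```python
# Toy illustration of the step-bounded ("clocked") simulation from the proof.
# A "machine" is a Python generator that yields once per computation step and
# finally returns True (accept) or False (reject).

def run_with_clock(machine, x, budget):
    """Simulate `machine` on input x for at most `budget` steps.

    Returns True only if the machine accepts within the budget, mirroring the
    machine R in the proof, which writes out f(|x|) zeros and uses them as a clock.
    """
    gen = machine(x)
    try:
        for _ in range(budget):
            next(gen)                 # advance the simulation by one step
    except StopIteration as halt:
        return bool(halt.value)       # machine halted within the budget
    return False                      # budget exhausted: not accepted in time

def fast_machine(x):
    for _ in range(len(x)):           # roughly n steps
        yield
    return x.startswith("1")

def slow_machine(x):
    for _ in range(len(x) ** 2):      # roughly n^2 steps
        yield
    return x.startswith("1")

f = lambda n: 2 * n                   # a toy "time bound"
x = "10101010"
print(run_with_clock(fast_machine, x, f(len(x))))   # True: accepts within f(|x|) steps
print(run_with_clock(slow_machine, x, f(len(x))))   # False: needs more than f(|x|) steps
```

A diagonal machine in the proof's sense would run such a clocked simulation of a machine on its own description and return the opposite answer, which is what forces it outside TIME(f(n)).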
Mathematics
Complexity theory
null
184300
https://en.wikipedia.org/wiki/Brookite
Brookite
Brookite is the orthorhombic variant of titanium dioxide (TiO2), which occurs in four known natural polymorphic forms (minerals with the same composition but different structure). The other three of these forms are akaogiite (monoclinic), anatase (tetragonal) and rutile (tetragonal). Brookite is rare compared to anatase and rutile and, like these forms, it exhibits photocatalytic activity. Brookite also has a larger cell volume than either anatase or rutile, with 8 TiO2 groups per unit cell, compared with 4 for anatase and 2 for rutile. Iron (Fe), tantalum (Ta) and niobium (Nb) are common impurities in brookite. Brookite was named in 1825 by French mineralogist Armand Lévy for Henry James Brooke (1771–1857), an English crystallographer, mineralogist and wool trader. Arkansite is a variety of brookite from Magnet Cove, Arkansas, US. It is also found in the Murun Massif on the Olyokma-Chara Plateau of Eastern Siberia, Russia, part of the Aldan Shield. At temperatures above about 750 °C, brookite will revert to the rutile structure. Unit cell Brookite belongs to the orthorhombic dipyramidal crystal class 2/m 2/m 2/m (also designated mmm). The space group is Pcab and the unit cell parameters are a = 5.4558 Å, b = 9.1819 Å and c = 5.1429 Å. The formula is TiO2, with 8 formula units per unit cell. Structure The brookite structure is built up of distorted octahedra with a titanium ion at the center and oxygen ions at each of the six vertices. Each octahedron shares three edges with adjoining octahedra, forming an orthorhombic structure. Appearance Brookite crystals are typically tabular, elongated and striated parallel to their length. They may also be pyramidal, pseudo-hexagonal or prismatic. Brookite and rutile may grow together in an epitaxial relationship. Brookite is usually brown in color, sometimes yellowish or reddish brown, or even black. Beautiful, deep red crystals similar to pyrope and almandite garnet are also known. Brookite displays a submetallic luster. It is opaque to translucent, transparent in thin fragments, and yellowish brown to dark brown in transmitted light. Optical properties Brookite is doubly refracting, as are all orthorhombic minerals, and it is biaxial (+). Refractive indices are very high, above 2.5, which is even higher than diamond at 2.42. For comparison, ordinary window glass has a refractive index of about 1.5. Brookite exhibits very weak pleochroism: yellowish, reddish and orange to brown. It is neither fluorescent nor radioactive. Physical properties Brookite is a brittle mineral, with a subconchoidal to irregular fracture and poor cleavage in one direction parallel to the c crystal axis and traces of cleavage in a direction perpendicular to both the a and the b crystal axes. Twinning is uncertain. The mineral has a Mohs hardness of 5½ to 6, between apatite and feldspar. This is the same hardness as anatase and a little less than that of rutile (6 to 6½). The specific gravity is 4.08 to 4.18, between that of anatase at 3.9 and rutile at 4.2. Occurrence and associations Brookite is an accessory mineral in alpine veins in gneiss and schist; it is also a common detrital mineral. Associated minerals include its polymorphs anatase and rutile, and also titanite, orthoclase, quartz, hematite, calcite, chlorite and muscovite. The type locality is Twll Maen Grisial, Fron Olau, Prenteg, Gwynedd, Wales. In 2004, brookite crystals were found in the Kharan District of Balochistan, Pakistan.
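The unit-cell data above allow a quick consistency check of the quoted specific gravity. The following sketch computes the theoretical (X-ray) density from the orthorhombic cell volume and the eight TiO2 formula units per cell; the atomic masses used are standard values, and the snippet is an illustration rather than a refined crystallographic calculation.

```python
# Estimate brookite's theoretical density from the unit-cell data quoted above.
# Orthorhombic cell: volume V = a * b * c; density = Z * M / (N_A * V).

N_A = 6.02214076e23                 # Avogadro constant, 1/mol
a, b, c = 5.4558, 9.1819, 5.1429    # cell edges in angstroms (values from the text)
Z = 8                               # TiO2 formula units per cell (from the text)
M = 47.867 + 2 * 15.999             # molar mass of TiO2 in g/mol

volume_A3 = a * b * c               # cell volume in cubic angstroms (~258 Å³)
volume_cm3 = volume_A3 * 1e-24      # 1 Å³ = 1e-24 cm³
density = Z * M / (N_A * volume_cm3)

print(f"cell volume ≈ {volume_A3:.1f} Å³, density ≈ {density:.2f} g/cm³")
# ≈ 4.1 g/cm³, consistent with the measured specific gravity of 4.08–4.18.
```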
Physical sciences
Minerals
Earth science
184306
https://en.wikipedia.org/wiki/Perovskite%20%28structure%29
Perovskite (structure)
A perovskite is any material of formula ABX3 with a crystal structure similar to that of the mineral perovskite, which consists of calcium titanium oxide (CaTiO3). The mineral was first discovered in the Ural mountains of Russia by Gustav Rose in 1839 and named after Russian mineralogist L. A. Perovski (1792–1856). 'A' and 'B' are two positively charged ions (i.e. cations), often of very different sizes, and X is a negatively charged ion (an anion, frequently oxide) that bonds to both cations. The 'A' atoms are generally larger than the 'B' atoms. The ideal cubic structure has the B cation in 6-fold coordination, surrounded by an octahedron of anions, and the A cation in 12-fold cuboctahedral coordination. Additional perovskite forms may exist where both or either of the A and B sites have a mixed-occupancy configuration of A1(1−x)A2(x) and/or B1(1−y)B2(y), and the X may deviate from the ideal coordination configuration as ions within the A and B sites undergo changes in their oxidation states. As one of the most abundant structural families, perovskites are found in an enormous number of compounds which have wide-ranging properties, applications and importance. Natural compounds with this structure are perovskite, loparite, and the silicate perovskite bridgmanite. Since the 2009 discovery of perovskite solar cells, which contain methylammonium lead halide perovskites, there has been considerable research interest into perovskite materials. Structure Perovskite structures are adopted by many compounds that have the chemical formula ABX3. The idealized form is a cubic structure (space group Pm3m, no. 221), which is rarely encountered. The orthorhombic (e.g. space group Pnma, no. 62, or Amm2, no. 38) and tetragonal (e.g. space group I4/mcm, no. 140, or P4mm, no. 99) structures are the most common non-cubic variants. Although the perovskite structure is named after CaTiO3, this mineral has a non-cubic structure. SrTiO3 and CaRbF3 are examples of cubic perovskites. Barium titanate is an example of a perovskite which can take on the rhombohedral (space group R3m, no. 160), orthorhombic, tetragonal and cubic forms depending on temperature. In the idealized cubic unit cell of such a compound, the type 'A' atom sits at cube corner position (0, 0, 0), the type 'B' atom sits at the body-center position (1/2, 1/2, 1/2) and X atoms (typically oxygen) sit at face-centered positions (1/2, 1/2, 0), (1/2, 0, 1/2) and (0, 1/2, 1/2). An equivalent choice of unit cell likewise places A at the cube corner position, B at the body center, and X at face-centered positions. Four general categories of cation-pairing are possible: A+B2+X−3, or 1:2 perovskites; A2+B4+X2−3, or 2:4 perovskites; A3+B3+X2−3, or 3:3 perovskites; and A+B5+X2−3, or 1:5 perovskites. The relative ion size requirements for stability of the cubic structure are quite stringent, so slight buckling and distortion can produce several lower-symmetry distorted versions, in which the coordination numbers of A cations, B cations or both are reduced. Tilting of the BO6 octahedra reduces the coordination of an undersized A cation from 12 to as low as 8. Conversely, off-centering of an undersized B cation within its octahedron allows it to attain a stable bonding pattern. The resulting electric dipole is responsible for the property of ferroelectricity and is shown by perovskites such as BaTiO3 that distort in this fashion. Complex perovskite structures contain two different B-site cations. This results in the possibility of ordered and disordered variants.
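The idealized cubic cell described above can be written down explicitly as a short site list. The sketch below is illustrative only: the dictionary layout and helper function are not taken from any particular crystallography package, and the SrTiO3 lattice parameter used in the example (about 3.905 Å) is a commonly quoted room-temperature value rather than a figure from this article.

```python
# Fractional coordinates of the idealized cubic ABX3 perovskite cell, as described above:
# A at the corner, B at the body centre, X at the face centres.

ideal_cubic_perovskite = {
    "A": [(0.0, 0.0, 0.0)],                                     # corner, 12-fold coordinated
    "B": [(0.5, 0.5, 0.5)],                                     # body centre, octahedral
    "X": [(0.5, 0.5, 0.0), (0.5, 0.0, 0.5), (0.0, 0.5, 0.5)],   # face centres
}

def to_cartesian(fractional, a):
    """Convert fractional coordinates to cartesian for a cubic cell of edge a."""
    return tuple(a * u for u in fractional)

# Example: SrTiO3 with a ≈ 3.905 Å (assumed value for illustration).
a = 3.905
for site, positions in ideal_cubic_perovskite.items():
    for p in positions:
        print(site, to_cartesian(p, a))
```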
Layered perovskites Perovskites may be structured in layers, with the structure separated by thin sheets of intrusive material. Different forms of intrusions, based on the chemical makeup of the intrusion, are defined as: Aurivillius phase: the intruding layer is composed of a [Bi2O2]2+ ion, occurring every n layers, leading to an overall chemical formula of [Bi2O2][A(n−1)B(n)O(3n+1)]. Their oxide ion-conducting properties were first discovered in the 1970s by Takahashi et al., and they have been used for this purpose ever since. Dion–Jacobson phase: the intruding layer is composed of an alkali metal (M) every n layers, giving the overall formula as M[A(n−1)B(n)O(3n+1)]. Ruddlesden–Popper phase: the simplest of the phases, the intruding layer occurs between every one (n = 1) or multiple (n > 1) layers of the lattice. Ruddlesden–Popper phases have a similar relationship to perovskites in terms of atomic radii of elements, with A typically being large (such as La or Sr) and the B ion being much smaller, typically a transition metal (such as Mn, Co or Ni). Recently, hybrid organic-inorganic layered perovskites have been developed, where the structure is constituted of one or more layers of [MX6]4− octahedra, where M is a +2 metal (such as Pb2+ or Sn2+) and X a halide ion (such as Cl−, Br−, or I−), separated by layers of organic cations (such as butylammonium or phenylethylammonium cations). Thin films Perovskites can be deposited as epitaxial thin films on top of other perovskites, using techniques such as pulsed laser deposition and molecular-beam epitaxy. These films can be a couple of nanometres thick or as small as a single unit cell. The well-defined and unique structures at the interfaces between the film and substrate can be used for interface engineering, where new types of properties can arise. This can happen through several mechanisms, from mismatch strain between the substrate and film, to change in the oxygen octahedral rotation, compositional changes, and quantum confinement. An example of this is LaAlO3 grown on SrTiO3, where the interface can exhibit conductivity, even though both LaAlO3 and SrTiO3 are non-conductive. Another example is SrTiO3 grown on LSAT ((LaAlO3)0.3(Sr2AlTaO6)0.7) or DyScO3, which can turn the incipient ferroelectric SrTiO3 into a room-temperature ferroelectric by means of epitaxially applied biaxial strain. The lattice mismatch of GdScO3 to SrTiO3 (+1.0%) applies tensile stress, resulting in a decrease of the out-of-plane lattice constant of SrTiO3, compared to LSAT (−0.9%), which epitaxially applies compressive stress leading to an extension of the out-of-plane lattice constant of SrTiO3 (and a corresponding decrease of the in-plane lattice constant). Octahedral tilting Beyond the most common perovskite symmetries (cubic, tetragonal, orthorhombic), a more precise determination leads to a total of 23 different structure types that can be found. These 23 structures can be categorized into 4 different so-called tilt systems that are denoted by their respective Glazer notation. The notation consists of a letter (a, b or c) describing the rotation around each Cartesian axis and a superscript +/−/0 to denote the rotation with respect to the adjacent layer. A "+" denotes that the rotations of two adjacent layers point in the same direction, whereas a "−" denotes that adjacent layers are rotated in opposite directions. Common examples are a0a0a0, a0a0a− and a0a0a+.
Examples Minerals The perovskite structure is adopted at high pressure by bridgmanite, a silicate with the chemical formula (Mg,Fe)SiO3, which is the most common mineral in the Earth's mantle. As pressure increases, the SiO4(4−) tetrahedral units in the dominant silica-bearing minerals become unstable compared with SiO6(8−) octahedral units. At the pressure and temperature conditions of the lower mantle, the second most abundant material is likely the rocksalt-structured oxide, periclase. At the high pressure conditions of the Earth's lower mantle, the pyroxene enstatite, MgSiO3, transforms into a denser perovskite-structured polymorph; this phase may be the most common mineral in the Earth. This phase has the orthorhombically distorted perovskite structure (GdFeO3-type structure) that is stable at pressures from ~24 GPa to ~110 GPa. However, it cannot be transported from depths of several hundred km to the Earth's surface without transforming back into less dense materials. At higher pressures, MgSiO3 perovskite, commonly known as silicate perovskite, transforms to post-perovskite. Complex perovskites Although there is a large number of known simple ABX3 perovskites, this number can be greatly expanded if the A and B sites are increasingly doubled or made complex, as in the double perovskites A2BB′X6. Ordered double perovskites are usually denoted as A2BB′O6, whereas disordered ones are denoted as A(BB′)O3. In ordered perovskites, three different types of ordering are possible: rock-salt, layered, and columnar. The most common ordering is rock-salt, followed by the much less common disordered arrangement, with columnar and layered orderings a very distant third. The formation of rock-salt superstructures is dependent on the B-site cation ordering. Octahedral tilting can occur in double perovskites; however, Jahn–Teller distortions and alternative modes alter the B–O bond length. Others Although the most common perovskite compounds contain oxygen, there are a few perovskite compounds that form without oxygen. Fluoride perovskites such as NaMgF3 are well known. A large family of metallic perovskite compounds can be represented by RT3M (R: rare-earth or other relatively large ion, T: transition metal ion and M: light metalloids). The metalloids occupy the octahedrally coordinated "B" sites in these compounds. RPd3B, RRh3B and CeRu3C are examples. MgCNi3 is a metallic perovskite compound and has received a lot of attention because of its superconducting properties. An even more exotic type of perovskite is represented by the mixed oxide-aurides of Cs and Rb, such as Cs3AuO, which contain large alkali cations in the traditional "anion" sites, bonded to O2− and Au− anions. Materials properties Perovskite materials exhibit many interesting and intriguing properties from both the theoretical and the application point of view. Colossal magnetoresistance, ferroelectricity, superconductivity, charge ordering, spin dependent transport, high thermopower and the interplay of structural, magnetic and transport properties are commonly observed features in this family. These compounds are used as sensors and catalyst electrodes in certain types of fuel cells and are candidates for memory devices and spintronics applications. Many superconducting ceramic materials (the high temperature superconductors) have perovskite-like structures, often with 3 or more metals including copper, and some oxygen positions left vacant. One prime example is yttrium barium copper oxide, which can be insulating or superconducting depending on the oxygen content.
Chemical engineers are considering a cobalt-based perovskite material as a replacement for platinum in catalytic converters for diesel vehicles. Aspirational applications Physical properties of interest to materials science among perovskites include superconductivity, magnetoresistance, ionic conductivity, and a multitude of dielectric properties, which are of great importance in microelectronics and telecommunications. They are also of some interest for scintillators, as they have a large light yield for radiation conversion. Because of the flexibility of bond angles inherent in the perovskite structure, there are many different types of distortions that can occur from the ideal structure. These include tilting of the octahedra, displacements of the cations out of the centers of their coordination polyhedra, and distortions of the octahedra driven by electronic factors (Jahn-Teller distortions). The financially biggest application of perovskites is in ceramic capacitors, in which BaTiO3 is used because of its high dielectric constant. Photovoltaics Synthetic perovskites are possible materials for high-efficiency photovoltaics – they have shown a conversion efficiency of up to 26.3% and can be manufactured using the same thin-film manufacturing techniques as those used for thin-film silicon solar cells. Methylammonium tin halides and methylammonium lead halides are of interest for use in dye-sensitized solar cells. Some perovskite PV cells reach a theoretical peak efficiency of 31%. Among the methylammonium halides studied so far the most common is methylammonium lead triiodide (CH3NH3PbI3). It has a high charge carrier mobility and charge carrier lifetime that allow light-generated electrons and holes to move far enough to be extracted as current, instead of losing their energy as heat within the cell. Its effective diffusion lengths are some 100 nm for both electrons and holes. Methylammonium halides are deposited by low-temperature solution methods (typically spin-coating). Other low-temperature (below 100 °C) solution-processed films tend to have considerably smaller diffusion lengths. Stranks et al. described nanostructured cells using a mixed methylammonium lead halide (CH3NH3PbI3−xClx) and demonstrated one amorphous thin-film solar cell with an 11.4% conversion efficiency, and another that reached 15.4% using vacuum evaporation. The film thickness of about 500 to 600 nm implies that the electron and hole diffusion lengths were at least of this order. They measured values of the diffusion length exceeding 1 μm for the mixed perovskite, an order of magnitude greater than the 100 nm for the pure iodide. They also showed that carrier lifetimes in the mixed perovskite are longer than in the pure iodide. Liu et al. applied scanning photocurrent microscopy to show that the electron diffusion length in mixed halide perovskite along the (110) plane is on the order of 10 μm. For CH3NH3PbI3, open-circuit voltage (VOC) typically approaches 1 V, while for CH3NH3PbI3−xClx with low Cl content, VOC > 1.1 V has been reported. Because the band gaps (Eg) of both are 1.55 eV, VOC-to-Eg ratios are higher than usually observed for similar third-generation cells. With wider bandgap perovskites, VOC up to 1.3 V has been demonstrated. The technique offers the potential of low cost because of the low-temperature solution methods and the absence of rare elements. Cell durability is currently insufficient for commercial use, as the solar cells are prone to degradation due to volatility of the organic [CH3NH3]+I− salt.
The all-inorganic perovskite cesium lead iodide (CsPbI3) circumvents this problem, but is itself phase-unstable; low-temperature solution methods for stabilizing it have only recently been developed. Planar heterojunction perovskite solar cells can be manufactured in simplified device architectures (without complex nanostructures) using only vapor deposition. This technique produces 15% solar-to-electrical power conversion as measured under simulated full sunlight. Lasers LaAlO3 doped with neodymium gave laser emission at 1080 nm. Mixed methylammonium lead halide (CH3NH3PbI3−xClx) cells fashioned into optically pumped vertical-cavity surface-emitting lasers (VCSELs) convert visible pump light to near-IR laser light with a 70% efficiency. Light-emitting diodes Due to their high photoluminescence quantum efficiencies, perovskites may find use in light-emitting diodes (LEDs). Although the stability of perovskite LEDs is not yet as good as III-V or organic LEDs, there is ongoing research to solve this problem, such as incorporating organic molecules or potassium dopants in perovskite LEDs. Perovskite-based printing ink can be used to produce OLED display and quantum dot display panels. Photoelectrolysis Water electrolysis at 12.3% efficiency has been demonstrated using perovskite photovoltaics. Scintillators Cerium-doped lutetium aluminum perovskite (LuAP:Ce) single crystals were reported. The main property of those crystals is a large mass density of 8.4 g/cm3, which gives a short X- and gamma-ray absorption length. The scintillation light yield and the decay time with a Cs-137 radiation source are 11,400 photons/MeV and 17 ns, respectively. Those properties made LuAP:Ce scintillators commercially attractive, and they were used quite often in high-energy physics experiments. Eleven years later, a group in Japan proposed Ruddlesden-Popper solution-based hybrid organic-inorganic perovskite crystals as low-cost scintillators. However, the properties were not so impressive in comparison with LuAP:Ce. Nine years after that, solution-based hybrid organic-inorganic perovskite crystals became popular again through a report of high light yields of more than 100,000 photons/MeV at cryogenic temperatures. Perovskite nanocrystal scintillators for X-ray imaging screens have recently been demonstrated, triggering further research efforts on perovskite scintillators. Layered Ruddlesden-Popper perovskites have shown potential as fast novel scintillators with room-temperature light yields up to 40,000 photons/MeV, fast decay times below 5 ns and negligible afterglow. In addition, this class of materials has shown capability for wide-range particle detection, including alpha particles and thermal neutrons. Examples of perovskites Simple: Strontium titanate Calcium titanate Lead titanate Bismuth ferrite Lanthanum ytterbium oxide Silicate perovskite Lanthanum manganite Yttrium aluminum perovskite (YAP) Lutetium aluminum perovskite (LuAP) Solid solutions: Lanthanum strontium manganite LSAT (lanthanum aluminate – strontium aluminum tantalate) Lead scandium tantalate Lead zirconate titanate Methylammonium lead halide Methylammonium tin halide Formamidinium tin halide
Physical sciences
Crystallography
Physics
184324
https://en.wikipedia.org/wiki/Lepidolite
Lepidolite
Lepidolite is a lilac-gray or rose-colored member of the mica group of minerals, with the approximate chemical formula K(Li,Al)3(Al,Si)4O10(F,OH)2. It is the most abundant lithium-bearing mineral and is a secondary source of this metal. It is the major source of the alkali metal rubidium. Lepidolite is found with other lithium-bearing minerals, such as spodumene, in pegmatite bodies. It has also been found in high-temperature quartz veins, greisens and granite. Description Lepidolite is a phyllosilicate mineral and a member of the polylithionite-trilithionite series. Lepidolite is part of a three-part series consisting of polylithionite, lepidolite, and trilithionite. All three minerals share similar properties and differ mainly in the ratio of lithium to aluminium in their chemical formulas. The Li:Al ratio varies from 2:1 in polylithionite to 1.5:1.5 in trilithionite. Lepidolite is found naturally in a variety of colors, mainly pink, purple, and red, but also gray and, rarely, yellow and colorless. Because lepidolite is a lithium-bearing mica, it is often wrongly assumed that lithium is what causes the pink hues that are so characteristic of this mineral. Instead, it is trace amounts of manganese that cause the pink, purple, and red colors. Structure and composition Lepidolite belongs to the group of trioctahedral micas, with a structure resembling biotite. This structure is sometimes described as TOT-c. The crystal consists of stacked TOT layers weakly bound together by potassium ions (c). Each TOT layer consists of two outer T (tetrahedral) sheets in which silicon or aluminium ions each bind with four oxygen atoms, which in turn bind to other aluminium and silicon to form the sheet structure. The inner O (octahedral) sheet contains iron or magnesium ions each bonded to six oxygen, fluoride, or hydroxide ions. In biotite, silicon occupies three out of every four tetrahedral sites in the crystal and aluminium occupies the remaining tetrahedral sites, while magnesium or iron fill all the available octahedral sites. Lepidolite shares this structure, but aluminium and lithium substitute for magnesium and iron in the octahedral sites. If nearly equal quantities of aluminium and lithium occupy the octahedral sites, the resulting mineral is trilithionite, K(Li1.5Al1.5)(AlSi3O10)(F,OH)2. If lithium occupies two out of three octahedral sites and aluminium the remaining octahedral site, then charge balance can be preserved only if silicon occupies all the tetrahedral sites. The result is polylithionite, KLi2Al(Si4O10)(F,OH)2. Lepidolite has a composition intermediate between these end members. Fluoride ions can substitute for some of the hydroxide in the structure, while sodium, rubidium, or caesium may substitute in small quantities for potassium. Occurrences Lepidolite is associated with other lithium-bearing minerals like spodumene in pegmatite bodies. It is the major source of the alkali metal rubidium. In 1861, Robert Bunsen and Gustav Kirchhoff processed a large quantity of lepidolite to yield a few grams of rubidium salts for analysis, and thereby discovered the new element rubidium. It occurs in granite pegmatites, in some high-temperature quartz veins, greisens and granites. Associated minerals include quartz, feldspar, spodumene, amblygonite, tourmaline, columbite, cassiterite, topaz and beryl. Notable occurrences include Brazil; Ural Mountains, Russia; California and the Black Hills, United States; Tanco Mine, Bernic Lake, Manitoba, Canada; and Madagascar.
Physical sciences
Silicate minerals
Earth science
184325
https://en.wikipedia.org/wiki/Spodumene
Spodumene
Spodumene is a pyroxene mineral consisting of lithium aluminium inosilicate, LiAl(SiO3)2, and is a commercially important source of lithium. It occurs as prismatic crystals, often of great size, ranging from colorless to yellowish, purplish or lilac (kunzite, see below) and yellowish-green or emerald-green (hiddenite). Single crystals up to 14.3 m (47 ft) in size are reported from the Black Hills of South Dakota, United States. The naturally occurring low-temperature form α-spodumene is in the monoclinic system, and the high-temperature β-spodumene crystallizes in the tetragonal system. α-spodumene converts to β-spodumene at temperatures above 900 °C. Crystals are typically heavily striated parallel to the principal axis. Crystal faces are often etched and pitted with triangular markings. Discovery and occurrence Spodumene was first described in 1800 for an occurrence in the type locality in Utö, Södermanland, Sweden. It was discovered by Brazilian naturalist José Bonifácio de Andrada e Silva. The name is derived from the Greek spodumenos (σποδούμενος), meaning "burnt to ashes", owing to the opaque ash-grey appearance of material refined for use in industry. Spodumene occurs in lithium-rich granite pegmatites and aplites. Associated minerals include quartz, albite, petalite, eucryptite, lepidolite and beryl. Transparent material has long been used as a gemstone, with the varieties kunzite and hiddenite noted for their strong pleochroism. Source localities include the Democratic Republic of Congo, Afghanistan, Australia, Brazil, Madagascar, Pakistan, Québec in Canada, and North Carolina and California in the U.S. Since 2018, the Democratic Republic of Congo (DRC) has been known to have the largest lithium spodumene hard-rock deposit in the world, with mining operations occurring in the central DRC territory of Manono, Tanganyika Province. As of 2021, the Australian company AVZ Minerals is developing the Manono Lithium and Tin project and has a resource of 400 million tonnes of high-grade, low-impurity spodumene hard rock at 1.65% lithium oxide (Li2O), based on studies and drilling of Roche Dure, one of several pegmatites in the deposit. Economic importance Spodumene is an important source of lithium, for use in ceramics, mobile phones and batteries (including for automotive applications), medicine, Pyroceram and as a fluxing agent. As of 2019, around half of lithium is extracted from mineral ores, which mainly consist of spodumene. Lithium is recovered from spodumene by dissolution in acid, or extraction with other reagents, after roasting to convert it to the more reactive β-spodumene. The advantage of spodumene as a lithium source compared to brine sources is the higher lithium concentration, but at a higher extraction cost. In 2016, the price was forecast to be $500–600/ton for years to come. However, the price spiked above $800/ton in January 2018, and production increased more than consumption, reducing the price to $400/ton in September 2020. World production of lithium via spodumene was around 80,000 metric tonnes per annum in 2018, primarily from the Greenbushes pegmatite of Western Australia and from some Chinese and Chilean sources. The Talison Minerals mine in Greenbushes, Western Australia (involving Tianqi Lithium, Albemarle Corporation and Global Advanced Metals), is reported to be the world's second largest and to have the highest grade of ore at 2.4% Li2O (2012 figures). In 2020, Australia expanded spodumene mining to become the leading lithium-producing country in the world.
An important economic concentrate of spodumene, known as spodumene concentrate 6 or SC6, is a high-purity lithium ore with approximately 6 percent lithium oxide (Li2O) content, produced as a raw material for the subsequent production of lithium-ion batteries for electric vehicles. Refining Extraction of lithium from spodumene, often spodumene concentrate 6 (SC6), is challenging due to the tight binding of lithium in the crystal structure. Traditional lithium refining in the 2010s involves acid leaching of lithium-containing ores, precipitation of impurities, concentration of the lithium solution, and then conversion to lithium carbonate or lithium hydroxide. These refining methods result in significant quantities of caustic waste effluent and tailings, which are usually either highly acidic or alkaline. Another processing method relies on pyrometallurgical processing of SC6: roasting at high temperatures (above about 900 °C, as noted above) to convert the spodumene from the tightly bound alpha structure to the more open beta structure, from which the lithium is more easily extracted, followed by cooling and reaction with various reagents in a sequence of hydrometallurgical processing steps. Some of these methods use non-caustic reagents and result in reduced waste streams, potentially allowing the use of a closed-loop refining process. Suitable extraction reagents include alkali metal sulfates, such as sodium sulfate; sodium carbonate; chlorine; or hydrofluoric acid. A common form of more highly refined lithium is lithium hydroxide, commonly used as an input in the battery industry to manufacture lithium-ion (Li-ion) battery cathode material. Gemstone varieties Hiddenite Hiddenite is a pale, emerald-green gem variety first reported from Alexander County, North Carolina, U.S. It was named in honor of William Earl Hidden (16 February 1853 – 12 June 1918), mining engineer, mineral collector, and mineral dealer. This emerald-green variety of spodumene is colored by chromium, just as for emeralds. Some green spodumene is colored with substances other than chromium; such stones tend to have a lighter color and are not true hiddenite. Kunzite Kunzite is a purple-colored gemstone, a variety of spodumene, with the color coming from minor to trace amounts of manganese. Exposure to sunlight can fade its color. Kunzite was discovered in 1902, and was named after George Frederick Kunz, Tiffany & Co's chief jeweler at the time, and a noted mineralogist. It has been found in Brazil, the U.S., Canada, CIS, Mexico, Sweden, Western Australia, Afghanistan and Pakistan. Triphane Triphane is the name used for yellowish varieties of spodumene.
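The grades quoted above (about 6% Li2O for SC6 and 1.65% Li2O for the Manono resource) can be translated into contained lithium using molar masses alone. The following sketch shows the arithmetic; the function name and constants are illustrative and not taken from any industry-standard tool.

```python
# Rough grade conversions for spodumene concentrate, illustrating the Li2O figures above.
# The conversion factors follow directly from molar masses.

M_LI = 6.941                              # g/mol
M_O = 15.999                              # g/mol
M_LI2O = 2 * M_LI + M_O                   # ≈ 29.88 g/mol
M_LI2CO3 = 2 * M_LI + 12.011 + 3 * M_O    # ≈ 73.89 g/mol (lithium carbonate)

LI_PER_LI2O = 2 * M_LI / M_LI2O           # ≈ 0.465 t Li per t Li2O
LCE_PER_LI2O = M_LI2CO3 / M_LI2O          # ≈ 2.47 t lithium carbonate equivalent per t Li2O

def lithium_content(tonnes_material, li2o_grade):
    """Tonnes of contained lithium metal and lithium carbonate equivalent (LCE)."""
    li2o = tonnes_material * li2o_grade
    return li2o * LI_PER_LI2O, li2o * LCE_PER_LI2O

# One tonne of SC6 (6% Li2O): roughly 28 kg of lithium metal, about 148 kg LCE.
print(lithium_content(1.0, 0.06))
```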
Physical sciences
Silicate minerals
Earth science
1061346
https://en.wikipedia.org/wiki/Allium%20fistulosum
Allium fistulosum
Allium fistulosum, the Welsh onion, also commonly called bunching onion, long green onion, Japanese bunching onion, and spring onion, is a species of perennial plant, often considered to be a kind of scallion. The species is very similar in taste and odor to the related common onion, Allium cepa, and hybrids between the two (tree onions) exist. A. fistulosum, however, does not develop bulbs, and its leaves and scapes are hollow (fistulosum means "hollow"). Larger varieties of A. fistulosum, such as the Japanese negi, resemble the leek, whilst smaller varieties resemble chives. A. fistulosum can multiply by forming perennial evergreen clumps. It is also grown in a bunch as an ornamental plant. Names The common name "Welsh onion" does not refer to Wales; indeed, the plant is neither indigenous to Wales nor particularly common in Welsh cuisine (the green Allium common to Wales is the leek, A. ampeloprasum, the national vegetable of Wales). Instead, it derives from a near-obsolete botanical use of "Welsh" in the sense "foreign, non-native", as the species is native to China, though cultivated in many places and naturalized in scattered locations throughout Eurasia and North America. Historically, A. fistulosum was known as the cibol. In Cornwall, they are known as chibols, and in the west of Scotland as sybows. Other names that may be applied to this plant include green onion, salad onion, and spring onion. These names are ambiguous, as they may also be used to refer to any young green onion stalk, whether grown from Welsh onions, common onions, or other similar members of the genus Allium (also see scallion). Culinary use A. fistulosum is an ingredient in Asian cuisine, especially in East Asia and Southeast Asia. It is particularly important in China, Japan, and Korea, hence one of the English names for this plant, Japanese bunching onion. In the West, A. fistulosum is primarily used as a scallion or salad onion, but is more widely used in other parts of the world, particularly East Asia. China In China, it is often used in scallion pancakes, and as a garnish for a variety of dishes. It is also mixed with meat, into shumai dumplings or pearl meatballs. Japan The Japanese name is negi (葱), which can also refer to other plants of the genus Allium, or more specifically naganegi (長葱), meaning "long onion". Common onions were introduced to East Asia in the 19th century, but A. fistulosum remains more popular and widespread. It is used in miso soup, negimaki (beef and scallion rolls), among other dishes, and it is in wide use as a sliced garnish, such as on teriyaki or takoyaki. Korea In Korea, A. fistulosum along with A. × proliferum is called pa (, "scallion"), while common onions are called yangpa (, "Western scallion"). Larger varieties, looking similar to leek and sometimes referred to as "Asian leek", are called daepa (, "big scallion"), while the thinner early variety is called silpa (, "thread scallion"). A similar scallion plant, A. × wakegi (now considered a synonym of A. × proliferum), is called jjokpa (). Both daepa and silpa are usually used as a spice, herb, or garnish in Korean cuisine. The white part of daepa is often used as the flavour base for various broths and infused oil, while the green part of silpa is preferred as garnish. Dishes using daepa include pa-jangajji (pickled scallions), pa-mandu (scallion dumplings), pa-sanjeok (skewered beef and scallions), and padak (scallion chicken), which is a variety of Korean fried chicken topped with shredded raw daepa. 
Dishes using silpa include pa-namul (seasoned scallions), pa-jangguk (scallion beef-broth soup), and pa-ganghoe (parboiled scallion rolls) where silpa is used as a ribbon that bundles other ingredients. Russia A. fistulosum is used in Russia in the spring for adding green leaves to salads. Jamaica Known as escallion, A. fistulosum is an ingredient in Jamaican cuisine, in combination with thyme, Scotch bonnet pepper, garlic, and allspice (called pimento). Recipes with escallion sometimes suggest leek as a substitute in salads. Jamaican dried spice mixtures using escallion are available commercially. The Jamaican name is probably a variant of scallion, the term used loosely for the spring onion and various other plants in the genus Allium.
Biology and health sciences
Leafy vegetables
Plants
1062339
https://en.wikipedia.org/wiki/Polynesian%20rat
Polynesian rat
The Polynesian rat, Pacific rat or little rat (Rattus exulans), or kiore, is the third most widespread species of rat in the world, behind the brown rat and black rat. Contrary to its vernacular name, the Polynesian rat originated in Southeast Asia, and like its relatives has become widespread, migrating to most of Polynesia, including New Zealand, Easter Island, and Hawaii. It shares high adaptability with other rat species, extending to many environments, from grasslands to forests. It is also closely associated with humans, who provide easy access to food. It has become a major pest in most areas of its distribution. Description The Polynesian rat is similar in appearance to other rats, such as the black rat and the brown rat. It has large, round ears, a pointed snout, black/brown hair with a lighter belly, and comparatively small feet. It has a thin, long body, reaching up to about 15 cm (6 in) in length from the nose to the base of the tail, making it slightly smaller than other human-associated rats. Where it exists on smaller islands, it tends to be smaller still. It is commonly distinguished by a dark upper edge of the hind foot near the ankle; the rest of its foot is pale. Distribution and habitat The Polynesian rat is widespread throughout the Pacific and Southeast Asia. Mitochondrial DNA analysis suggests that the species originated on the island of Flores. The IUCN Red List considers it native to Bangladesh, all of mainland Southeast Asia, and Indonesia, but introduced to all of its Pacific range (including the island of New Guinea), the Philippines, Brunei, and Singapore, and of uncertain origin in Taiwan. It cannot swim over long distances, so it is considered to be a significant marker of the human migrations across the Pacific, as the Polynesians accidentally or deliberately introduced it to the islands they settled. The species has been implicated in many of the extinctions that occurred in the Pacific amongst the native birds and insects; these species had evolved in the absence of mammals and were unable to cope with the predation pressures posed by the rat. This rat also may have played a role in the complete deforestation of Easter Island by eating the nuts of the local palm tree Paschalococos, thus preventing regrowth of the forest. Although remains of the Polynesian rat in New Zealand were dated to over 2,000 years old during the 1990s, which was much earlier than the accepted dates for Polynesian migrations to New Zealand, this finding has been challenged by later research showing the rat was introduced to both the country's main islands circa 1280. Behaviour Polynesian rats are nocturnal like most rodents, and are adept climbers, often nesting in trees. In winter, when food is scarce, they commonly strip bark for consumption and satisfy themselves with plant stems. Their reproduction is typical of rats: they are polyestrous, with gestation of 21–24 days and litter sizes of 6–11 pups depending on food and other resources, and weaning takes around another month, at 28 days. They diverge only in that they do not breed year round, instead being restricted to spring and summer. Diet R. exulans is an omnivorous species, eating seeds, fruit, leaves, bark, insects, earthworms, spiders, lizards, and avian eggs and hatchlings. Polynesian rats have been observed to often take pieces of food back to a safe place to properly shell a seed or otherwise prepare certain foods. This not only protects them from predators, but also from rain and other rats.
These "husking stations" are often found among trees, near the roots, in fissures of the trunk, and even in the top branches. In New Zealand, for instance, such stations are found under rock piles and fronds shed by nīkau palms. Rat control and bird conservation New Zealand In New Zealand and its offshore islands, many bird species evolved in the absence of terrestrial mammalian predators, so developed no behavioral defenses to rats. The introduction by the Māori of the Polynesian rat into New Zealand resulted in the eradication of several species of terrestrial and small seabirds. Subsequent elimination of rats from islands has resulted in substantial increases in populations of certain seabirds and endemic terrestrial birds, as well as species of insects such as the Little Barrier Island giant wētā. As part of its program to restore these populations, such as the critically endangered kākāpō, the New Zealand Department of Conservation undertakes programs to eliminate the Polynesian rat on most offshore islands in its jurisdiction, and other conservation groups have adopted similar programs in other reserves seeking to be predator- and rat-free. However, two islands in the Hen and Chickens group, Mauitaha and Araara, have now been set aside as sanctuaries for the Polynesian rat. Rest of the Pacific NZAID has funded rat eradication programs in the Phoenix Islands of Kiribati in order to protect the bird species of the Phoenix Islands Protected Area. Between July and November 2011, a partnership of the Pitcairn Islands Government and the Royal Society for the Protection of Birds implemented a poison baiting programme on Henderson Island aimed at eradicating the Polynesian rat. Mortality was massive, but of the 50,000 to 100,000 population, 60 to 80 individuals survived and the population has now fully recovered.
Biology and health sciences
Rodents
Animals
1063435
https://en.wikipedia.org/wiki/Normal%20force
Normal force
In mechanics, the normal force is the component of a contact force that is perpendicular to the surface that an object contacts. In this instance, normal is used in the geometric sense, meaning perpendicular, as opposed to the meaning "ordinary" or "expected". A person standing still on a platform is acted upon by gravity, which would pull them down towards the Earth's core unless there were a countervailing force from the resistance of the platform's molecules; this force is named the "normal force".

The normal force is one type of ground reaction force. If the person stands on a slope and does not sink into the ground or slide downhill, the total ground reaction force can be divided into two components: a normal force perpendicular to the ground and a frictional force parallel to the ground. In another common situation, if an object hits a surface with some speed and the surface can withstand the impact, the normal force provides a rapid deceleration, which depends on the flexibility of the surface and of the object.

Equations
In the case of an object resting upon a flat table (unlike on an incline as in Figures 1 and 2), the normal force on the object is equal in magnitude but opposite in direction to the gravitational force applied on the object (its weight), that is, $N = mg$, where m is mass and g is the gravitational field strength (about 9.81 m/s² on Earth). The normal force here represents the force the table applies against the object to prevent it from sinking through the table, which requires that the table be sturdy enough to deliver this normal force without breaking. It is a common mistake to assume that the normal force and the weight are an action–reaction force pair; they are not. Rather, the normal force and the weight must be equal in magnitude to explain why there is no upward acceleration of the object. For example, a ball that bounces upwards accelerates upwards because the normal force acting on the ball is larger in magnitude than the weight of the ball.

Where an object rests on an incline as in Figures 1 and 2, the normal force is perpendicular to the plane the object rests on. Still, the normal force will be as large as necessary to prevent sinking through the surface, presuming the surface is sturdy enough. The strength of the force can be calculated as
$$N = mg\cos\theta,$$
where N is the normal force, m is the mass of the object, g is the gravitational field strength, and θ is the angle of the inclined surface measured from the horizontal.

The normal force is one of several forces which act on the object. In the simple situations considered so far, the most important other forces acting on it are friction and the force of gravity.

Using vectors
In general, the magnitude of the normal force, N, is the projection of the net surface interaction force, T, onto the normal direction, n, and so the normal force vector can be found by scaling the normal direction by that projection. The surface interaction force, in turn, is equal to the dot product of the unit normal with the Cauchy stress tensor describing the stress state of the surface. That is,
$$\mathbf{N} = \mathbf{n}\,(\mathbf{T}\cdot\mathbf{n}) = \mathbf{n}\,(\mathbf{n}\cdot\boldsymbol{\tau}\cdot\mathbf{n}),$$
or, in indicial notation,
$$N_i = n_i T_j n_j = n_i n_j \tau_{jk} n_k.$$
The parallel shear component of the contact force is known as the frictional force ($\mathbf{F}_{\mathrm{fr}}$). The static coefficient of friction for an object on the point of sliding on an inclined plane can be calculated as
$$\mu_s = \tan\theta,$$
where θ is the angle between the slope and the horizontal.
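As a worked illustration of the relations above, the following minimal Python sketch evaluates N = mg on a flat surface, N = mg cos θ on an incline, and the on-the-point-of-sliding coefficient μs = tan θ. The function names and the sample mass and angle are illustrative assumptions, not from the source.

```python
import math

G = 9.81  # gravitational field strength near Earth's surface, m/s^2

def normal_force_flat(mass_kg: float) -> float:
    """Normal force on an object resting on a horizontal surface: N = m * g."""
    return mass_kg * G

def normal_force_incline(mass_kg: float, angle_deg: float) -> float:
    """Normal force on an object resting on an incline: N = m * g * cos(theta)."""
    theta = math.radians(angle_deg)
    return mass_kg * G * math.cos(theta)

def static_friction_at_slip(angle_deg: float) -> float:
    """Static coefficient of friction for an object on the point of sliding: mu_s = tan(theta)."""
    return math.tan(math.radians(angle_deg))

if __name__ == "__main__":
    m, theta = 10.0, 30.0  # illustrative values: a 10 kg block on a 30-degree slope
    print(f"Flat surface:   N = {normal_force_flat(m):.1f} N")        # ~98.1 N
    print(f"30-deg incline: N = {normal_force_incline(m, theta):.1f} N")  # ~85.0 N
    print(f"mu_s at point of sliding: {static_friction_at_slip(theta):.2f}")  # ~0.58
```

The incline case returns a smaller normal force than the flat case because only the component of gravity perpendicular to the surface must be balanced.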
Physical origin
The normal force is a direct result of the Pauli exclusion principle and is not a fundamental force per se: it results from the interactions of the electrons at the surfaces of the objects. The atoms in the two surfaces cannot penetrate one another without a large investment of energy, because there is no low-energy state for which the electron wavefunctions from the two surfaces overlap; thus no microscopic force is needed to prevent this penetration. However, these interactions are often modeled as a van der Waals force, a force that grows very large very quickly as the distance becomes smaller.

On the more macroscopic level, such surfaces can be treated as a single object, and two bodies do not penetrate each other due to the stability of matter, which is again a consequence of the Pauli exclusion principle, but also of the fundamental forces of nature: cracks in the bodies do not widen because of the electromagnetic forces that create the chemical bonds between the atoms; the atoms themselves do not disintegrate because of the electromagnetic forces between the electrons and the nuclei; and the nuclei do not disintegrate because of the nuclear forces.

Practical applications
In an elevator that is either stationary or moving at constant velocity, the normal force on the person's feet balances the person's weight. In an elevator that is accelerating upward, the normal force is greater than the person's ground weight, so the person's perceived weight increases (making the person feel heavier). In an elevator that is accelerating downward, the normal force is less than the person's ground weight, so a passenger's perceived weight decreases. If a passenger were to stand on a weighing scale, such as a conventional bathroom scale, while riding the elevator, the scale would read the normal force it delivers to the passenger's feet, which differs from the person's ground weight whenever the elevator cab accelerates up or down. The weighing scale measures the normal force (which varies as the elevator cab accelerates), not the gravitational force (which does not vary as the cab accelerates). Taking upward as the positive direction, applying Newton's second law and solving for the normal force on a passenger yields
$$N = m(g + a),$$
where a is the acceleration of the cab, positive when the cab accelerates upward.

In a gravitron amusement ride, static friction, caused by and perpendicular to the normal force that the walls exert on the passengers, suspends the passengers above the floor as the ride rotates. In such a scenario, the walls of the ride apply a normal force to the passengers directed toward the center; this normal force supplies the centripetal force that keeps the passengers moving in a circle as the ride rotates. Because of the normal force experienced by the passengers, the static friction between the passengers and the walls of the ride counteracts the pull of gravity on the passengers, suspending them above the ground throughout the duration of the ride. Taking the direction toward the center of the ride as positive, solving for the normal force on a passenger suspended above the ground yields
$$N = \frac{m v^2}{r},$$
where N is the normal force on the passenger, m is the mass of the passenger, v is the tangential velocity of the passenger, and r is the distance of the passenger from the center of the ride. With the normal force known, the static coefficient of friction needed to maintain a net force of zero in the vertical direction can be found from
$$\mu_s = \frac{mg}{N} = \frac{gr}{v^2},$$
where $\mu_s$ is the static coefficient of friction and g is the gravitational field strength.
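To make the elevator and gravitron examples concrete, here is a minimal Python sketch of N = m(g + a) for the elevator and of N = mv²/r together with the minimum μs = gr/v² for the rotating ride. The passenger mass, cab acceleration, speed, and radius are assumed sample values for demonstration only.

```python
G = 9.81  # gravitational field strength, m/s^2

def elevator_normal_force(mass_kg: float, cab_acceleration: float) -> float:
    """Normal force on a passenger's feet; cab_acceleration > 0 means the cab accelerates upward.
    Newton's second law with upward positive: N - m*g = m*a, so N = m*(g + a)."""
    return mass_kg * (G + cab_acceleration)

def gravitron_normal_force(mass_kg: float, speed: float, radius: float) -> float:
    """Normal force from the wall on a passenger; it supplies the centripetal force: N = m*v^2 / r."""
    return mass_kg * speed**2 / radius

def gravitron_min_friction(speed: float, radius: float) -> float:
    """Minimum static friction coefficient so wall friction supports the passenger's weight:
    mu_s * N >= m*g  =>  mu_s >= g*r / v^2 (the mass cancels)."""
    return G * radius / speed**2

if __name__ == "__main__":
    m = 70.0                                      # illustrative passenger mass, kg
    print(elevator_normal_force(m, 0.0))          # stationary cab: ~686.7 N (ordinary weight)
    print(elevator_normal_force(m, 2.0))          # cab accelerating upward at 2 m/s^2: ~826.7 N
    print(gravitron_normal_force(m, 12.0, 5.0))   # v = 12 m/s, r = 5 m: ~2016 N against the wall
    print(gravitron_min_friction(12.0, 5.0))      # minimum mu_s ~ 0.34
```

Note that the required friction coefficient does not depend on the passenger's mass, since both the weight to be supported and the normal force against the wall scale linearly with it.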
Physical sciences
Classical mechanics
Physics